Open standard for authorization
Shadow IT has evolved. Now it’s Shadow SaaS. Shadow AI. And it’s everywhere. In this week's episode of the KuppingerCole Analyst Chat, Matthias welcomes Matthew Gardiner for his first appearance to unpack one of the fastest-growing security domains: SaaS Security Posture Management (SSPM) and why that name may already be too narrow. Today’s organizations run on hundreds of SaaS applications. Many are sanctioned. Many aren’t. Some are connected via OAuth. Others are quietly leaking data through AI tools. And most security teams don’t have full visibility. In this conversation, we explore:
✅ What SSPM actually means (and why the “PM” might be limiting)
✅ How Shadow IT evolved into Shadow SaaS and Shadow AI
✅ The intersection of identity and cybersecurity in SaaS environments
✅ Misconfiguration risks, MFA bypass, OAuth sprawl & SaaS drift
✅ Why continuous monitoring beats periodic audits
✅ CASB vs SSPM vs CNAPP — where the lines blur
✅ The growing governance challenge in AI-powered SaaS
✅ Why SaaS security can’t be ignored anymore
If your organization uses SaaS (spoiler: it does), this discussion is not optional.
The AI Breakdown: Daily Artificial Intelligence News and Discussions
A new Anthropic study shows that AI agents are being used far more conservatively than their capabilities suggest, with short sessions, heavy human oversight, and growing use beyond coding into back office, marketing, sales, and finance. The data highlights that autonomy is shaped as much by trust and interaction design as raw model power. In the headlines: Gemini adds music generation, Anthropic clarifies its OAuth policy, Meta revives its AI smartwatch, Grok expands to 16 debating subagents, and more.
Want to build with OpenClaw? LEARN MORE ABOUT CLAW CAMP: https://campclaw.ai/ Or for enterprises, check out: https://enterpriseclaw.ai/
Brought to you by:
KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts
Mercury - modern banking for business and now personal accounts. Learn more at https://mercury.com/personal-banking
Rackspace Technology - Build, test and scale intelligent workloads faster with Rackspace AI Launchpad - http://rackspace.com/ailaunchpad
Blitzy - Want to accelerate enterprise software development velocity by 5x? https://blitzy.com/
Optimizely Agents in Action - Join the virtual event (with me!) free March 4 - https://www.optimizely.com/insights/agents-in-action/
AssemblyAI - The best way to build Voice AI apps - https://www.assemblyai.com/brief
LandfallIP - AI to Navigate the Patent Process - https://landfallip.com/
Robots & Pencils - Cloud-native AI solutions that power results - https://robotsandpencils.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.
The AI Daily Brief helps you understand the most important news and discussions in AI.
Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Interested in sponsoring the show? sponsors@aidailybrief.ai
Your email gateway isn't enough anymore: attackers are already inside the workspace through OAuth apps, browser extensions, and account takeover. In this episode, Ron sits down with Rajan Kapoor, VP of Security at Material Security, to break down the real risks hiding inside Google Workspace and Microsoft 365. They cover how phishing has evolved into full-blown business email compromise, why malicious OAuth apps are the new favorite attack vector, and what security teams, especially lean ones, can do right now to lock down their cloud workspace. Rajan also drops practical advice on passkeys, document sharing hygiene, and why data lifecycle management is a problem no one is solving well enough.
Impactful Moments
00:00 – Introduction
03:30 – The current state of phishing
05:30 – Outbound email compromise risk
09:30 – OAuth apps as attack vectors
15:00 – AI agents accessing your workspace
16:00 – Prompt injection is the new SQL injection
18:00 – Allow listing apps immediately
24:30 – Google Workspace vs Microsoft 365 security
27:30 – Custom detections require API expertise
28:00 – Why passkeys matter right now
32:00 – Data lifecycle management for shared docs
Links
Connect with our guest, Rajan Kapoor, on LinkedIn: https://www.linkedin.com/in/rajankkapoor/
Learn more about Material Security: https://material.security
___
Become a sponsor of the show to amplify your brand: https://hackervalley.com/work-with-us/
Check out our upcoming events: https://www.hackervalley.com/livestreams
Love Hacker Valley Studio? Pick up some swag: https://store.hackervalley.com
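The "allow listing apps" advice from the episode can be sketched as a simple check of granted OAuth apps against an approved list. This is a minimal illustration: the app inventory, client IDs, and allow-list below are hypothetical, and a real workspace would pull actual grants from the Google Workspace or Microsoft Graph admin APIs.

```python
# Hypothetical inventory of OAuth grants in a workspace; in practice this
# would come from the admin APIs of Google Workspace or Microsoft 365.
granted_apps = [
    {"client_id": "mail-merge-tool", "scopes": ["mail.read"]},
    {"client_id": "unknown-crm-sync", "scopes": ["mail.read", "mail.send"]},
]

# Apps the security team has explicitly approved.
ALLOW_LIST = {"mail-merge-tool"}

def flag_unapproved(apps, allow_list):
    """Return grants whose client_id is not on the allow-list,
    so they can be reviewed or revoked."""
    return [app for app in apps if app["client_id"] not in allow_list]

for app in flag_unapproved(granted_apps, ALLOW_LIST):
    print(f"Review/revoke: {app['client_id']} (scopes: {', '.join(app['scopes'])})")
```

The point of the pattern is the default-deny posture: anything not explicitly approved surfaces for review rather than silently retaining access.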
In this episode of The Dish on Health IT, host Tony Schueth is joined by co-host Alix Goss and special guest Amy Gleason, Strategic Advisor to the Centers for Medicare & Medicaid Services (CMS) and Administrator of the U.S. Department of Government Efficiency (DOGE) Service, for a wide-ranging discussion on how health IT modernization is evolving under a pledge-driven, incentive-backed federal strategy. The conversation begins not with policy, but with lived experience.

From Emergency Room to Interoperability Advocate
Amy shares how her early career as an emergency room nurse exposed the dangers of fragmented information. Providers were expected to make critical decisions without access to complete patient histories, while patients, often in pain or distress, were unrealistically asked to recall complex medical details. That professional frustration became deeply personal when her daughter went more than a year without a diagnosis for a rare autoimmune disease, juvenile dermatomyositis (JDM). Multiple specialists saw pieces of the puzzle, but no one could see the full picture across charts and settings. Amy reflects that if today's AI tools had been applied to her daughter's complete longitudinal record, the condition might have surfaced sooner. That experience shaped her philosophy: technology must converge with policy and trust in ways that tangibly improve care.

Why Pledges Instead of Rules?
Tony presses on a central theme. Amy has argued that we cannot regulate our way to success. Why pursue voluntary pledges instead of federal rulemaking? Amy explains her frustration at returning to government in 2025 to find interoperability policies she helped draft in 2020 still not fully effective until 2027. Seven years is an eternity in technology. Meanwhile, the industry had technically complied with numerous mandates, including Meaningful Use, Cures Act APIs and CMS interoperability rules, yet many workflows still felt broken. In her view, regulation created a floor but not always real transformation. The CMS Health Tech Ecosystem Pledge was launched as a different model: the federal government used its convening power to articulate a clear vision and challenge industry to deliver minimum viable products within six to twelve months rather than years. Initially announced with roughly 60 companies, the pledge initiative has grown to more than 600 participants collaborating in working groups. The three initial patient-focused use cases include:
Improving data interoperability
“Killing the clipboard” through digital identity and QR-based sharing
Leveraging conversational AI and personalized recommendations for chronic conditions such as diabetes and obesity
Amy describes live demonstrations at a Connectathon showing OAuth-enabled data retrieval, QR ingestion into EHR workflows and AI-powered recommendations built on patient data. The goal is not perfection by the first milestone, but real-world minimum viable functionality that can iteratively improve. Alix notes that from the standards community perspective, this approach feels aligned with long-standing calls for industry-driven collaboration, though it remains early to measure widespread impact.

Carrots, Sticks and Rural Health
The discussion turns to incentives. Amy outlines the administration's carrots and sticks strategy:
Stick: enforcement of information blocking, with penalties up to $2 million per occurrence
Carrots: financial incentives such as the $50 billion Rural Health Transformation Program and the CMS ACCESS Model, which pays for technology-enabled outcomes
The Rural Health Transformation Program directs money to states with the expectation that ecosystem-aligned interoperability and app participation be incorporated into funding proposals. CMS retains oversight and clawback authority to ensure funds support rural providers. The ACCESS Model represents a significant shift: technology-enabled care platforms can register as Medicare Part B providers and be paid for measurable outcomes in tracks such as cardiometabolic disease, musculoskeletal conditions and behavioral health. Providers remain in the loop and receive compensation for referral and care plan oversight. Alix underscores that rural providers face steep financial and workforce constraints. Standards participation, implementation and technology upgrades require resources that are often scarce. The success of these incentives will depend on whether they reduce burden rather than add to it.

AI: Evolution, Risk and Reality
AI becomes a central thread of the episode. Amy compares AI adoption to autonomous vehicle models. Some scenarios allow tightly controlled automation, such as medication refills, while others require a human in the loop for higher-risk decisions. She points to a Utah prescription refill pilot as an example of bounded automation, where malpractice coverage and clearly defined use cases mitigate risk. When Tony asks who owns risk in this evolving landscape, Amy emphasizes the need for light but clear regulatory pathways rather than fragmented state-by-state oversight. Patients, she notes, are already there: millions are asking health-related questions weekly through AI tools. The more pressing issue is ensuring those tools are grounded in structured medical data rather than incomplete memory or unverified inputs. She shares a striking story. Her daughter was excluded from a clinical trial due to a misclassification of ulcerative colitis. By uploading her records into an AI model, they identified a more precise diagnosis, microscopic lymphocytic colitis, which did not disqualify her from the trial. For Amy, this demonstrates both the power and inevitability of AI use. Alix adds caution: AI is only as strong as the data beneath it. Dirty, inconsistent and poorly structured data limits performance. Standards and terminologies remain essential to fuel high-fidelity models and safeguard trust.

FHIR, Deregulation and the Data Foundation
The conversation addresses an emerging tension: if regulatory burdens are being reduced, does that signal less need for structured standards like FHIR? Amy candidly admits she initially wondered whether AI might reduce the need for FHIR altogether. After discussions with labs and technologists, she concluded the opposite: standardized data dramatically improves AI performance and reduces error. Deregulation is about removing unnecessary burden, not abandoning foundational data structures. Alix reinforces that FHIR enables discrete, normalized data capture that supports both legacy transactions and AI evolution. While future innovations may emerge, today FHIR remains the backbone for scalable interoperability.

Prior Authorization and HIPAA Modernization
The episode dives into prior authorization modernization across medical and pharmacy domains. Amy notes growing interest among pledge participants in expanding into pharmacy prior authorization testing, diagnostic imaging, real-time benefit checks and bulk FHIR performance testing. Alix provides insight into ongoing work within the Designated Standards Maintenance Organizations to incorporate FHIR-based approaches into HIPAA-named standards, particularly for prior authorization. She highlights testing beyond Connectathons, including implementer communities and real-world pilot efforts. Both stress the importance of public comment periods and industry engagement, describing participation as a civic responsibility for health IT professionals.

Trust as the Core Enabler
The final segment centers on trust. Amy explains that the ecosystem initiative aims to reinforce trust through:
Stronger digital identity verification such as Clear, ID.me and Login.gov
Certification frameworks such as CARIN and DIME for patient-facing apps
A new national provider directory to replace fragmented provider data sources
Transparency dashboards showing data requests, volumes and purpose
Rather than replacing frameworks like TEFCA, she describes the pledge model as an accelerator layered above the regulatory floor. Transparency acts as sunlight, enabling visibility into who is accessing data and for what purpose.

Final Takeaways
In closing, Amy urges providers not to sit on the sidelines. Too often, she says, providers feel change is imposed on them. The pledge environment is designed as an open forum where they can directly shape what works or does not work in real workflows. Alix echoes the call: standards require participation. Organizations must allocate budget and staff to engage, comment and collaborate. It truly takes a village. Tony concludes by framing the episode's core message: regulation establishes baseline expectations, but voluntary movements can demonstrate what is possible before mandates reach the Federal Register. Across pledges, payment reform, AI evolution and trust frameworks, the episode underscores a consistent theme: modernization in health IT depends not only on policy direction, but on shared accountability and active participation from every stakeholder in the ecosystem. Listeners are reminded that POCP is available to help organizations understand federal initiatives, enforcement priorities and their strategic implications. Reach out to us to set up an initial consultation. The episode closes, as always, with the reminder that Health IT is a dish best served hot.
Prefer video? Catch episodes on the POCP YouTube channel
In this episode of the Crazy Wisdom Podcast, host Stewart Alsop sits down with Lars van der Zande, founder and CEO/technical architect of Inkwell Finance, for what Lars describes as his first-ever podcast appearance. The conversation covers a wide range of blockchain infrastructure topics, including Lars's work with the Sui and Solana blockchains, the innovative capabilities of Ika's programmatic wallets and blockchain of signatures, and how Inkwell Finance is building revenue-based financing solutions for on-chain entities—from AI agents to protocols. They explore the evolving landscape of crypto regulation, the merging of traditional finance with blockchain technology, the future of decentralized legal systems, and how the user experience barrier is being lowered through technologies that eliminate constant transaction signing. Lars also discusses Inkwell's embedded financing approach and their pre-seed fundraising round.

Links mentioned:
Inkwell's website: inkwell.finance
Inkwell on Twitter: @__inkwell
Lars on Twitter: @LMVDZande

Timestamps
00:00 Introduction to Inkwell Finance and Technical Architecture
02:06 Understanding Sui and Solana: Blockchain Dynamics
05:55 The Role of Ika in Inkwell Finance
11:51 Leviathan: Revenue Generation and Financing in Crypto
17:38 The Future of AI Agents and Programmatic Wallets
23:23 Smart Contracts: Legal Implications and Future Directions
25:06 The Future of Inkwell Finance
25:42 Decentralization and Its Evolution
27:32 The Merging of Traditional and Crypto Systems
29:33 Global Financial Dynamics and Market Reactions
31:48 The Collapse of Traditional Financial Systems
32:46 Jurisdictional Shifts in the Crypto World
33:59 Legal Systems and Blockchain Integration
35:57 On-Chain Credit and Financial Opportunities
39:29 The Role of AI in Finance
41:30 Learning from Peer-to-Peer Lending History
43:14 Disruption in Insurance and Risk Management
44:54 On-Chain vs Off-Chain Data
46:54 The Evolution of the Internet and Blockchain
49:12 Future Subscription Models in Blockchain

Key Insights
1. Ika's Revolutionary Blockchain Signature Technology: Lars discovered Ika, a blockchain of signatures built on Sui that enables any blockchain transaction to be signed without revealing the underlying message. Using patented 2PC MPC technology, Ika splits key shares across validators and encrypts them in transit, performing complex cryptographic operations that allow smart contracts on Sui to generate signatures for transactions on any other blockchain. This eliminates the need to build separate smart contracts on each blockchain, fundamentally changing how cross-chain interactions work and opening possibilities for truly interoperable decentralized applications.
2. Programmatic Wallets vs Traditional Wallets: Traditional wallets like MetaMask require manual user approval for every transaction through a front-end interface, but Ika's D-wallet introduces programmatic wallets with policy-based controls embedded in smart contracts. These wallets can execute transactions based on predetermined conditions checked against on-chain data like oracle prices, without requiring individual user signatures. For example, a Bitcoin D-wallet can hold native Bitcoin without wrapping or bridging to a custodian, and smart contract policies determine when and how that Bitcoin can be transferred, creating unprecedented security and automation possibilities for decentralized finance.
3. Inkwell's Revenue-Based Financing Model: Inkwell Finance is building Leviathan, a revenue-based financing platform for on-chain entities including protocols, AI agents, and individual traders with verifiable track records. Borrowers receive capital based on their on-chain performance metrics like Sharpe ratio and drawdown, with loan repayment automatically deducted from their revenue stream. The profit split structure allocates approximately 60% to borrowers, 30% to lenders, and 10% split between Inkwell and integrating platforms. This creates a sustainable lending model where flight risk is minimized through D-wallet policy controls that restrict how borrowed capital can be used.
4. Wallet-as-a-Protocol and the Future of User Experience: The crypto industry is moving toward embedded wallet solutions that eliminate the friction of traditional wallet management, with Wallet-as-a-Protocol representing the next evolution beyond services like Privy and Dynamic. Unlike current embedded wallets that lock users into specific applications, Wallet-as-a-Protocol enables single sign-on across multiple applications while users maintain control of their keys. Combined with app-sponsored gas fees, this approach allows non-crypto-native users to interact with blockchain applications without knowing they're using crypto, removing the biggest barrier to mainstream adoption and creating web2-like user experiences on web3 infrastructure.
5. AI Agents as Financial Entities: AI agents are emerging as revenue-generating entities with on-chain transaction histories that create verifiable track records for creditworthiness assessment. Inkwell Finance is specifically targeting this market, recognizing that AI agents will need wallets and capital to operate effectively. The programmatic nature of D-wallets pairs perfectly with AI agents, as policy controls can restrict agent behavior to specific smart contract interactions, preventing unauthorized fund transfers while allowing automated trading or revenue generation. This creates a new category of borrower that operates 24/7 with completely transparent performance metrics, fundamentally different from traditional loan recipients.
6. Cross-Chain Liquidity Without Asset Transfer: Ika's technology enables users to take loans against revenue generated on one blockchain and deploy that capital on entirely different blockchains without moving their original liquidity positions. For instance, someone earning yield on Sui's Fusol protocol could borrow against that revenue stream and deploy capital on Solana opportunities, effectively creating multiple on-chain businesses that generate their own credit scores and revenue to service debt. This ability to read state across different blockchains from within smart contracts opens possibilities for multi-chain strategies that don't require withdrawing capital from productive positions, maximizing capital efficiency across the entire crypto ecosystem.
7. The Convergence of Traditional Finance and Crypto Infrastructure: The regulatory landscape is rapidly evolving with initiatives like the Genius Act and Clarity Act creating frameworks where traditional financial systems merge with crypto infrastructure through mechanisms like stablecoins backed by US treasuries. Companies are increasingly establishing entities in the United States to access capital networks and Delaware's established legal framework while issuing tokens through jurisdictions like Switzerland. This hybrid approach, combined with emerging concepts like Gabriel Shapiro's "cybernetic agreements" that make smart contract parameters legally enforceable in traditional courts, suggests the future isn't pure decentralization but rather a sophisticated integration of on-chain and off-chain legal and financial systems.
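The approximate 60/30/10 profit split described in the financing model above can be sketched as a simple calculation. The percentages and sample revenue figure come from the episode summary; the function name and the even division of the platform share between Inkwell and the integrator are assumptions for illustration.

```python
def split_revenue(revenue: float,
                  borrower_share: float = 0.60,
                  lender_share: float = 0.30,
                  platform_share: float = 0.10) -> dict:
    """Split a revenue payment per the approximate 60/30/10 model,
    with the 10% platform share divided between Inkwell and the
    integrating platform (assumed here to be an even split)."""
    assert abs(borrower_share + lender_share + platform_share - 1.0) < 1e-9
    return {
        "borrower": revenue * borrower_share,
        "lenders": revenue * lender_share,
        "inkwell": revenue * platform_share / 2,     # assumed even split
        "integrator": revenue * platform_share / 2,  # assumed even split
    }

print(split_revenue(10_000.0))
```

In the actual platform, per the summary, repayment would be deducted automatically from the borrower's on-chain revenue stream rather than computed off-chain like this.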
A surprising number of security leaders admit they're flying blind on hardware and firmware. We start by exposing how shared BIOS passwords, slow maintenance cycles, and careless e‑waste practices create avoidable risk, then lay out the fixes: privileged vaulting, disciplined asset disposition, and practical ways to repurpose gear without leaking data. That real-world foundation sets the stage for a focused tour through CISSP Domain 5, Identity and Access Management, built for practitioners who want clarity over jargon.

We break down least privilege in plain terms and show how to reduce the initial friction with cleanly defined roles and entitlement catalogs. From there, we compare RBAC and ABAC: when baseline roles are enough, and when context-aware attributes like device, location, and data sensitivity should drive policy. Authentication gets the same treatment. Multi-factor authentication, biometrics, and phishing-resistant methods raise the bar, while single sign-on and identity federation streamline access across cloud apps using standards like OAuth, OpenID Connect, and SAML. In modern cloud environments, token-based models win for scalability and security, and we explain why.

Governance ties it all together. We walk through identity proofing for solid onboarding, separation of duties to curb fraud, and IGA workflows that make approvals, recertifications, and audits far less painful. Regular access reviews emerge as the unsung hero that prevents privilege creep before it becomes an incident. If you're prepping for the CISSP, or just tightening your IAM program, this episode gives you the why behind the what, with steps you can apply today.

Enjoyed the conversation and want more deep dives? Subscribe, share with a teammate who needs a quick IAM refresher, and leave a review to help others find the show.

Gain exclusive access to 360 FREE CISSP Practice Questions at FreeCISSPQuestions.com and have them delivered directly to your inbox! Don't miss this valuable opportunity to strengthen your CISSP exam preparation and boost your chances of certification success. Join now and start your journey toward CISSP mastery today!
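The RBAC-versus-ABAC contrast discussed above can be made concrete with a minimal policy-decision sketch. The roles, attributes, and rules below are hypothetical examples, not from the episode: an RBAC check consults only the role, while an ABAC check layers context-aware attributes (device, location, data sensitivity) on top of it.

```python
# RBAC baseline: a role maps to a fixed set of permitted actions.
ROLE_PERMISSIONS = {"analyst": {"read"}, "admin": {"read", "write"}}

def rbac_allows(role: str, action: str) -> bool:
    return action in ROLE_PERMISSIONS.get(role, set())

def abac_allows(subject: dict, action: str, resource: dict) -> bool:
    """Start from the role baseline, then apply context-aware rules."""
    if not rbac_allows(subject["role"], action):
        return False
    if resource["sensitivity"] == "high" and not subject["managed_device"]:
        return False  # high-sensitivity data requires a managed device
    if action == "write" and subject["location"] not in resource["allowed_regions"]:
        return False  # writes allowed only from approved regions
    return True

alice = {"role": "admin", "managed_device": False, "location": "US"}
doc = {"sensitivity": "high", "allowed_regions": {"US"}}
print(rbac_allows(alice["role"], "write"))  # True: role alone suffices
print(abac_allows(alice, "write", doc))     # False: unmanaged device blocks it
```

The example shows why ABAC is the natural fit once "who you are" is no longer enough and "from where, on what device, to what data" starts to matter.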
OAuth is a widely used authorization (not authentication) protocol that lets a resource owner grant access to a resource using access tokens. These tokens define access attributes, including scope and length of time. OAuth can be used to grant access to human and non-human entities (for example, AI agents). OAuth is increasingly being abused by...
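The token attributes described above, scope and lifetime, can be illustrated with a small sketch of how a resource server might check a bearer token's claims before granting access. This is a simplified illustration with hypothetical claim values; a real deployment would validate a signed JWT or introspect the token with the authorization server per RFC 7662 rather than trusting a plain dict.

```python
import time

# Hypothetical, already-verified token claims; a real OAuth access token
# would be a signed JWT or an opaque token checked via introspection.
token = {
    "sub": "agent-42",                # the entity the token was issued to
    "scope": "calendar.read",         # what the token permits
    "exp": int(time.time()) + 3600,   # how long it remains valid
}

def token_allows(token: dict, required_scope: str) -> bool:
    """Allow the request only if the token is unexpired and its
    space-delimited scopes include the required one."""
    if token["exp"] <= time.time():
        return False
    return required_scope in token["scope"].split()

print(token_allows(token, "calendar.read"))   # True
print(token_allows(token, "calendar.write"))  # False: scope not granted
```

The same check applies whether the "sub" is a person or a non-human entity such as an AI agent, which is exactly why overly broad scopes and long lifetimes are the attributes attackers look to abuse.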
In this Risky Business News sponsor interview, Catalin Cimpanu talks with Luke Jennings, VP of Research & Development at Push Security, about ConsentFix. It's a new form of email-based social engineering attack used in the wild, an evolution of the ClickFix attack that goes after your identity.
Show notes
ConsentFix: Analysing a browser-native ClickFix-style attack that hijacks OAuth consent grants
ConsentFix debrief: latest community insights, recommendations, and predictions
Luke Jennings, ConsentFix LinkedIn post
Year in Review: How Phishing Attacks Evolved in 2025
Back with five hosts, les Cast Codeurs kick off the year with a big episode full of news and in-depth articles: AI of course and its impact on development practices, Mockito turning a page, some CSS (yes, really), the (non-)mapping of REST APIs to MCP, and a slew of tools for you. Recorded January 9, 2026. Download the episode: LesCastCodeurs-Episode-335.mp3, or watch it on YouTube.

News

Languages
Will 2026 be the year of Java in the terminal? (word is it just might be…) https://xam.dk/blog/lets-make-2026-the-year-of-java-in-the-terminal/
2026: the year of Java in the terminal, catching up with Python, Rust, Go and Node.js.
Java is underrated for CLI applications and TUIs (terminal user interfaces) despite its capabilities.
The old excuses (slow startup, heavy tooling, verbosity, complex distribution) are obsolete thanks to recent advances:
GraalVM Native Image for millisecond startup.
JBang for simplified execution of Java scripts (single files, dependencies) and JARs.
JReleaser for automated multi-platform distribution (Homebrew, SDKMAN, Docker, native images).
Project Loom for easy concurrency with virtual threads.
PicoCLI for argument handling.
The potential goes beyond scripts: building complete, polished TUIs (e.g. dashboards, file managers, AI assistants).
In short: the old excuses no longer hold, thanks to fast startup (GraalVM), lightweight tooling (JBang), simple distribution (JReleaser) and easy concurrency (Loom), and the potential is rich, polished TUI applications.

Ruby 4.0.0 released https://www.ruby-lang.org/en/news/2025/12/25/ruby-4-0-0-released/
Ruby Box (experimental): a new feature that isolates definitions (classes, modules, monkey patches) in separate boxes to avoid global conflicts.
ZJIT: a new next-generation JIT compiler written in Rust, aiming to eventually surpass YJIT (currently experimental).
Ractor improvements: introduction of Ractor::Port for better communication between Ractors, and optimized internal structures to reduce global-lock contention.
Syntax changes: logical operators (||, &&, and, or) at the start of a line can now continue the previous line, making a "fluent" style easier.
Core classes: Set and Pathname become built-in (core) classes instead of living in the standard library.
Improved diagnostics: argument errors (ArgumentError) now show code snippets for both the caller AND the method definition.
Performance: optimized Class#new, faster instance-variable access, and significant garbage collector (GC) improvements.
Cleanup: removal of obsolete behaviors (such as spawning processes via IO.open with |) and an update to Unicode 17.0.
Libraries
Tutorial on building a multi-tenant app with Quarkus and nip.io https://www.the-main-thread.com/p/quarkus-multi-tenant-api-nipio-tutorial
Building a multi-tenant REST API in Quarkus with per-subdomain isolation
Using nip.io for automatic DNS resolution with no local configuration
Extracting the tenant from the HTTP Host header via a JAX-RS filter
A tenant context managed with CDI in request scope for data isolation
An application service managing tenant-specific data in a concurrent Map
An HTML/JS web interface to view and add data per tenant
CORS configuration needed for local development
The pattern acme.127-0-0-1.nip.io resolves automatically to localhost
Full code available on GitHub with curl examples and browser tests
An ideal starting point for SaaS prototyping and multi-tenant testing

Hibernate 7.2 with some interesting improvements https://docs.hibernate.org/orm/7.2/whats-new/%7Bhtml-meta-canonical-link%7D
Read-only replica support (experimental): creates two session factories and swaps at the JDBC level if the driver supports it, with a custom mechanism otherwise.
A read-only child StatelessSession can be opened (sharing the transactional context)
The Hibernate vector module adds binary, float16 and sparse vectors
The SchemaManager can resynchronize sequences against table data
Regular expressions in HQL with like

A new version of Hibernate with Panache for Quarkus https://quarkus.io/blog/hibernate-panache-next/
New experimental extension that unifies Hibernate ORM with Panache and Hibernate Reactive with Panache
Entities can now work in blocking or reactive mode without changing their base type
Support for stateless sessions (StatelessSession) in addition to traditional managed entities
Jakarta Data integration for type-safe queries checked at compile time
Operations are defined in nested repositories rather than static methods
Multiple repositories can be defined for different operation modes on the same entity
The different modes (blocking/reactive, managed/stateless) are accessible via supertype methods
Support for the @Find and @HQL annotations to generate type-safe queries
Repository access via injection or through the generated metamodel
Extension available in the main branch; feedback requested on Zulip or GitHub

Spring Shell 4.0.0 GA released - https://spring.io/blog/2025/12/30/spring-shell-4-0-0-ga-released
Final release of Spring Shell 4.0.0, available on Maven Central
Compatible with the latest versions of Spring Framework and Spring Boot
Reworked command model to simplify building interactive CLI applications
jSpecify integration to improve safety against NullPointerExceptions
More modular architecture allowing better customization and extension
Fully updated documentation and examples to ease getting started
A v4 migration guide is available on the project wiki
Bug fixes improving stability and reliability
Lets you build standalone Java applications runnable with java -jar or as GraalVM native images
An opinionated approach to CLI development while remaining flexible for specific needs

A new version of the library that implements gatherers beyond those in the JDK https://github.com/tginsberg/gatherers4j/releases/tag/v0.13.0
gatherers4j v0.13.0. New gatherers: uniquelyOccurringBy(), moving/runningMedian(), moving/runningMax/Min(). Change: "moving" gatherers now include partial values by default (use excludePartialValues() to disable).

LangChain4j 1.10.0 https://github.com/langchain4j/langchain4j/releases/tag/1.10.0
Introduction of a model catalog for Anthropic, Gemini, OpenAI and Mistral.
Observability and monitoring capabilities for agents.
Support for structured outputs, advanced tools and PDF analysis via URL for Anthropic.
Support for transcription services for OpenAI.
Chat configuration parameters can be passed as method arguments.
New moderation guardrail for incoming messages.
Support for reasoning content from models.
Introduction of hybrid search.
Improvements to the MCP client.

Mockito's lead maintainer steps down after 10 years https://github.com/mockito/mockito/issues/3777
Tim van der Lippe, a major maintainer of Mockito, announces his departure for March 2026, marking a decade of contribution to the project.
One of the main reasons is burnout tied to recent JVM changes (JVM 22+) around agents, which impose heavy technical constraints with no simple alternative offered by the JDK maintainers.
He calls out the lack of support for, and the pressure placed on, open source volunteers during these major technology transitions.
In his view, the growing complexity of supporting Kotlin, which uses the JVM in specific ways, makes the Mockito codebase harder to maintain and less pleasant to evolve. He has lost the fun of it and now prefers to spend his free time on other projects such as Servo, a web engine written in Rust. A transition period is planned until March to hand maintenance over to new contributors.

Infrastructure

Kubernetes' main benefit is not scaling - https://mcorbin.fr/posts/2025-12-29-kubernetes-scale/ Before Kubernetes, running applications in production required multiple complex tools (Ansible, Puppet, Chef) and a lot of manual configuration. Load balancing was done with HAProxy and Keepalived in active/passive mode, requiring manual configuration updates on every instance change. Service discovery and rollouts were orchestrated manually, instance by instance, with no automated reconciliation. Each stack (Java, Python, Ruby) had its own deployment method, with no standardization (rpm, deb, tar.gz, jar). Resource management was manual, often with one application per machine, wasting capacity and complicating maintenance. Kubernetes standardizes everything with a handful of YAML resources (Deployment, Service, Ingress, ConfigMap, Secret) in a simple declarative format. All the critical features are built in: service discovery, load balancing, scaling, storage, firewalling, logging, fault tolerance. The hundreds of shell scripts and Ansible playbooks maintained before were more complex than Kubernetes itself. Kubernetes becomes relevant as soon as you start rebuilding those features by hand, which happens very quickly. The technology is flexible and can handle modern applications as well as legacy monoliths with specific constraints.

Mole
https://github.com/tw93/Mole An all-in-one command-line (CLI) tool to clean up and optimize macOS. Combines the features of popular apps such as CleanMyMac, AppCleaner, DaisyDisk and iStat Menus. Performs deep analysis and removal of caches, log files and browser leftovers. A smart uninstaller that cleanly removes applications and their hidden files (Launch Agents, preferences). An interactive disk-space analyzer to visualize file usage and manage large documents. A real-time dashboard (mo status) to monitor CPU, GPU, memory and network. A developer-specific purge feature to delete build artifacts (node_modules, target, etc.). Optional integration with Raycast or Alfred to launch commands quickly. Simple installation via Homebrew or a curl script.

Hardened Docker images for every developer https://www.docker.com/blog/docker-hardened-images-for-every-developer/ Docker is making its Hardened Images (DHI) free and open source (Apache 2.0 license) for all developers. The images are designed to be minimal, production-ready and secure by default, to counter the explosion of software supply chain attacks. They build on familiar bases such as Alpine and Debian, guaranteeing high compatibility and easy migration. Each image includes a complete, verifiable SBOM (Software Bill of Materials) and SLSA level 3 provenance for full transparency. Using these images dramatically reduces the number of vulnerabilities (CVEs) and the image size (up to 95% smaller). Docker is extending this hardened approach to Helm charts and MCP servers (Mongo, Grafana, GitHub, etc.).
Commercial offerings (DHI Enterprise) remain available for specific needs: critical fixes within 7 days, FIPS/FedRAMP support, or extended lifecycle support (ELS). An experimental Docker AI assistant can analyze existing containers and recommend the corresponding hardened versions. The initiative is backed by major partners such as Google, MongoDB, Snyk and the CNCF.

Web

Masonry layout is landing in the CSS specification and browsers are starting to implement it https://webkit.org/blog/17660/introducing-css-grid-lanes/ It lays HTML elements out in columns one after another: first across the first row, and once the first row is full, each next element goes into the column where it can sit highest, and so on. After the middleware plumbing, masonry for the front end :laughing:

Data and Artificial Intelligence

You shouldn't map REST APIs 1:1 to MCP https://nordicapis.com/why-mcp-shouldnt-wrap-an-api-one-to-one/ The problem: wrapping an API as-is in the MCP (Model Context Protocol) is an anti-pattern. MCP's purpose: designed for AI agents, it should be an intent interface, not an API mirror. Agents understand tasks, not complex API logic (authentication, pagination, orchestration). Consequences of one-to-one mapping: agent confusion, errors, hallucinations; difficulty handling complex orchestrations (several calls for a single action); exposure of the API's weaknesses (heavy schemas, obsolete endpoints); higher maintenance whenever the API changes. The better approach: build MCP tools like SDKs for agents, encapsulating the logic needed to accomplish a specific task. Recommended practices: design around user intents/actions (e.g. "create a project", "summarize a document").
Group calls into single workflows or actions. Use natural language for definitions and names. Limit the exposed API surface for security and clarity. Enforce strict input/output schemas to guide the agent and reduce ambiguity.

Agents in production with AWS - https://blog.ippon.fr/2025/12/22/des-agents-en-production-avec-aws/ AWS re:Invent 2025 heavily featured generative AI and AI agents. An AI agent combines an LLM, a call loop and invocable tools. The Strands Agents SDK eases prototyping with built-in ReAct loops and memory management. Managed MLflow tracks experiments and defines performance metrics. Nova Forge optimizes models by retraining on specific data to reduce cost and latency. Bedrock Agent Core industrializes deployment with a serverless, auto-scaling runtime. Agent Core offers nine pillars, including observability, authentication, a code interpreter and a managed browser. Anthropic's MCP protocol standardizes how tools are supplied to agents. SageMaker AI and Bedrock centralize access to closed source and open source models through a single API. AWS is betting on chatbots evolving into agentic systems optimized with more frugal models.

Debezium 3.4 brings several interesting improvements https://debezium.io/blog/2025/12/16/debezium-3-4-final-released/ Fixed the Oracle low-watermark computation issue that caused performance losses. Fixed heartbeat event emission in the Oracle connector with CTE queries. Better logs to understand active transactions in the Oracle connector. Memory guards to protect against very large database schemas. Support for transforming geometry coordinates, for better handling of spatial data. A Quarkus DevServices extension that automatically starts a database and Debezium in dev mode. OpenLineage integration to trace data lineage and follow data flow through pipelines. Tested compatibility with Kafka Connect 4.1 and Kafka brokers 4.1.

Infinispan 16.0.4 and .5 https://infinispan.org/blog/2025/12/17/infinispan-16-0-4 Spring Boot 4 and Spring 7 supported. Metrics evolutions. Two serialization bugs fixed.

Building a research agent in Java with the Interactions API https://glaforge.dev/posts/2026/01/03/building-a-research-assistant-with-the-interactions-api-in-java/ A Java AI research assistant (Gemini Interactions API), testing the SDK implemented by Guillaume. A four-phase workflow: planning (Gemini Flash + Google Search); research (the "Deep Research" model, as a background task); synthesis (Gemini Pro, executive report); infographic (Nano Banana Pro, from the synthesis). The Interactions API brings server-side state management, background tasks and multimodal responses (images). Highlights: the API's state management (vs. stateless LLMs), and validation that the Java SDK handles complex use cases.

Stephan Janssen (the father of Devoxx) has built an MCP (Model Context Protocol) server based on LSP (Language Server Protocol) so that coding assistants can analyze code by actually understanding it rather than grepping through it https://github.com/stephanj/LSP4J-MCP The identified problem: AI assistants often navigate code with text search (grep-style), which lacks semantic context, generates noise (false positives) and burns an enormous number of tokens for nothing. The LSP4J-MCP solution: a standalone approach that wraps the Eclipse language server (JDTLS) behind the MCP protocol. Main advantage: deep semantic understanding of Java code (types, hierarchies, references) without having to open a heavyweight IDE like IntelliJ. Method comparison: AST: too light (no cross-file understanding).
IntelliJ MCP: powerful, but requires the IDE to be open (resource hungry). LSP4J-MCP: the best of both worlds for terminal, remote (SSH) or CI/CD workflows. Key features: exposes 5 tools to the AI (find_symbols, find_references, find_definition, document_symbols, find_interfaces_with_method). Results: a 100x reduction in tokens used for navigation, and improved precision (distinguishing overloads, scopes, etc.). Availability: the project is open source and available on GitHub for immediate integration (e.g. with Claude Code, Gemini CLI, etc.). Note that Claude Code 2.0.74 added a tool to support LSP ( https://github.com/anthropics/claude-code/blob/main/CHANGELOG.md#2074 ).

Awesome (GitHub) Copilot https://github.com/github/awesome-copilot A community collection of instructions, prompts and configurations to get the most out of GitHub Copilot. Offers specialized "Agents" that integrate with MCP servers to improve specific workflows. Includes targeted prompts for code generation, documentation and solving complex problems. Provides detailed instructions on coding standards and best practices for various frameworks. Offers "Skills" as folders of resources for specialized technical tasks (skills have been available in Copilot for a month: https://github.blog/changelog/2025-12-18-github-copilot-now-supports-agent-skills/ ). Easy installation via a dedicated MCP server, compatible with VS Code and Visual Studio. Encourages community contributions to grow the prompt and agent libraries. Helps boost productivity with pre-configured solutions for many languages and domains. MIT licensed and actively maintained by contributors around the world.
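Several items above revolve around MCP tool design, and the earlier Nordic APIs piece argues for intent-level tools rather than 1:1 endpoint wrappers. A minimal, purely illustrative sketch of that idea follows; every endpoint path and helper name here is invented, and the "API calls" are simulated in memory rather than made over HTTP:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of an intent-level MCP tool: one "create project" action
// that hides three underlying REST calls, instead of mirroring each endpoint
// as its own tool. All endpoint paths and helpers below are made up.
public class CreateProjectTool {
    private final List<String> apiCalls = new ArrayList<>();

    // Stand-ins for real HTTP calls the agent never needs to see individually.
    private String apiCreateProject(String name) {
        apiCalls.add("POST /projects");
        return "prj-1";
    }

    private void apiAddDefaultBoard(String projectId) {
        apiCalls.add("POST /projects/" + projectId + "/boards");
    }

    private void apiInviteOwner(String projectId, String owner) {
        apiCalls.add("POST /projects/" + projectId + "/members");
    }

    // The single tool, named after the user's intent, returning a
    // natural-language result the agent can use directly.
    public String createProject(String name, String owner) {
        String id = apiCreateProject(name);
        apiAddDefaultBoard(id);   // orchestration detail hidden from the agent
        apiInviteOwner(id, owner);
        return "Project '" + name + "' created with id " + id;
    }

    public int callCount() {
        return apiCalls.size();
    }

    public static void main(String[] args) {
        CreateProjectTool tool = new CreateProjectTool();
        System.out.println(tool.createProject("demo", "alice"));
        System.out.println(tool.callCount() + " API calls behind one tool");
    }
}
```

The point is the shape, not the plumbing: the agent sees one task-shaped action with a strict input schema, while pagination, ordering of calls and error handling stay on the server side.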
AI and productivity: a 2025 year in review (Laura Tacho - DX) https://newsletter.getdx.com/p/ai-and-productivity-year-in-review?aid=recNfypKAanQrKszT In 2025, AI-assisted engineering became the norm: roughly 90% of developers use AI tools monthly, and more than 40% daily. Researchers (Microsoft, Google, GitHub) stress that lines of code (LOC) remain a poor impact indicator, because AI generates a lot of code without necessarily delivering more business value. While AI improves individual efficiency, it could hurt collaboration over the long term, as developers spend more time "talking" to the AI than to their colleagues. The developer's identity is shifting from "code producer" to a "director" role: delegating, validating and exercising strategic judgment. AI could accelerate junior developers' growth by forcing them to manage projects and delegate earlier, acting as an "accelerator" rather than making them obsolete. The emphasis is on creativity rather than mere automation, to reimagine how we work and achieve more impactful outcomes. Success in 2026 will depend on companies targeting real bottlenecks (technical debt, documentation, compliance) rather than simply testing every new AI model. The newsletter warns that press headlines often oversimplify AI research, sometimes hiding the crucial nuances of the underlying studies.

In a post on X, a developer describes his advanced use of Claude Code for development, with subagents, slash commands, how to optimize the context, and more.
https://x.com/AureaLibe/status/2008958120878330329?s=20

Tooling

IntelliJ IDEA, thread dumps and Project Loom (virtual threads) - https://blog.jetbrains.com/idea/2025/12/thread-dumps-and-project-loom-virtual-threads/ Java virtual threads improve hardware utilization for parallel I/O operations with few code changes. A server can now handle millions of threads instead of a few hundred. Existing tools struggle to display and analyze millions of simultaneous threads. Asynchronous debugging is complex because the scheduler and the worker run in different threads. Thread dumps remain essential for diagnosing deadlocks, blocked UIs and thread leaks. Netflix discovered a virtual-thread-related deadlock by analyzing a heap dump; the bug was fixed in Java 25, but that was a high-wire act. IntelliJ IDEA has supported virtual threads natively since their release, including display of acquired locks. IntelliJ IDEA can open thread dumps generated by other tools such as jcmd. Support also extends to Kotlin coroutines in addition to virtual threads.

A few notes on IntelliJ IDEA 2025.3 https://blog.jetbrains.com/idea/2025/12/intellij-idea-2025-3/ A unified distribution bundling more free features. Improved command completion in the IDE. New features for the Spring debugger. The Islands theme becomes the default. Full support for Spring Boot 4 and Spring Framework 7. Java 25 compatibility. Support for Spring Data JDBC and Vitest 4. Native support for Junie and Claude Agent for AI. Transparent AI quotas and an upcoming Bring Your Own Key option. Stability, performance and user-experience fixes.

Lots of small online tools for developers https://blgardner.github.io/prism.tools/ Password, CSS gradient and QR code generation, Base64 and JWT encoding/decoding, JSON formatting, and more.
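Among the utilities listed above is JWT decoding, which needs no special tooling: a JWT is just three Base64URL segments (header.payload.signature), and the payload is plain JSON once decoded. A minimal sketch with the JDK's own java.util.Base64; the token built here is a made-up, unsigned example, not a real credential:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Peeking at a JWT payload by hand: split on '.', Base64URL-decode the
// middle segment. No signature verification is performed here.
public class JwtPeek {
    public static String decodePayload(String jwt) {
        String payload = jwt.split("\\.")[1];                 // middle segment
        byte[] json = Base64.getUrlDecoder().decode(payload); // accepts unpadded input
        return new String(json, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Build a throwaway unsigned token for demonstration.
        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        String header = enc.encodeToString("{\"alg\":\"none\"}".getBytes(StandardCharsets.UTF_8));
        String payload = enc.encodeToString("{\"sub\":\"demo\"}".getBytes(StandardCharsets.UTF_8));
        String jwt = header + "." + payload + ".";

        System.out.println(decodePayload(jwt)); // prints {"sub":"demo"}
    }
}
```

Note the URL-safe decoder: standard Base64 uses `+` and `/`, while JWT segments use `-` and `_`, so `Base64.getDecoder()` would reject many real tokens.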
resumectl - Your resume as code https://juhnny5.github.io/resumectl/ A command-line (CLI) tool written in Go that generates a resume from a YAML file. Exports to several formats: PDF, HTML, or direct display in the terminal. Offers 5 built-in themes (Modern, Classic, Minimal, Elegant, Tech), customizable with specific colors. An init feature (resumectl init) can automatically import data from LinkedIn and GitHub (most-starred projects). Supports photos with black-and-white filter or shape (round/square) options. Includes a "server" mode (resumectl serve) to preview changes live in a local browser. Ships as a single binary with no complex external dependencies for the templates.

mactop - A "top"-style monitor for Apple Silicon https://github.com/metaspartan/mactop A command-line (TUI) monitoring tool built specifically for Apple Silicon chips (M1, M2, M3, M4, M5). Tracks CPU (E-cores and P-cores), GPU and ANE (Neural Engine) usage in real time. Displays power draw (wattage) for the system, CPU, GPU and DRAM. Provides SoC temperatures, GPU frequencies and overall thermal state. Monitors RAM and swap usage, plus network and disk (I/O) activity. Offers 10 different layouts and several customizable color themes. Does not require sudo, as it relies on Apple's native APIs (SMC, IOReport, IOKit). Includes a detailed process list (similar to htop) with the ability to kill processes directly from the interface. Offers a "headless" mode to export metrics as JSON and an optional Prometheus server. Written in Go with CGO and Objective-C components.
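The optional Prometheus server mentioned above serves the standard text exposition format: one `name{labels} value` line per sample, which is easy to consume from any language. A simplified parsing sketch (it ignores labels and assumes no spaces inside label values); the metric name used is invented for illustration, and mactop's real metric names may differ:

```java
import java.util.AbstractMap;
import java.util.Map;

// Minimal parser for one line of the Prometheus text exposition format,
// i.e. "metric_name{label=\"x\"} 7.5" or "metric_name 7.5".
// Simplification: labels are skipped, and label values must not contain spaces.
public class PromLineParser {
    public static Map.Entry<String, Double> parse(String line) {
        int brace = line.indexOf('{');
        int space = line.lastIndexOf(' ');
        String name = (brace >= 0) ? line.substring(0, brace) : line.substring(0, space);
        double value = Double.parseDouble(line.substring(space + 1));
        return new AbstractMap.SimpleEntry<>(name, value);
    }

    public static void main(String[] args) {
        // Hypothetical sample line; not asserting this is a real mactop metric.
        Map.Entry<String, Double> sample = parse("cpu_power_watts{chip=\"m3\"} 7.5");
        System.out.println(sample.getKey() + " = " + sample.getValue());
    }
}
```

For production use, a real Prometheus client library handles escaping, timestamps and histogram types; the point here is only that the format is line-oriented and trivial to inspect.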
Goodbye direnv, hello mise https://codeka.io/2025/12/19/adieu-direnv-bonjour-mise/ The author replaces his usual tools (direnv, asdf, task, just) with a single versatile tool written in Rust: mise. mise has three main roles: package manager (languages and tools), environment variable manager, and task runner. Unlike direnv, it can manage aliases and uses a structured configuration file (mise.toml) rather than shell scripting. Configuration is hierarchical, allowing settings to be overridden per directory, with a "trust" system for security. A highlighted killer feature is secrets management: mise integrates with age to encrypt secrets (via SSH keys) directly in the configuration file. The tool supports a huge list of languages and tools through an internal registry and plugins (compatible with the asdf ecosystem). It streamlines the development workflow by grouping tool installation and task automation in a single file. The author concludes on the tool's power, flexibility and excellent performance after a few hours of testing.

Claude Code v2.1.0 https://github.com/anthropics/claude-code/blob/main/CHANGELOG.md#210 Hot reload of skills: changes to skills in ~/.claude/skills are now applied instantly without restarting the session. Subagents and forks: support for running skills and slash commands in a forked subagent context via context: fork. Language settings: a new language setting to configure the default response language (e.g. language: "french"). Terminal improvements: Shift+Enter now works natively in several terminals (iTerm2, WezTerm, Ghostty, Kitty) without manual configuration.
Security and bug fixes: fixed a flaw where sensitive data (API keys, OAuth tokens) could appear in debug logs. New slash commands: /teleport and /remote-env for claude.ai subscribers, to manage remote sessions. Plan mode: the /plan shortcut enables plan mode directly from the prompt, and the permission request on entering this mode has been removed. Vim and navigation: many Vim motions added (text objects, f/F/t/T motion repeats, indentation, etc.). Performance: optimized startup time and terminal rendering for Unicode/emoji characters. gitignore handling: the respectGitignore setting in settings.json controls the behavior of the @-mention file picker.

Methodologies

200 production deployments a day, even on Fridays: lessons learned https://mcorbin.fr/posts/2025-03-21-deploy-200/ Deploying frequently, including on Fridays, is a marker of technical maturity and raises overall productivity. Technical excellence is an essential strategic asset for shipping quality products quickly. A pragmatic service-oriented architecture (SOA) makes independent deployments easier and reduces cognitive load. Service isolation is crucial: a developer must be able to test their service locally without depending on the entire infrastructure. Automation via Kubernetes and a GitOps approach with ArgoCD enables continuous, safe deployments. Feature flags and a solid permissions system decouple the technical deployment from the functional activation for users. Developer autonomy is reinforced by self-service tools (an in-house CLI) for managing infrastructure and diagnosing incidents without bottlenecks.
A culture of observability, built in from design time, enables fast detection of and reaction to anomalies in production. Accepting failure as inevitable leads to more resilient systems that can recover automatically.

"Vibe Coding" vs "Prompt Engineering": AI and the future of software development https://www.romenrg.com/blog/2025/12/25/vibe-coding-vs-prompt-engineering-ai-and-the-future-of-software-development/ In 2025, AI went from experiment to essential infrastructure for software development. AI does not replace engineers; it amplifies their skills, their judgment and the quality of their thinking. The article distinguishes "Vibe Coding" (fast, intuitive, ideal for prototypes) from "Prompt Engineering" (deliberate, constrained, necessary for maintainable systems). Context is crucial ("Context Engineering"): AI becomes truly powerful when connected to real systems (GitHub, Jira, etc.) through protocols such as MCP. Specialized agents (RFC writing, code review, architecture) beat generic models for better results. The "Technical Product Manager" engineer emerges, able to do the work of a small team alone thanks to AI, provided they master the technical fundamentals. The major risk: AI lets you go very fast in the wrong direction if human judgment and experience are lacking. The overall bar is rising: solid technical foundations matter more than ever to avoid rapidly accumulating technical debt.

A code review party of one (Kent Beck)!
https://tidyfirst.substack.com/p/party-of-one-for-code-review?r=64ov3&utm_campaign=post&utm_medium=web&triedRedirect=true Traditional code review, inherited from IBM's formal inspections, is running out of steam: it has become too slow and asynchronous for the pace of modern development. With the arrival of AI ("the genie"), code is produced faster than humans can review it, creating a major bottleneck. Code review must evolve toward two new priorities: a sanity check that the AI actually did what it was asked, and control of structural drift in the codebase. Keeping the structure healthy matters not only for future human developers, but also so the AI can keep understanding and modifying the code effectively without losing context. Kent Beck experiments with automated tools (such as CodeRabbit) to get summaries and architecture diagrams, keeping a global awareness of fast-moving changes. Even though automated tools help, pair programming remains irreplaceable for the richness of the exchange and the beneficial social pressure it puts on one's thinking. Solo code review is not an end in itself, but a necessary adaptation when working alone with augmented code-generation tools.

Law, society and organization

Lego launches LEGO Smart Play, with Bricks, Smart Tags and Smart Minifigures for new interactive LEGO builds https://www.lego.com/fr-fr/smart-play LEGO SMART Play: technology that reacts to children's play. Three key elements. SMART Brick: a 2x4 LEGO "brain" brick with an accelerometer, reactive lights, a color detector and a sound synthesizer, reacting to movement (holding, turning, tapping). SMART Tags: small smart pieces that tell the SMART Brick its role (e.g. helicopter, car) and which sounds to produce.
They trigger sounds, mini-games and secret missions. SMART Minifigures: activated near a SMART Brick, they reveal unique personalities (sounds, moods, reactions) through it and encourage imagination. How it works: the SMART Brick detects SMART Tags and SMART Minifigures and reacts to movement with dynamic lights and sounds. Compatibility: assembles with classic LEGO bricks. Goal: create interactive, unique and unlimited play experiences.

Conferences

The conference list comes from the Developers Conferences Agenda/List by Aurélie Vache and contributors:
January 14-17, 2026: SnowCamp 2026 - Grenoble (France)
January 22, 2026: DevCon #26: security / post-quantum / hacking - Paris (France)
January 28, 2026: Software Heritage Symposium - Paris (France)
January 29-31, 2026: Epitech Summit 2026 - Paris - Paris (France)
February 2-5, 2026: Epitech Summit 2026 - Moulins - Moulins (France)
February 3, 2026: Cloud Native Days France 2026 - Paris (France)
February 3-4, 2026: Epitech Summit 2026 - Lille - Lille (France)
February 3-4, 2026: Epitech Summit 2026 - Mulhouse - Mulhouse (France)
February 3-4, 2026: Epitech Summit 2026 - Nancy - Nancy (France)
February 3-4, 2026: Epitech Summit 2026 - Nantes - Nantes (France)
February 3-4, 2026: Epitech Summit 2026 - Marseille - Marseille (France)
February 3-4, 2026: Epitech Summit 2026 - Rennes - Rennes (France)
February 3-4, 2026: Epitech Summit 2026 - Montpellier - Montpellier (France)
February 3-4, 2026: Epitech Summit 2026 - Strasbourg - Strasbourg (France)
February 3-4, 2026: Epitech Summit 2026 - Toulouse - Toulouse (France)
February 4-5, 2026: Epitech Summit 2026 - Bordeaux - Bordeaux (France)
February 4-5, 2026: Epitech Summit 2026 - Lyon - Lyon (France)
February 4-6, 2026: Epitech Summit 2026 - Nice - Nice (France)
February 5, 2026: Web Days Convention - Aix-en-Provence (France)
February 12, 2026: Strasbourg Craft #1 - Strasbourg (France)
February 12-13, 2026: Touraine Tech #26 - Tours (France)
February 19, 2026: ObservabilityCON on the Road - Paris (France)
March 6, 2026: WordCamp Nice 2026 - Nice (France)
March 18-19, 2026: Agile Niort 2026 - Niort (France)
March 20, 2026: Atlantique Day 2026 - Nantes (France)
March 26, 2026: Data Days Lille - Lille (France)
March 26-27, 2026: SymfonyLive Paris 2026 - Paris (France)
March 26-27, 2026: REACT PARIS - Paris (France)
March 27-29, 2026: Shift - Nantes (France)
March 31, 2026: ParisTestConf - Paris (France)
April 1, 2026: AWS Summit Paris - Paris (France)
April 2, 2026: Pragma Cannes 2026 - Cannes (France)
April 9-10, 2026: AndroidMakers by droidcon - Paris (France)
April 16-17, 2026: MiXiT 2026 - Lyon (France)
April 22-24, 2026: Devoxx France 2026 - Paris (France)
April 23-25, 2026: Devoxx Greece - Athens (Greece)
April 24-25, 2026: Faiseuses du Web 5 - Dinan (France)
May 6-7, 2026: Devoxx UK 2026 - London (UK)
May 22, 2026: AFUP Day 2026 Lille - Lille (France)
May 22, 2026: AFUP Day 2026 Paris - Paris (France)
May 22, 2026: AFUP Day 2026 Bordeaux - Bordeaux (France)
May 22, 2026: AFUP Day 2026 Lyon - Lyon (France)
May 29, 2026: NG Baguette Conf 2026 - Paris (France)
June 5, 2026: TechReady - Nantes (France)
June 5, 2026: Fork it! - Rouen - Rouen (France)
June 6, 2026: Polycloud - Montpellier (France)
June 11-12, 2026: DevQuest Niort - Niort (France)
June 11-12, 2026: DevLille 2026 - Lille (France)
June 12, 2026: Tech F'Est 2026 - Nancy (France)
June 17-19, 2026: Devoxx Poland - Krakow (Poland)
June 17-20, 2026: VivaTech - Paris (France)
July 2, 2026: Azur Tech Summer 2026 - Valbonne (France)
July 2-3, 2026: Sunny Tech - Montpellier (France)
July 3, 2026: Agile Lyon 2026 - Lyon (France)
August 2, 2026: 4th Tech Summit on Artificial Intelligence & Robotics - Paris (France)
September 4, 2026: JUG Summer Camp 2026 - La Rochelle (France)
September 17-18, 2026: API Platform Conference 2026 - Lille (France)
September 24, 2026: PlatformCon Live Day Paris 2026 - Paris (France)
October 1, 2026: WAX 2026 - Marseille (France)
October 1-2, 2026: Volcamp - Clermont-Ferrand (France)
October 5-9, 2026: Devoxx Belgium - Antwerp (Belgium)

Contact us

To react to this episode, come discuss on the Google group https://groups.google.com/group/lescastcodeurs Reach us on X/Twitter https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com Record a crowdcast or ask a crowdquestion Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs All episodes and all the info at https://lescastcodeurs.com/
SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
Microsoft Patch Tuesday January 2026 Microsoft released patches for 113 vulnerabilities. This includes one already exploited vulnerability, one that was made public before today, and eight critical vulnerabilities. https://isc.sans.edu/diary/January%202026%20Microsoft%20Patch%20Tuesday%20Summary/32624 Adobe Patches Adobe released patches for five products. The code execution vulnerabilities in ColdFusion and Acrobat Reader deserve special attention. https://helpx.adobe.com/security.html Fortinet Patches Fortinet patched two products today, one suffering from an SSRF vulnerability. https://fortiguard.fortinet.com/psirt/FG-IR-25-783 https://fortiguard.fortinet.com/psirt/FG-IR-25-084 ConsentFix: Analysing a browser-native ClickFix-style attack that hijacks OAuth consent grants Attackers are tricking victims into copy/pasting OAuth URLs, including credentials, to a fake CAPTCHA https://pushsecurity.com/blog/consentfix
SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
n8n Supply Chain Attack: Malicious npm packages were used to attempt to obtain users' OAuth credentials for npm. https://www.endorlabs.com/learn/n8mare-on-auth-street-supply-chain-attack-targets-n8n-ecosystem
Gogs 0-Day Exploited in the Wild: A flaw in Gogs, unpatched at the time, was exploited to compromise git repos. https://www.wiz.io/blog/wiz-research-gogs-cve-2025-8110-rce-exploit
Telegram Proxy Link Abuse: Telegram proxy links have been abused to deanonymize users. https://x.com/GangExposed_RU/status/2009961417781457129
Instagram denies breach post-data leak
Sweden detains consultant suspected of spying
n8n supply chain attack steals OAuth tokens
Thanks to our episode sponsor, ThreatLocker Want real Zero Trust training? Zero Trust World 2026 delivers hands-on labs and workshops that show CISOs exactly how to implement and maintain Zero Trust in real environments. Join us March 4-6 in Orlando, plus a live CISO Series episode on March 6. Get $200 off with ZTWCISO26 at ztw.com.
ConsentFix: Analysing a browser-native ClickFix-style attack that hijacks OAuth consent grants
Because… it's episode 0x692! Shameless plug
February 25-26, 2026 - SéQCure 2026 CfP
March 31 to April 2, 2026 - Forum INCYBER Europe 2026
April 14-17, 2026 - Botconf 2026
April 28-29, 2026 - Cybereco Cyberconférence 2026
May 9-17, 2026 - NorthSec 2026
June 3-5, 2026 - SSTIC 2026
September 19, 2026 - BSides Montréal
Description
Introduction
This second technical podcast episode with Charles F. Hamilton takes a deep dive into evasion techniques against EDR (Endpoint Detection and Response) solutions and the strategies red teamers can use to get around these detection systems. The discussion reveals that despite technological advances, EDRs remain vulnerable to relatively simple techniques once you understand their detection mechanisms.
The limits of EDR detection
Network correlation and named pipes: A concrete example illustrates the weaknesses of modern EDRs: a malicious executable that communicates with the internet while performing reconnaissance on the internal network. "Top tier" EDRs generally detect this abnormal activity through machine learning, identifying that a single process is communicating both externally and with the local network over SMB, Kerberos, or other protocols. The workaround is elegant: use Windows named pipes. This native feature enables inter-process communication. By splitting the work between two independent processes - one handling external communications, the other internal reconnaissance - and having them talk over named pipes, you completely break the machine learning detection chain. This technique, taught in red team trainings for eight years, remains effective.
Signatures in disguise: Paradoxically, despite their claims, EDRs still largely operate on signature-based principles.
The difference from traditional antivirus lies mainly in where they apply that detection - not only on disk, but also in memory and at the behavioral level. The trade-off between false positives and detection remains delicate: generating 1,500 alerts per day would lead to alert fatigue and render the system useless.
Obfuscation and evasion techniques
Intelligent randomization: To evade static detection, obfuscation has to be thought through. A common trap: generating random variable names of fixed length (for example, always 16 characters). Yara rules can detect that pattern. The fix is to introduce randomness into the randomness: use variable lengths (between 6 and 22 characters) and concatenate dictionary words rather than purely random strings.
Memory cleanup: Obfuscation doesn't stop at execution. Even after decryption in memory, artifacts remain. For example, Cobalt Strike leaves recognizable patterns in the first bytes of its shellcode. The recommended strategy uses several execution threads: one to decrypt and launch the shellcode, another to wipe the intermediate variables from memory. Although EDRs don't scan memory continuously (it would be too costly in performance terms), these artifacts remain detectable.
Kernel-level protection
Protected Process Light (PPL): Microsoft introduced PPL to protect critical processes such as LSASS. Even with system privileges, an attacker cannot access these processes. The problem: the kernel remains the ultimate trust anchor. Once an attacker achieves code execution at the kernel level - via vulnerable drivers, for example - all PPL protections fall.
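The variable-length, dictionary-word randomization described in this episode can be sketched in Python. This is illustrative only: the 6-22 character range comes from the discussion, while the word list and trimming details are assumptions.

```python
import random

# Small illustrative word list; a real generator would draw from a much
# larger dictionary so names look like plausible developer identifiers.
WORDS = ["update", "cache", "session", "handler", "buffer", "config", "token", "worker"]

def random_identifier(min_len=6, max_len=22):
    """Build an identifier of *variable* length by concatenating dictionary
    words, avoiding the fixed-length patterns that Yara rules can key on."""
    target = random.randint(min_len, max_len)
    name = ""
    while len(name) < target:
        name += random.choice(WORDS).capitalize()
    return name[:target]  # trim the last word to hit the chosen length

names = [random_identifier() for _ in range(10)]
```

Because every run picks a fresh target length and word sequence, no fixed byte-length or character-class pattern repeats across generated samples.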
Anti-tampering techniques: The "EDR Freeze" technique illustrates this reality: using ProcDump (a legitimate Windows tool), one can create a memory dump of an EDR process, which pauses it. By then killing ProcDump before it finishes, the EDR process stays paused indefinitely, without triggering a tampering alert since it was never modified.
Cloud and new vulnerabilities
Moving to the cloud simply relocates the problems. Traditional attacks targeted the on-premises domain admin; today, with multi-factor authentication, attackers use device code phishing or malicious third-party applications to obtain valid OAuth tokens. Once those tokens are in hand, escalation to global admin becomes possible. The difficulty: no EDR can monitor these attacks, since they take place from the attacker's machine. The only visibility comes from what Microsoft chooses to share, often behind additional paywalls. Companies spent 20 years mastering Active Directory and on-premises security tooling, but are starting from scratch in the cloud with immature tools.
Defensive recommendations
Simple but effective configurations: Several basic measures remain underused: block PowerShell for non-technical users; disable the Run dialog (Windows+R) for 99% of users; remove MSHTA.exe via GPO (there is no legitimate need for HTA files); restrict Office scripts by default. These measures would eliminate the majority of commodity malware attacks, which only work because companies haven't closed these basic access vectors.
The irreplaceable human factor: EDRs excel against mass-market malware but struggle against targeted attacks.
AI and agents will not replace human analysts capable of: active threat hunting; putting alerts in context (why would a non-technical user launch PowerShell?); spotting anomalies in network traffic (new domains, repetitive POST request patterns); telling the full story of an intrusion by correlating events.
Network detection: NDR/XDR products are starting to fill this gap, but remain embryonic. Network detection should identify: never-before-seen domains; C2 communication patterns (regular POST requests with jitter); authentication anomalies; traffic unusual for a given user profile.
Conclusion
Attacker sophistication remains limited because they don't need more yet - too many environments are still misconfigured. Companies invest heavily in EDR but neglect basic configuration and the human factor. History is repeating itself with cloud and AI: rather than fixing the fundamental problems, responsibility is shifted onto new tools. Real security requires deep technical understanding, rigorous configuration, and above all, competent analysts to interpret the signals and tell the story of incidents.
Contributors: Nicolas-Loïc Fortin, Charles F. Hamilton
Credits: Editing by Intrasecure inc. Virtual studio by Riverside.fm
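Windows named pipes, the IPC mechanism this episode describes for splitting external communications from internal reconnaissance, are not exposed in Python's standard library, so this sketch uses the POSIX analogue (`os.mkfifo`) to show the two-worker split. The path and message are invented, and the "workers" are stand-ins that only pass a string.

```python
import os
import tempfile
import threading

# POSIX analogue of the Windows named-pipe pattern: one worker stands in
# for the "internal reconnaissance" process, the other for the "external
# communications" process, and they exchange results over a named pipe
# instead of one process doing both (the behavior ML-based EDRs flag).
pipe_path = os.path.join(tempfile.mkdtemp(), "demo_pipe")
os.mkfifo(pipe_path)

results = []

def recon_worker():
    # Would enumerate internal hosts; here it just reports one finding.
    with open(pipe_path, "w") as pipe:
        pipe.write("host=10.0.0.5 service=smb\n")

def comms_worker():
    # Would relay findings externally; here it just reads the pipe.
    with open(pipe_path) as pipe:
        results.append(pipe.readline().strip())

t1 = threading.Thread(target=recon_worker)
t2 = threading.Thread(target=comms_worker)
t1.start(); t2.start()
t1.join(); t2.join()
```

Opening a FIFO blocks until both a reader and a writer are attached, so the two threads rendezvous on the pipe just as two independent processes would.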
Open Tech Talks: Technology worth Talking | Blogging | Lifestyle
In this episode of Open Tech Talks, host Kashif Manzoor sits down with Bobbie Chen, a product manager working at the intersection of fraud prevention, cybersecurity, and AI agent identification in Silicon Valley. As generative AI and large language models rapidly move from experimentation into real products, organizations are discovering a new reality. The same tools that make building software easier also make abuse, fraud, and attacks easier. Vibe coding, AI agents, and LLM-powered workflows are accelerating innovation, but they are also lowering the barrier for bad actors. This conversation breaks down why security, identity, and access control matter more than ever in the age of LLMs, especially as AI systems begin to touch authentication, customer data, financial workflows, and enterprise knowledge. Bobbie shares practical insights from real-world security and fraud scenarios, explaining why many AI risks are not entirely new but become more dangerous when speed, automation, and scale increase. The episode explores how organizations can adopt AI responsibly without bypassing decades of hard-earned security lessons. From bot abuse and credit farming to identity-aware AI systems and OAuth-based access control, this discussion helps listeners understand where AI changes the threat model and where it doesn't. This is not a hype-driven episode. It is a grounded, experience-backed conversation for professionals who want to build, deploy, and scale AI systems without creating invisible security debt. Episode # 177 Today's Guest: Bobbie Chen, Product Manager, Fraud and Security at Stytch Bobbie is a product manager at Stytch, where he helps organizations like Calendly and Replit fight against fraud and abuse. 
LinkedIn: Bobbie Chen
What Listeners Will Learn:
How LLMs and AI agents change the economics of fraud and abuse, making attacks cheaper, faster, and more customized
Why vibe coding is powerful for experimentation, but risky when used without security review in production systems
The difference between exploring AI ideas and asking users to trust you with sensitive data
Standard security blind spots in AI-powered apps, especially around authentication, parsing, and edge cases
Why organizations should not give AI systems blanket access to enterprise data
How identity-aware AI systems using OAuth and scoped access reduce risk in RAG and enterprise search
Why many AI security failures are process and organizational problems, not tooling problems
How fraud patterns like AI credit farming and automated abuse are emerging at scale
Why security teams must shift from being gatekeepers to continuous partners in AI adoption
How professionals in security, product, and engineering can stay current as AI threats evolve
Resources: Bobbie Chen
The two blogs I mentioned:
Simon Willison: https://simonwillison.net
Drew Breunig: https://www.dbreunig.com
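The identity-aware retrieval pattern mentioned in this episode, limiting what a RAG system can surface to what the caller's OAuth scopes allow, can be sketched as follows. The scope names, documents, and `retrieve` function are all invented for illustration.

```python
# Each document is tagged with the OAuth scope required to read it; the
# retrieval layer filters by the scopes actually granted to the caller's
# token, instead of searching everything the service account can see.
DOCS = [
    {"id": 1, "text": "Quarterly revenue forecast", "scope": "finance:read"},
    {"id": 2, "text": "Engineering onboarding guide", "scope": "eng:read"},
    {"id": 3, "text": "Payroll export procedure", "scope": "hr:read"},
]

def retrieve(query: str, granted_scopes: set) -> list:
    """Return only matching documents the caller's token is scoped for."""
    visible = [d for d in DOCS if d["scope"] in granted_scopes]
    return [d for d in visible if query.lower() in d["text"].lower()]

# A token scoped only to engineering never surfaces finance or HR
# content, even when the query text would otherwise match.
eng_hits = retrieve("guide", {"eng:read"})
leak_attempt = retrieve("payroll", {"eng:read"})
```

Filtering before retrieval (rather than asking the model to withhold results) keeps the access decision deterministic and outside the LLM's control.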
One year ago, Anthropic launched the Model Context Protocol (MCP)—a simple, open standard to connect AI applications to the data and tools they need. Today, MCP has exploded from a local-only experiment into the de facto protocol for agentic systems, adopted by OpenAI, Microsoft, Google, Block, and hundreds of enterprises building internal agents at scale. And now, MCP is joining the newly formed Agentic AI Foundation (AAIF) under the Linux Foundation, alongside Block's Goose coding agent, with founding members spanning the biggest names in AI and cloud infrastructure. We sat down with David Soria Parra (MCP lead, Anthropic), Nick Cooper (OpenAI), Brad Howes (Block / Goose), and Jim Zemlin (Linux Foundation CEO) to dig into the one-year journey of MCP—from Thanksgiving hacking sessions and the first remote authentication spec to long-running tasks, MCP Apps, and the rise of agent-to-agent communication—and the behind-the-scenes story of how three competitive AI labs came together to donate their protocols and agents to a neutral foundation, why enterprises are deploying MCP servers faster than anyone expected (most of it invisible, internal, and at massive scale), what it takes to design a protocol that works for both simple tool calls and complex multi-agent orchestration, how the foundation will balance taste-making (curating meaningful projects) with openness (avoiding vendor lock-in), and the 2025 vision: MCP as the communication layer for asynchronous, long-running agents that work while you sleep, discover and install their own tools, and unlock the next order of magnitude in AI productivity. 
We discuss: The one-year MCP journey: from local stdio servers to remote HTTP streaming, OAuth 2.1 authentication (and the enterprise lessons learned), long-running tasks, and MCP Apps (iframes for richer UI) Why MCP adoption is exploding internally at enterprises: invisible, internal servers connecting agents to Slack, Linear, proprietary data, and compliance-heavy workflows (financial services, healthcare) The authentication evolution: separating resource servers from identity providers, dynamic client registration, and why the March spec wasn't enterprise-ready (and how June fixed it) How Anthropic dogfoods MCP: internal gateway, custom servers for Slack summaries and employee surveys, and why MCP was born from "how do I scale dev tooling faster than the company grows?" Tasks: the new primitive for long-running, asynchronous agent operations—why tools aren't enough, how tasks enable deep research and agent-to-agent handoffs, and the design choice to make tasks a "container" (not just async tools) MCP Apps: why iframes, how to handle styles and branding, seat selection and shopping UIs as the killer use case, and the collaboration with OpenAI to build a common standard The registry problem: official registry vs. 
curated sub-registries (Smithery, GitHub), trust levels, model-driven discovery, and why MCP needs "npm for agents" (but with signatures and HIPAA/financial compliance) The founding story of AAIF: how Anthropic, OpenAI, and Block came together (spoiler: they didn't know each other were talking to Linux Foundation), why neutrality matters, and how Jim Zemlin has never seen this much day-one inbound interest in 22 years — David Soria Parra (Anthropic / MCP) MCP: https://modelcontextprotocol.io https://uk.linkedin.com/in/david-soria-parra-4a78b3a https://x.com/dsp_ Nick Cooper (OpenAI) X: https://x.com/nicoaicopr Brad Howes (Block / Goose) Goose: https://github.com/block/goose Jim Zemlin (Linux Foundation) LinkedIn: https://www.linkedin.com/in/zemlin/ Agentic AI Foundation https://agenticai.foundation Chapters 00:00:00 Introduction: MCP's First Year and Foundation Launch 00:01:17 MCP's Journey: From Launch to Industry Standard 00:02:06 Protocol Evolution: Remote Servers and Authentication 00:08:52 Enterprise Authentication and Financial Services 00:11:42 Transport Layer Challenges: HTTP Streaming and Scalability 00:15:37 Standards Development: Collaboration with Tech Giants 00:34:27 Long-Running Tasks: The Future of Async Agents 00:30:41 Discovery and Registries: Building the MCP Ecosystem 00:30:54 MCP Apps and UI: Beyond Text Interfaces 00:26:55 Internal Adoption: How Anthropic Uses MCP 00:23:15 Skills vs MCP: Complementary Not Competing 00:36:16 Community Events and Enterprise Learnings 01:03:31 Foundation Formation: Why Now and Why Together 01:07:38 Linux Foundation Partnership: Structure and Governance 01:11:13 Goose as Reference Implementation 01:17:28 Principles Over Roadmaps: Composability and Quality 01:21:02 Foundation Value Proposition: Why Contribute 01:27:49 Practical Investments: Events, Tools, and Community 01:34:58 Looking Ahead: Async Agents and Real Impact
In episode 307 of Absolute AppSec, hosts Ken and Seth conduct a retrospective on the application security landscape of 2025. They conclude that their previous predictions were largely accurate, particularly regarding the rise of prompt injection, AI-backed attacks, and the industry-wide shift toward per-token billing models. A major theme of the year was the solidification of supply chain security as a critical pillar of AppSec, driven by notable incidents such as Shai Hulud and React2Shell. The hosts also share insights from their four-day training course on utilizing LLMs for secure code review, noting that while AI development is becoming more prevalent, most practitioners are still in the nascent stages of building custom tooling. Much of the discussion focuses on the Model Context Protocol (MCP); while it offers significant value for agentic workflows, the hosts criticize its current lack of robust security controls, specifically highlighting issues with OAuth implementations and short timeouts in existing clients. Finally, they discuss how the industry is moving toward a more nuanced balance between deterministic tools like Semgrep and the probabilistic creativity of LLMs to increase efficiency in security consulting.
In this annual Security Squawk tradition, we do two things most people avoid: accountability and predictions. First, we break down the top cyber-attacks of 2025 and translate them into what actually matters for business owners, IT pros, and MSPs. Then we grade our predictions from last year using real outcomes. No excuses. No hand waving. No “well technically.” Why does this episode matter? Because 2025 made one thing painfully clear. Most cyber damage does not come from genius hackers. It comes from predictable failures. Unpatched systems. Over-trusted third parties. Tokens and sessions that live too long. Help desks that can be socially engineered. And organizations that still treat cybersecurity like an IT issue instead of a business survival issue. We start with the Top 10 Cyber-Attacks of 2025 and pull out the patterns hiding behind the headlines. This year's list includes ransomware and extortion campaigns, software supply chain failures, identity and OAuth token abuse, and attacks that caused real operational disruption, not just data exposure. These stories show how attackers scale impact by targeting widely deployed platforms and trusted business tools, then turning that access into downtime, data theft, and brand damage. One of the biggest lessons of 2025 is simple: identity is the new perimeter. Many of the most important incidents were not break-in stories. They were log-in stories. Stolen sessions and OAuth tokens keep working because they let attackers bypass MFA, move quickly, and blend in as legitimate users. If your security strategy is focused only on blocking failed logins, you are watching the wrong signal. 2025 also reinforced how fragile third-party trust has become. Integrations are everywhere. They make businesses faster and more efficient, but they also expand the blast radius. When a third-party tool or service account is compromised, it can become a shortcut into systems that were never directly attacked. 
In this episode, we talk about practical steps like minimizing access scopes, eliminating unnecessary integrations, shortening token lifetimes, and having a real plan to revoke access when something looks off. We also dig into why on-prem enterprise tools continue to get hammered. Many organizations still run internet-facing platforms that are patched slowly and monitored poorly. Attackers love that combination. In 2025, we saw repeated exploitation of high-value enterprise software where a single weakness led to widespread compromise across industries. If your patching strategy is “we will get to it,” attackers already have. Another major theme this year was operational disruption. Some of the costliest incidents were not just about stolen data. They shut down production, halted sales, broke customer service systems, and created ripple effects across supply chains. That is where executives feel cyber risk the hardest. Data loss hurts. Downtime is a business emergency. Then we grade last year's predictions. Did AI take our jobs? Not even close. What it did do was raise the baseline for both attackers and defenders. AI improved phishing quality, accelerated scams, and forced organizations to confront the risks of adopting new tools without clear controls. We also review our call on token and session-based attacks. That prediction aged well. Identity-layer abuse dominated 2025. The issue was not a lack of MFA. The issue was that attackers did not need to defeat MFA if they could steal what comes after it. We also revisit regulation. It did not arrive all at once. It crept forward. Agencies and lawmakers continued tightening expectations, especially in sectors that keep getting hit. Businesses that wait for mandates before improving controls will pay more later, either through recovery costs, insurance pressure, or lost trust. Finally, we look ahead to 2026 with new predictions that are probable, not obvious. 
We discuss what is likely to change around identity, help desk security, SaaS governance, and how leaders measure cyber readiness. The short version is this: 2026 will reward companies that treat access as a living system and punish those that treat it like a one-time setup. If you like the show, help us grow it. Subscribe, leave a review, and share this episode with someone who still thinks cybersecurity is just antivirus and a firewall. And if you want to support the podcast directly, buy me a coffee at buymeacoffee.com/securitysquawk.
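The "access as a living system" practices this episode recommends, short token lifetimes plus a real revocation path, reduce to two checks at validation time. A minimal sketch, with all names, the 15-minute TTL, and the in-memory revocation set invented for illustration:

```python
import time

REVOKED = set()              # populated when "something looks off"
TOKEN_TTL_SECONDS = 15 * 60  # a short lifetime limits a stolen token's value

def issue_token(token_id, now=None):
    """Issue a token that expires TOKEN_TTL_SECONDS after `now`."""
    now = time.time() if now is None else now
    return {"id": token_id, "expires_at": now + TOKEN_TTL_SECONDS}

def is_valid(token, now=None):
    """A token must be both unexpired and unrevoked to be accepted."""
    now = time.time() if now is None else now
    return token["id"] not in REVOKED and now < token["expires_at"]

tok = issue_token("sess-42", now=1000.0)
fresh_ok = is_valid(tok, now=1000.0 + 60)      # within TTL: accepted
expired_ok = is_valid(tok, now=1000.0 + 3600)  # past TTL: dies on its own
REVOKED.add("sess-42")
revoked_ok = is_valid(tok, now=1000.0 + 60)    # revoked: cut off immediately
```

Expiry handles the tokens you forget about; revocation handles the ones you actively distrust. A system that checks only one of the two leaves the other failure mode open.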
In the final show of 2025, Patrick Gray and Adam Boileau discuss the week's cybersecurity news, including: React2Shell attacks continue, surprising no one The unholy combination of OAuth consent phishing, social engineering and Azure CLI Venezuela's state oil firm gets ransomware'd, blames US… but what if it really is a US cyber op?! Russian junk-hacktivist gets indicted for cybering critical… err… a car wash and a fountain Microsoft finally turns RC4 off by default in Active Directory Kerberos Traefik's TLS verify=on … turns it off, whoopsie
In this episode of Cybersecurity Today, host Jim Love discusses a range of pressing cybersecurity threats. The show covers the escalating React2Shell vulnerability, which has led to widespread automated exploitation campaigns involving crypto miners and back doors. Additionally, Jim reports on the Black Force phishing kit, which bypasses multifactor authentication and is gaining traction among cybercriminals. Microsoft OAuth consent attacks are also highlighted, with users being tricked into granting access to their accounts. Finally, the episode touches on PornHub's data breach involving the Shiny Hunters cybercrime group and the importance of patching vulnerabilities and being cautious during the holiday season. 00:00 Introduction and Sponsor Message 00:22 React2Shell Vulnerability Deep Dive 03:46 Black Force Phishing Toolkit 05:44 Microsoft OAuth Consent Phishing 07:29 PornHub Data Breach by Shiny Hunters 10:21 Holiday Cybersecurity Tips and Final Thoughts
Interview Segment: Tony Kelly Illuminating Data Blind Spots As data sprawls across clouds and collaboration tools, shadow data and fragmented controls have become some of the biggest blind spots in enterprise security. In this segment, we'll unpack how Data Security Posture Management (DSPM) helps organizations regain visibility and control over their most sensitive assets. Our guest will break down how DSPM differs from adjacent technologies like DLP, CSPM, and DSP, and how it integrates into broader Zero Trust and cloud security strategies. We'll also explore how compliance and regulatory pressures are shaping the next evolution of the DSPM market—and what security leaders should be doing now to prepare. Segment Resources: https://static.fortra.com/corporate/pdfs/brochure/fta-corp-fortra-dspm-br.pdf This segment is sponsored by Fortra. Visit https://securityweekly.com/fortra to learn more about them! Topic Segment: We've got passkeys, now what? Over this year on this podcast, we've talked a lot about infostealers. Passkeys are a clear solution to implementing phishing and theft-resistant authentication, but what about all these infostealers stealing OAuth keys and refresh tokens? As long as session hijacking is as simple as moving a cookie from one machine to another, securing authentication seems like solving only half the problem. Locking the front door, but leaving a side door unlocked. After doing some research, it appears that there has been some work on this front, including a few standards that have been introduced: DBSC (Device Bound Session Credentials) for browsers DPoP (Demonstrating Proof of Possession) for OAuth applications We'll address a few key questions in this segment: 1. how do these new standards help stop token theft? 2. how broadly have they been adopted? 
Segment Resources: FIDO Alliance White Paper: DBSC/DPOP as Complementary Technologies to FIDO Authentication News Segment Visit https://www.securityweekly.com/esw for all the latest episodes! Show Notes: https://securityweekly.com/esw-437
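The common idea behind DBSC and DPoP is binding a credential to a key that never leaves the client, so a copied token or cookie fails without the key. This toy sketch illustrates only that proof-of-possession concept; it is not the DPoP wire format from RFC 9449 (which uses asymmetric keys and signed JWTs, whereas this stdlib-only version uses a shared HMAC secret).

```python
import hashlib
import hmac
import secrets

device_key = secrets.token_bytes(32)   # bound to the device, never exported
access_token = "opaque-bearer-token"   # the part an infostealer can grab

def sign_request(method: str, url: str, key: bytes) -> str:
    """Per-request proof: HMAC over the request details, keyed by the device key."""
    msg = f"{method} {url} {access_token}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def server_verify(method: str, url: str, proof: str, key: bytes) -> bool:
    expected = sign_request(method, url, key)
    return hmac.compare_digest(proof, expected)

# Legitimate client: holds both the token and the device key.
proof = sign_request("GET", "https://api.example/v1/me", device_key)
ok = server_verify("GET", "https://api.example/v1/me", proof, device_key)

# Infostealer: exfiltrated the token but not device_key, so any proof it
# forges with its own key fails verification.
stolen_attempt = sign_request("GET", "https://api.example/v1/me", secrets.token_bytes(32))
stolen_ok = server_verify("GET", "https://api.example/v1/me", stolen_attempt, device_key)
```

This is why these standards close the "side door" the segment describes: moving the cookie or token to another machine no longer suffices, because the per-request proof requires a key that was never in the stolen data.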
Interview Segment: Tony Kelly Illuminating Data Blind Spots As data sprawls across clouds and collaboration tools, shadow data and fragmented controls have become some of the biggest blind spots in enterprise security. In this segment, we'll unpack how Data Security Posture Management (DSPM) helps organizations regain visibility and control over their most sensitive assets. Our guest will break down how DSPM differs from adjacent technologies like DLP, CSPM, and DSP, and how it integrates into broader Zero Trust and cloud security strategies. We'll also explore how compliance and regulatory pressures are shaping the next evolution of the DSPM market—and what security leaders should be doing now to prepare. Segment Resources: https://static.fortra.com/corporate/pdfs/brochure/fta-corp-fortra-dspm-br.pdf This segment is sponsored by Fortra. Visit https://securityweekly.com/fortra to learn more about them! Topic Segment: We've got passkeys, now what? Over this year on this podcast, we've talked a lot about infostealers. Passkeys are a clear solution to implementing phishing and theft-resistant authentication, but what about all these infostealers stealing OAuth keys and refresh tokens? As long as session hijacking is as simple as moving a cookie from one machine to another, securing authentication seems like solving only half the problem. Locking the front door, but leaving a side door unlocked. After doing some research, it appears that there has been some work on this front, including a few standards that have been introduced: DBSC (Device Bound Session Credentials) for browsers DPoP (Demonstrating Proof of Possession) for OAuth applications We'll address a few key questions in this segment: 1. how do these new standards help stop token theft? 2. how broadly have they been adopted? 
Segment Resources: FIDO Alliance White Paper: DBSC/DPOP as Complementary Technologies to FIDO Authentication News Segment Visit https://www.securityweekly.com/esw for all the latest episodes! Show Notes: https://securityweekly.com/esw-437
In this sponsored interview Casey Ellis is joined by Push Security's Field CTO, Mark Orlando. They chat about the ways that browser-based attacks are evolving and how Push Security is finding and cataloging them.
Show notes:
ConsentFix: Analysing a browser-native ClickFix-style attack that hijacks OAuth consent grants
Introducing our guide to phishing detection evasion techniques
The MCP standard gave rise to dreams of interconnected agents and nightmares of what those interconnected agents would do with unfettered access to APIs, data, and local systems. Aaron Parecki explains how OAuth's new Client ID Metadata Documents spec provides more security for MCPs and the reasons why the behavior and design of MCPs required a new spec like this.
Segment resources:
https://aaronparecki.com/2025/11/25/1/mcp-authorization-spec-update
https://www.ietf.org/archive/id/draft-ietf-oauth-client-id-metadata-document-00.html
https://oauth.net/cross-app-access/
https://oauth.net/2/oauth-best-practice/
Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-360
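For readers unfamiliar with the draft, the core idea is that the client_id is itself an HTTPS URL from which the authorization server can fetch a JSON metadata document describing the client, removing the need for manual pre-registration. A hedged sketch of what such a document might look like (all URLs and names are invented; the field names follow the OAuth dynamic client registration vocabulary of RFC 7591):

```python
import json

# Invented client URL: in the draft, the client_id IS the HTTPS URL of the
# metadata document, and the document's own client_id must match it.
client_id = "https://mcp-client.example/oauth/metadata.json"

metadata = {
    "client_id": client_id,
    "client_name": "Example MCP Client",
    "redirect_uris": ["https://mcp-client.example/callback"],
    "grant_types": ["authorization_code"],
    "token_endpoint_auth_method": "none",  # a public client
}

# The authorization server would fetch and parse this document at
# authorization time instead of relying on manual pre-registration.
document = json.dumps(metadata, indent=2)
parsed = json.loads(document)
print(parsed["client_name"])
```

This fits the MCP world well because MCP clients connect to servers they have never met before, so per-server manual registration does not scale.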
In this episode of Cybersecurity Today, host Jim Love discusses several major cybersecurity events. Cloudflare faced significant outages affecting major platforms like Amazon and YouTube, along with continued issues for Microsoft 365 users. NordVPN warned of a surge in fake shopping websites as Black Friday approaches, with phishing attempts climbing 36% between August and October. An AI transcription tool caused a privacy breach at an Ontario hospital, leading to a privacy probe. Finally, Salesforce is investigating a data theft wave linked to Gainsight, illustrating the risks of OAuth token misuse. The episode is supported by Meter, a network infrastructure provider.
00:00 Introduction and Sponsor Message
00:44 Cloudflare Outages and Their Impact
02:34 Surge in Fake Shopping Websites
04:56 AI Privacy Breach at Ontario Hospital
08:41 Salesforce Data Theft Investigation
11:26 Conclusion and Sponsor Message
Curious about OAuth, MCP servers, and building cool ChatGPT apps? Hear from Max of Stytch as we dive deep, break down the tech, and build a Tamagotchi together! Drop your thoughts and share if you enjoyed it. https://codingcat.dev/podcast/how-oauth-mcp-and-the-openai-apps-sdk-power-the-next-generation-of-interactive-ai-experiences
00:00 Meet Stytch & Max
01:20 Consumer Identity
07:04 Deep Dive OAuth
09:37 MCP Explained
17:59 Security Risks
19:59 Next-Gen Apps
24:37 Building Chatagotchi
34:09 MCP Code Walkthrough
51:52 Future Predictions
53:02 Closing Thoughts
Microsoft WSUS vulnerability could allow for remote code execution
Fake LastPass death claims used to breach password vaults
New CoPhish attack steals OAuth tokens via Copilot Studio agents
Huge thanks to our sponsor, Conveyor
If security questionnaires make you feel like you're drowning in chaos, you're not alone. Endless spreadsheets, portals, and questions—always when you least expect them. Conveyor brings calm to the storm. With AI that auto-fills questionnaires and a trust center that shares all your docs in one place, you'll feel peace where there used to be panic. Find your security review zen at www.conveyor.com.
Find the stories behind the headlines at CISOseries.com.
In this episode of Hashtag Trending, host Jim Love discusses the Canadian CIO of the Year Awards and recognizes several winners. Highlights include OpenAI entering the browser market with ChatGPT-integrated Atlas, posing a serious threat to Google Chrome's dominance. Security concerns with Atlas storing OAuth tokens are mentioned, urging caution while experimenting with new AI browsers. Additionally, the Glassworm malware hiding in Visual Studio Code extensions is detailed, highlighting the importance of auditing extensions. Finally, an AI model collaboration between Google and Yale University shows promising results in cancer treatment by making tumors more visible to the immune system. Tune in for these updates and more!
00:00 Shoutout to CIO Achievements
01:56 Introducing Hashtag Trending
02:02 OpenAI's New Browser: Atlas
04:14 Security Alert: Glass Worm in VS Code
06:37 AI Breakthrough in Cancer Treatment
08:25 Closing Remarks and How to Support Us
JD Fiscus (nerding.io) shares how a late-night hack connecting MCP to n8n exploded to ~1M downloads, then demos practical MCP workflows: indexing YouTube channels for Q&A, and auto-building n8n flows from natural language. We dig into the Agentic Commerce Protocol, real security pitfalls (like destructive commands), and how to turn MCPs into products with OAuth and Stripe for authentication and metered billing. He closes with how he teaches this hands-on at the Vibe Coding Retreat.
Timestamps
1:00 Why build it: "MCP shouldn't be Claude-only"—bridging MCP into n8n early (Dec/Jan)
2:09 Shipping under the pseudonym nerding.io; surprise seeing creators use it
2:25 n8n later ships its own MCP server/client; they nod to nerding.io & Simon
3:59 "N8n is useful, but so much more useful with MCP"
5:12 What MCP means for software: every smart company is exposing an MCP; new login/usage patterns
6:27 Agentic Commerce Protocol (ACP): Stripe + OpenAI; agents checkout across the web
8:02 Marketing to agents not humans? SEO shifts as agents comparison-shop
9:10 Early "agent mode" attempts vs protocol-based purchases (less hacky)
10:58 Likely adopters: platforms (Shopify) & big retailers; echoes of early MCP evolution
14:11 Security realities: token passing evolved to OAuth; hallucination + destructive actions risk
16:04 Personal mishap: agent ran supabase reset on a dev DB—imagine prod! Guardrails matter
17:03 Designing MCP servers: don't just "wrap your API"; use resources/prompts for agentic UX
19:04 Demo 1—Influencer MCP: index a YouTube channel, embed transcripts, ask questions in Claude
20:54 Storage: embeddings into Postgres; per-channel tables
24:46 Keeping it fresh: daily cron to ingest new videos
25:18 Demo 2—Build n8n workflows from chat using N8N MCP (by Ramullet); live docs + API
27:00 "Create a webhook → send leads to Sheets" built conversationally, with allow/deny prompts
31:02 Zapier, Gumloop: agents that build automations via natural-language steps
34:00 Next frontier: custom connectors (Claude/Cursor/OpenAI), OAuth auth flows for MCPs
39:03 Turning MCPs into products: login with Twitter → Stripe subscription → metered billing
41:12 Paid tool call demo: "paid echo" → Stripe usage event logged per user
43:41 How to learn this fast: vibecodingretreat.com (small cohorts, hands-on builds)
Tools & Technologies Mentioned (quick guide)
MCP (Model Context Protocol) — Standard for connecting models to tools/data; supports tools, resources, prompts.
n8n — Open-source automation platform; JD wrote an MCP node that went viral; also has native MCP server/client now.
Claude / Cursor / OpenAI (custom connectors) — LLM IDEs/chats that can load MCPs; custom connectors enable OAuth + productized access.
Agentic Commerce Protocol (ACP) — Early protocol (Stripe + OpenAI) for agent-initiated purchases with confirmations.
Web MCP (W3C-oriented idea) — Emerging patterns for agent↔website interactions beyond human UI flows.
OAuth — Secure, user-consented authentication for MCPs (vs passing raw tokens).
Stripe (subscriptions + metered billing) — Attach billing/usage limits to MCP calls; track per-user consumption.
YouTube API + Transcripts — Source data for the "Influencer MCP" indexing pipeline.
Embeddings + Postgres — Store vectorized transcript chunks in Postgres for retrieval (JD self-hosts).
Cron — Schedules daily ingestion of new content.
Google Sheets — Target destination in demo for simple lead funnels.
Zapier / Gumloop — Natural-language automation builders; early NLA/agent patterns.
Git / CLI commands — Cautionary tale: agents running destructive commands (e.g., resets).
Do Browser / Comet Browser — Agentic browsing tools referenced for web actions.
Fellow.ai — AI meeting assistant with security-first design; generates precise summaries/action items.
Subscribe at thisnewway.com to get the step-by-step playbooks, tools, and workflows.
This show has been flagged as Explicit by the host.
Part I - Lee talks about:
Cyber - Capture the flag, providing OAuth, secure design and static typing
Databases - SQL Server, MySQL and SQLite
Test frameworks
Generative AI for coding
Hardware (as in IoT, not as in computers)
Part II - A ramble about neurodivergence:
In academia and work
Accommodation vs encouraging work styles that fit the task
Remote working
Unusual career paths
Technical communication
Some personal code projects:
Url to Markdown Konsole extension
Epub in a terminal
Markdown table generator
MySQL output formatter
Resources of note:
Report on Changing the Workplace (2022) - about disability and remote working
Model Context Protocol - A way to give AI chat bots access to software systems to increase their relevant knowledge and abilities
Secure by Design book
No chatbots were harmed in the making of this episode.
Provide feedback on this episode.
China-Linked Group Hits Governments With Stealth Malware
Chinese hackers exploit VMware zero-day since October 2024
Apple's iOS fixes a bevy of glitches
Huge thanks to our sponsor, Nudge Security
The SaaS supply chain is a hot mesh. As your workforce introduces new SaaS apps and integrations, hidden pathways are created that attackers can exploit to gain access to core business systems. That's exactly what happened in the Drift breach, and it will happen again. But, all is not lost. Nudge Security gives you the visibility and control you need to stop these attacks. Within minutes of starting a free trial, you'll discover every SaaS app and integration in your environment, map your SaaS supply chain, and identify risky OAuth grants that could be exploited. The best part? Nudge Security alerts you of breaches impacting your 3rd and 4th party SaaS providers. That's right, even 4th party! So, you can take action quickly to limit the ripple effects. Learn how Nudge can help you secure your entire SaaS ecosystem at nudgesecurity.com/supplychain
Topics covered in this episode:
* PostgreSQL 18 Released
* Testing is better than DSA (Data Structures and Algorithms)
* Pyrefly in Cursor/PyCharm/VSCode/etc
* Playwright & pytest techniques that bring me joy
* Extras
* Joke
Watch on YouTube
About the show
Sponsored by us! Support our work through: Our courses at Talk Python Training, The Complete pytest Course, Patreon Supporters
Connect with the hosts
Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)
Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too. Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? Add your name and email to our friends of the show list, we'll never share it.
Michael #1: PostgreSQL 18 Released
PostgreSQL 18 is out (Sep 25, 2025) with a focus on faster text handling, async I/O, and easier upgrades.
New async I/O subsystem speeds sequential scans, bitmap heap scans, and vacuum by issuing concurrent reads instead of blocking on each request.
Major-version upgrades are smoother: pg_upgrade retains planner stats, adds parallel checks via --jobs, and supports faster cutovers with --swap.
Smarter query performance lands with skip scans on multicolumn B-tree indexes, better OR optimization, incremental-sort merge joins, and parallel GIN index builds.
Dev quality-of-life: virtual generated columns enabled by default, a uuidv7() generator for time-ordered IDs, and RETURNING can expose both OLD and NEW.
Security gets an upgrade with native OAuth 2.0 authentication; MD5 password auth is deprecated and TLS controls expand.
Text operations get a boost via the new PG_UNICODE_FAST collation, faster upper/lower, a casefold() helper, and clearer collation behavior for LIKE/FTS.
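Of the items above, uuidv7() is easy to demystify: a UUIDv7 puts a 48-bit millisecond timestamp in the most significant bits, so keys generated in sequence land near each other in a B-tree index instead of scattering like random UUIDv4s. A rough Python sketch of the RFC 9562 layout (illustrative only, not PostgreSQL's implementation):

```python
import os
import time
import uuid

def uuid7_like() -> uuid.UUID:
    """Time-ordered UUID in the spirit of RFC 9562's UUIDv7: a 48-bit Unix
    timestamp in milliseconds up front, random bits behind it, so values
    generated in sequence cluster together in a B-tree index."""
    ts_ms = time.time_ns() // 1_000_000
    rand = int.from_bytes(os.urandom(10), "big")   # 80 random bits
    value = (ts_ms << 80) | rand
    value = (value & ~(0xF << 76)) | (0x7 << 76)   # version nibble = 7
    value = (value & ~(0x3 << 62)) | (0x2 << 62)   # RFC 4122 variant bits
    return uuid.UUID(int=value)

a, b = uuid7_like(), uuid7_like()
print(a.version)  # 7
```

In PostgreSQL 18 itself you would simply write something like `id uuid DEFAULT uuidv7()` on the column and let the server do this.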
Brian #2: Testing is better than DSA (Data Structures and Algorithms)
Ned Batchelder
If you need to grind through DSA problems to get your first job, then of course, do that, but if you want to prepare yourself for a career, and also stand out in job interviews, learn how to write tests.
Testing is a skill you'll use constantly, will make you stand out in job interviews, and isn't taught well in school (usually).
Testing code well is not obvious. It's a puzzle and a problem to solve. It gives you confidence and helps you write better code. Applies everywhere, at all levels.
Notes from Brian
Most devs suck at testing, so being good at it helps you stand out very quickly.
Thinking about a system and how to test it often very quickly shines a spotlight on problem areas, parts with not enough specification, and fuzzy requirements. This is a good thing, and bringing up these topics helps you to become a super valuable team member.
High level tests need to be understood by key engineers on a project, even if tons of the code is AI generated. Even if many of the tests are, the people understanding the requirements and the high level tests are quite valuable.
Michael #3: Pyrefly in Cursor/PyCharm/VSCode/etc
Install the VSCode/Cursor extension or PyCharm plugin, see https://pyrefly.org/en/docs/IDE/
Brian spoke about Pyrefly in #433: Dev in the Arena
I've subsequently had the team on Talk Python: #523: Pyrefly: Fast, IDE-friendly typing for Python (podcast version coming in a few weeks, see video for now.)
My experience has been that Pyrefly changes the feel of the editor, give it a try. But disable the regular language server extension.
Brian #4: Playwright & pytest techniques that bring me joy
Tim Shilling
“I've been working with playwright more often to do end to end tests. As a project grows to do more with HTMX and Alpine in the markup, there's less unit and integration test coverage and a greater need for end to end tests.”
Tim covers some cool E2E techniques:
Open new pages / tabs to be tested
Using a pytest marker to identify playwright tests
Using a pytest marker in place of fixtures
Using page.pause() and Playwright's debugging tool
Using assert_axe_violations to prevent accessibility regressions
Using page.expect_response() to confirm a background request occurred
From Brian
Again, with more and more lower level code being generated, and many unit tests being generated (shakes head in sadness), there's an increased need for high level tests. Don't forget API tests, obviously, but if there's a web interface, it's gotta be tested. Especially if the primary user experience is the web interface, building your Playwright testing chops helps you stand out and lets you test a whole lot of your system with not very many tests.
Extras
Brian:
Big O - By Sam Who
Yes, take Ned's advice and don't focus so much on DSA, focus also on learning to test. However, one topic you should be comfortable with in algorithm-land is Big O, at least enough to have a gut feel for it. And this article is really good enough for most people. Great graphics, demos, visuals. As usual, great content from Sam Who, and a must read for all serious devs.
Python 3.14.0rc3 has been available since Sept 18. Python 3.14.0 final scheduled for Oct 7.
Django 6.0 alpha 1 released. Django 6.0 final scheduled for Dec 3.
Python Test Static hosting update
Some interesting discussions around setting up my own server, but this seems like it might be yak shaving procrastination research when I really should be writing or coding. So I'm holding off until I get some writing projects and a couple SaaS projects further along.
Joke: Always be backing up
Most lenders are still treating bank data like a second-look tool. That's a missed opportunity. Open banking has changed, and so has the way lenders can use cashflow data to make smarter, faster credit decisions. But with so many different aggregators, confusing connections, and news from the CFPB and JPMorgan, it can be tough to figure out what really matters. That's where this conversation comes in. GDS Link and Quiltt teamed up for an open, straightforward discussion about what lenders should be doing with bank data right now. We'll talk about:
What the shift from screen scraping to OAuth really means for your team
High-impact use cases that go beyond second-look and drive real ROI
How to get started when you've got limited resources and no room for trial and error
What small lenders can learn from big players, even without a large budget
A straightforward look at the CFPB and Chase news, without the hype
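The screen-scraping-to-OAuth shift on the agenda boils down to this: instead of collecting the customer's banking password, the lender redirects them to the bank's own consent page and receives a scoped token afterwards. A minimal sketch of step one of the authorization-code flow, with invented endpoints and client names:

```python
from urllib.parse import urlencode

# Invented endpoints and client identifiers, for illustration only.
AUTHORIZE_URL = "https://bank.example/oauth/authorize"
CLIENT_ID = "lender-app"
REDIRECT_URI = "https://lender.example/callback"

def build_consent_url(scopes: list[str], state: str) -> str:
    """Step one of the authorization-code flow: send the user to the bank's
    consent page rather than asking for their banking password (the old
    screen-scraping model)."""
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": " ".join(scopes),
        "state": state,  # CSRF protection, echoed back on the redirect
    }
    return AUTHORIZE_URL + "?" + urlencode(params)

url = build_consent_url(["accounts:read", "transactions:read"], "xyz123")
print(url)
```

The bank redirects back with a short-lived code, which the lender exchanges server-side for an access token limited to the scopes the customer approved; the password never leaves the bank's page.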
Episode Summary
Can multi-factor authentication really “solve” security, or are attackers already two steps ahead? In this episode of The Secure Developer, we sit down with Paul Querna, CTO and co-founder at ConductorOne, to unpack the evolving landscape between authentication and authorisation. In our conversation, Paul delves into the difference between authorisation and authentication, why authorisation issues have only been solved for organisations that invest properly, and why that progress has pushed attackers toward session theft and abusing standing privilege.
Show Notes
In this episode of The Secure Developer, host Danny Allan sits down with Paul Querna, CTO and co-founder of ConductorOne, to discuss the evolving landscape of identity and access management (IAM). The conversation begins by challenging the traditional assumption that multi-factor authentication (MFA) is a complete solution, with Paul explaining that while authentication is "solved-ish," attackers are now moving to steal sessions and exploit authorization weaknesses. He shares his journey into the identity space, which began with a realization that old security models based on firewalls and network-based trust were fundamentally broken.
The discussion delves into the critical concept of least privilege, a core pillar of the zero-trust movement. Paul highlights that standing privilege—where employees accumulate access rights over time—is a significant risk that attackers are increasingly targeting, as evidenced by reports like the Verizon Data Breach Investigations Report. This is even more critical with the rise of AI, where agents could potentially have overly broad access to sensitive data. They explore the idea of just-in-time authorization and dynamic access control, where privileges are granted for a specific use case and then revoked, a more mature approach to security.
Paul and Danny then tackle the provocative topic of using AI to control authorization. While they agree that AI-driven decisions are necessary to maintain user experience and business speed, they acknowledge that culturally, we are not yet ready to fully trust AI with such critical governance decisions. They discuss how AI could act as an orchestrator, making recommendations for low-risk entitlements while high-risk ones remain policy-controlled. Paul also touches on the complexity of this new world, with non-human identities, personal productivity agents, and the need for new standards like extensions to OAuth. The episode concludes with Paul sharing his biggest worries and hopes for the future. He is concerned about the speed of AI adoption outpacing security preparedness, but is excited by the potential for AI to automate away human toil, empowering IAM and security teams to focus on strategic, high-impact work that truly secures the organization.
Links
ConductorOne
Verizon Data Breach Investigations Report
AWS CloudWatch
Snyk - The Developer Security Company
Follow Us
Our Website
Our LinkedIn
“You can have the best program in the world, but if nobody knows about it, it won't make a difference,” says Todd Jordan, who leads United Way of Greater Kansas City's 2-1-1. “That's why we run a 24/7/365 contact center—to guide people to real help with a kind, empathetic voice.” In this special Technology Reseller News podcast, Publisher Doug Green brings together Todd Jordan (United Way 2-1-1, Kansas City), Jill Blankenship (CEO, Frontline Group), and Thomas McCarthy-Howe (CTO, VCONIC) to explore Empathy at Scale: how vCon data and AI—implemented with strict privacy and security—are transforming community helplines and complex, multi-agency referrals.
The Scale - and the Strain
United Way's 2-1-1 covers 23 counties and roughly 2.5 million people across the Greater Kansas City region. Demand has surged since the pandemic: 155,000+ calls last year and nearly 500,000 total contacts (calls, web, email, even USPS), with average call times around 7.5 minutes—well over a million minutes of conversations. The mix spans urban, suburban, and rural needs, multiple languages, and highly sensitive situations (from rent and utilities to domestic violence and mental health crises). Protecting privacy is paramount.
From Corridor Conversation to Pilot
Blankenship describes how a hallway conversation about vCon—a new IETF-developed file format for conversations—sparked a collaboration. Frontline Group packaged the idea inside Frontline Quest, their agent-enablement and professional services program, while VCONIC, a spin-out dedicated to vCon technology, provided the protocol and secure data handling. The trio launched a live pilot with United Way 2-1-1 to transcribe calls, structure insights, and surface actionable “signals” for quality, safety, and service improvement—without compromising caller confidentiality. “vCon is designed to feed AI and protect people,” says Thomas McCarthy-Howe. “Bringing IETF-grade security and openness to conversational data lets us see the dark operational signals—safely—and use them to help people faster.”
What Changed for 2-1-1
Quality & Care Signals: Real-time indicators help supervisors coach empathy, spotting where agents can lean in—and where secondary trauma support is needed for frontline staff.
Searchable Conversations (Not Just Dispositions): Instead of relying on boxes and notes, leaders can now query full conversations to answer urgent policy questions. Jordan asked the system to compare eviction-prevention resources across Kansas vs. Missouri; the synthesized, data-grounded view matched the team's lived experience and revealed precise gaps.
Multilingual & Multichannel Reality: With 70–80 languages in some school districts, vCon-backed transcription and analysis improve consistency across interpreters and channels—phone, web, email, and more.
Why It Matters
For a nonprofit with finite resources, the team needed technology that is secure, lean, and humane—helping callers in crisis without forcing agents to split attention between empathy and note-taking. The pilot is doing exactly that: safeguarding sensitive data while unlocking insights that mobilize funding, target interventions, and strengthen outcomes. “We're at the tip of something transformative,” Jordan says. “Real-time data from our community voices helps us advocate better—and care better.”
About the participants:
United Way of Greater Kansas City 2-1-1 serves 23 counties and ~2.5M people, fielding 155k+ calls annually. 2-1-1 is a North American network covering ~99% of the U.S. and much of Canada.
Frontline Group is a contact center BPO and professional services firm; its Frontline Quest program integrates vCon to enhance agent experience and operational insight.
VCONIC specializes in vCon technology—a conversation file format being developed in the IETF, the internet standards body behind protocols like TLS and OAuth.
Learn more: United Way 2-1-1 (Kansas City),
Alex Salazar, co-founder and CEO of Arcade.dev, joins the show to unpack the realities of building enterprise agents. Conceptually simple but technically hard, agents are reshaping how companies think about workflow automation, security, and human-in-the-loop design. Alex shares why moving from proof-of-concept to production is so challenging, what playbooks actually work, and how enterprises can avoid wasting time and money as this technology accelerates faster than any previous wave.
Key Takeaways
Enterprise agents aren't chatbots—they're workflow systems that can take secure, authorized actions.
The real challenge isn't just building demos but getting to production-grade consistency and accuracy.
Mid-market companies face the steepest climb: limited budgets, limited ML expertise, but the same competitive pressure.
Success starts with finding low-risk, high-impact opportunities and narrowing scope as much as possible.
Authorization is the biggest blocker today; delegated OAuth models are key to unlocking real agent functionality.
Timestamped Highlights
02:02 — Why agents are "just advanced workflow software" but harder to trust than traditional apps
04:53 — The gap between glorified chatbots and real enterprise agents that take action
09:58 — From cloud mistrust to wire transfers: how comfort with automation evolves
14:00 — Chaos at every tier: startups, enterprises, and why the mid-market struggles most
26:21 — The playbook: how to pick use cases, narrow scope, and carry pilots all the way to prod
34:38 — Breaking down agent authorization and why most RAG systems fail in practice
42:09 — Adoption at double speed: what makes this AI wave different from internet and cloud
A Thought That Stuck
"An agent isn't an agent until it can take action. If all it does is talk, it's just a chatbot." — Alex Salazar
Call to Action
If this episode gave you a clearer lens on enterprise agents, share it with a colleague who needs to hear it.
And don't miss future conversations—follow The Tech Trek on Apple Podcasts, Spotify, or wherever you listen.
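On the delegated-OAuth point Alex raises, one commonly cited building block is OAuth 2.0 token exchange (RFC 8693), where an agent trades a user's token for a narrower token of its own instead of wielding the user's full privileges. A sketch of the form body such an exchange might use (the token value and scope are invented placeholders):

```python
from urllib.parse import parse_qs, urlencode

def token_exchange_body(subject_token: str, scope: str) -> str:
    """Form body for an OAuth 2.0 token exchange request (RFC 8693): the
    agent presents the user's token and asks the authorization server for
    a narrower, delegated token of its own."""
    params = {
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": subject_token,
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "scope": scope,  # request less than the user holds
    }
    return urlencode(params)

# Token value and scope are invented placeholders.
body = token_exchange_body("user-access-token", "crm:read")
print(parse_qs(body)["scope"][0])
```

The resulting token can name both the user (as subject) and the agent (as actor), which gives audit logs the "who acted on whose behalf" trail that enterprise deployments need.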
- One of the biggest SaaS security incidents recently of course is the Salesloft Drift/Salesforce incident, which impacted hundreds of organizations and involved compromised OAuth tokens. Can you tell us a bit about the incident and the fallout?
- In an AppOmni blog on the incident, you all discuss attackers taking advantage of persistent OAuth access, over-permissive access, limited monitoring, and unsecured secrets. Why do these problems continue to plague organizations despite incidents like this?
- This is part of a broader trend of increased SaaS supply chain attacks. What makes these attacks so enticing for malicious actors and challenging for organizations to prevent entirely?
- You recently published your State of SaaS Security Report, which projects SaaS to grow 20% YoY between 2025 and 2032. This is despite 75% of organizations reporting a SaaS security incident in the past year. Why do you think we're seeing continued growth in adoption but still lagging in SaaS security to accompany the adoption?
- The report discusses the rise of NHIs and GenAI and how this will exacerbate problems around SaaS access and incidents. Can you unpack that for us?
- I was shocked to see the report find that just 13% of organizations use SSPM tooling despite SaaS's widespread adoption. When you talk to enterprises, for example, nearly everyone is doing some CSPM activity for IaaS. Why are so many neglecting hygiene and posture for their SaaS footprint?
Microsoft MVP Emanuel Palm joins The PowerShell Podcast to share his journey from managing printers in Sweden to becoming a Microsoft MVP who automates the cloud with PowerShell and Azure. He talks about building the AZAuth module for OAuth authentication, using GitHub Actions for CI/CD, and the importance of blogging and community involvement. Plus, Emanuel reveals his unique side hobby... roasting coffee!

Key Takeaways
- From printers to the cloud: Emanuel's career shows how PowerShell can open doors, from automating IT tasks to driving cloud automation and DevOps practices.
- Community and sharing matter: Blogging, presenting, and contributing help you grow your own understanding while creating opportunities for others.
- Automation and authentication: With tools like GitHub Actions and his AZAuth module, Emanuel demonstrates how to simplify workflows and securely interact with APIs.

Guest Bio
Emanuel Palm is a Microsoft MVP based in Sweden, where he is a consultant focused on Microsoft technologies and is active in the PowerShell community. Emanuel is the creator of the AZAuth module, a lightweight solution for handling OAuth authentication in PowerShell, and a frequent speaker at events like PowerShell Conference Europe. Beyond tech, Emanuel is a coffee enthusiast who even roasts his own beans as a side hobby.

Resource Links
- Emanuel's Blog: https://pipe.how
- GitHub – Emanuel Palm: https://github.com/palmemanuel
- X / BlueSky: @palmemanuel
- AZAuth Module on GitHub: https://github.com/PalmEmanuel/AzAuth
- Emanuel's PS Wednesday: https://www.youtube.com/watch?v=trP2LLDynA0
- Arkanum Coffee (Emanuel's hobby project): https://arkanum.coffee
- PDQ Discord: https://discord.gg/pdq
- Connect with Andrew: https://andrewpla.tech/links
- The PowerShell Podcast on YouTube: https://youtu.be/-uHHGVH1Kcc
- The PowerShell Podcast hub: https://pdq.com/the-powershell-podcast
Register for FREE Infosec Webcasts, Anti-casts & Summits – https://poweredbybhis.com

00:00 - PreShow Banter™ — It's 8ft skeleton season.
02:18 - BHIS - Talkin' Bout [infosec] News 2025-09-02
03:07 - Story # 1: Salesloft breached to steal OAuth tokens for Salesforce data-theft attacks
07:35 - Story # 2: DSLRoot, Proxies, and the Threat of 'Legal Botnets'
13:46 - Story # 3: Attackers Abuse Velociraptor Forensic Tool to Deploy Visual Studio Code for C2 Tunneling
17:44 - Story # 4: Ransomware crooks knock Swedish municipalities offline for measly sum of $168K
19:39 - Story # 5: As crippling cyberattack against Nevada continues, Lombardo says 'we're working through it.'
20:56 - Story # 6: Citrix forgot to tell you CVE-2025-6543 has been used as a zero day since May 2025
22:43 - Story # 7: NetScaler ADC and NetScaler Gateway Security Bulletin for CVE-2025-7775, CVE-2025-7776 and CVE-2025-8424
25:20 - Story # 8: First known AI-powered ransomware uncovered by ESET Research
30:00 - Story # 9: In the rush to adopt hot new tech, security is often forgotten. AI is no exception
32:06 - Story # 10: TransUnion suffers data breach impacting over 4.4 million people
34:17 - Story # 11: ChickenSec FollowUp: Artificial Intelligence: The other AI
35:20 - Story # 12: They weren't lovin' it - hacker cracks McDonald's security in quest for free nuggets, and it was apparently not too tricky
39:29 - Identify the birds you see or hear with Merlin Bird ID
40:04 - Story # 13: Detecting and countering misuse of AI: August 2025
51:31 - Story # 14: I'm a Stanford student. A Chinese agent tried to recruit me as a spy
On this week's show Patrick Gray and Adam Boileau discuss the week's cybersecurity news, including:
- The Salesloft breach and why OAuth soup is a problem
- The Salt Typhoon telco hackers turn out to be Chinese private sector, but state-directed
- Google says it will stand up a "disruption unit"
- Microsoft writes up a ransomware gang that's all-in on the cloud future
- Aussie firm hot-mics its work-from-home employees' laptops
- Youtube scam baiters help the feds take down a fraud ring

This episode is sponsored by Dropzone.AI. Founder and CEO Edward Wu joins the show to talk about how AI-driven SOC tools can help smaller organisations claw their way above the "security poverty line". A dedicated monitoring team, threat hunting and alert triage, in a company that only has a couple of part-time infosec people? Yes please!

This episode is also available on Youtube.

Show notes
- The Ongoing Fallout from a Breach at AI Chatbot Maker Salesloft – Krebs on Security
- Salesloft: The Leading AI Revenue Orchestration Platform
- Palo Alto Networks, Zscaler customers impacted by supply chain attacks | Cybersecurity Dive
- The impact of the Salesloft Drift breach on Cloudflare and our customers
- China used three private companies to hack global telecoms, U.S. says
- CSA_COUNTERING_CHINA_STATE_ACTORS_COMPROMISE_OF_NETWORKS.PDF
- Google previews cyber 'disruption unit' as U.S. government, industry weigh going heavier on offense | CyberScoop
- Ransomware gang takedowns causing explosion of new, smaller groups | The Record from Recorded Future News
- Hundreds of Swedish municipalities impacted by suspected ransomware attack on IT supplier | The Record from Recorded Future News
- Storm-0501's evolving techniques lead to cloud-based ransomware | Microsoft Security Blog
- The Era of AI-Generated Ransomware Has Arrived | WIRED
- Between Two Nerds: How threat actors are using AI to run wild - YouTube
- Affiliates Flock to 'Soulless' Scam Gambling Machine – Krebs on Security
- UK sought broad access to Apple customers' data, court filing suggests
- ICE reactivates contract with spyware maker Paragon | TechCrunch
- WhatsApp fixes 'zero-click' bug used to hack Apple users with spyware | TechCrunch
- Safetrac turned staff laptops into covert recording devices to monitor WFH
- Risky Bulletin: YouTubers unmask and help dismantle giant Chinese scam ring - Risky Business Media
John Capobianco is back! Just months after our first Model Context Protocol (MCP) discussion, John returns to showcase how this "USB-C of software" has transformed from experimental technology into an enterprise-ready solution. We explore the game-changing OAuth 2.1 security updates, witness live demonstrations of packet analysis through natural language with Gemini CLI, and discover how... Read more »
From SAML to OAuth to FIDO2 to passwordless promises, we unpack what's working—and what's broken—in the world of identity and authentication. Today on the Packet Protector podcast, we're joined by the always thoughtful and occasionally provocative Wolf Goerlich, former Duo advisor, and now a practicing CISO in the public sector. We also talk about authorization... Read more »