After a year tangled in political drama, AI hype, and regulation battles, the TWiT crew explains how many of tech's "biggest stories" simply fizzled into nothing or left us with new headaches by year's end.

• Year-end tech trends: AI, politics, and security dominated 2025
• Major stories faded fast: TikTok saga, political tech drama, DOGE scandal
• TikTok's ownership battle—Oracle, Trump donors, and US-China tensions
• China tech fears: banned drones, IoT vulnerabilities, secret radios in buses
• Rising political pressure for internet privacy and media literacy reform
• Surveillance and kill switch concerns in US grid and port infrastructure
• Convenience vs. privacy: Americans trade data for discounts and ease
• Age verification, surveillance, and flawed facial recognition across countries
• Discord's ID leak highlights risks of rushed compliance with privacy laws
• Social media's impact on kids pushes age-gating and verification laws
• ISPs monetize customer data, VPNs pitched for personal privacy
• Global government crackdowns: UK bans VPN advertising, mandates age checks
• The illusion of absolute privacy: flawed age gates and persistent tracking
• AI takes over: explosive growth, but profits elusive for big players
• Arms race in LLMs: DeepSeek's breakthrough, OpenAI/Meta talent bidding war
• Ad-driven models still rule; Amazon's playbook repeated in AI
• Humanoid robots and AGI hype: skepticism vs. Silicon Valley optimism
• AI-generated art, media, and the challenge of deepfake detection
• Social platforms falter: Instagram and X swamped by fake or low-value content
• Google's legal, regulatory, and technical woes: ad tech trial, Manifest V3 backlash
• RAM price spikes and hardware shortages blamed on AI data center demand
• YouTube overtakes mobile for podcast and video viewing, Oscars move online
• The internet's growth: Cloudflare stats, X vs. Reddit, spam domain trends
• Weird tech stories: hacked crosswalks, Nintendo Switch 2 Staplegate, LEGO theft ring
• Sad farewell: Lamar Wilson's passing and mental health awareness in tech
• Reflections on the year's turbulence and hopes for a better 2026

Host: Leo Laporte
Guests: Mikah Sargent, Paris Martineau, and Steve Gibson

Download or subscribe to This Week in Tech at https://twit.tv/shows/this-week-in-tech

Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit

Sponsors:
expressvpn.com/twit
zscaler.com/security
Melissa.com/twit
ventionteams.com/twit
auraframes.com/ink
Xmas Special: Software Industry Transformation - Why Software Development Must Mature

Welcome to the 2025 Xmas special - a five-episode deep dive into how software as an industry needs to transform. In this opening episode, we explore the fundamental disconnect between how we manage software and what software actually is. From small businesses to global infrastructure, software has become the backbone of modern society, yet we continue to manage it with tools designed for building ships in the 1800s. This episode sets the stage for understanding why software development must evolve into a mature discipline.

Software Runs Everything Now

"Without any single piece, I couldn't operate - and I'm tiny. Scale this reality up: software isn't just in tech companies anymore."

Even the smallest businesses today run entirely on software infrastructure. A small consulting and media business depends on WordPress for websites, Kajabi for courses, Stripe for payments, Quaderno for accounting, plus email, calendar, CRM systems, and AI assistants for content creation. The challenge? We're managing this critical infrastructure with tools designed for building physical structures with fixed requirements - an approach that fundamentally misunderstands what software is and how it evolves. This disconnect has to change.

The Oscillation Between Technology and Process

"AI amplifies our ability to create software, but doesn't solve the fundamental process problems of maintaining, evolving, and enhancing that software over its lifetime."

Software improvement follows a predictable pattern: technology leaps forward, then processes must adapt to manage the new complexity. In the 1960s-70s, we moved from machine code to COBOL and Fortran, which was revolutionary but led to the "software crisis" when we couldn't manage the resulting complexity. This eventually drove us toward structured programming and object-oriented programming as process responses, which, in turn, resulted in technology changes! Today, AI tools like GitHub Copilot, ChatGPT, and Claude make writing code absurdly easy - but writing code was never the hard part. Robert Glass documents in "Facts and Fallacies of Software Engineering" that maintenance typically consumes between 40 and 80 percent of software costs, making maintenance probably the most important life cycle phase. We're overdue for a process evolution that addresses the real challenge: maintaining, evolving, and enhancing software over its lifetime.

Software Creates an Expanding Possibility Space

"If they'd treated it like a construction project ('ship v1.0 and we're done'), it would never have reached that value."

Traditional project management assumes fixed scope, known solutions, and a definable "done" state. The Sydney Opera House exemplifies this: designed in 1957, completed in 1973, ten times over budget, with the architect resigning - but once built, it stands with "minimal" (compared to initial cost) maintenance. Software operates fundamentally differently. Slack started in 2013 as an internal tool at a failed gaming company - the maker of the game Glitch. When the game failed, they noticed their communication tool was special and pivoted entirely. After launching in 2014, Slack continuously evolved based on user feedback: adding calls in 2016, threads in 2017, Workflow Builder in 2019, and Canvas in 2023. Each addition changed what was possible in organizational communication. In 2021, Salesforce acquired Slack for $27.7 billion precisely because it kept evolving with user needs.
The key difference is that software creates possibility space that didn't exist before, and that space keeps expanding through continuous evolution.

Software Is Societal Infrastructure

"This wasn't a cyber attack - it was a software update gone wrong."

Software has become essential societal infrastructure, not optional and not just for tech companies. In July 2024, a faulty software update from cybersecurity firm CrowdStrike crashed 8.5 million Windows computers globally. Airlines grounded flights, hospitals canceled surgeries, banks couldn't process transactions, and 911 services went down. The global cost exceeded $10 billion. This wasn't an attack - it was a routine update that failed catastrophically. AWS outages in 2021 and 2023 took down major portions of the internet, stopping Netflix, Disney+, Robinhood, and Ring doorbells from working. Cloudflare outages similarly cascaded across daily-use services. When software fails, society fails. We cannot keep managing something this critical with tools designed for building physical things with fixed requirements. Project management was brilliant for its era, but that era isn't this one.

The Path Ahead: Four Critical Challenges

"The software industry doesn't just need better tools - it needs to become a mature discipline."

This five-episode series will address how we mature as an industry by facing four critical challenges:

Episode 2: The Project Management Trap - Why we think in terms of projects, dates, scope, and "done" when software is never done, and how this mindset prevents us from treating software as a living capability
Episode 3: What's Already Working - The better approaches we've already discovered, including iterative delivery, feedback loops, and continuous improvement, with real examples of companies doing this well
Episode 4: The Organizational Immune System - Why better approaches aren't universal, how organizations unconsciously resist what would help them, and the hidden forces preventing adoption
Episode 5: Software-Native Organizations - What it means to truly be a software-native organization, transforming how the business thinks, not just using agile on teams

Software is too important to our society to keep getting it wrong. We have much of the knowledge we need - the challenge is adoption and evolution. Over the next four episodes, we'll build this case together, starting with understanding why we keep falling into the same trap.

References For Further Reading

Glass, Robert L. "Facts and Fallacies of Software Engineering" - Fact 41, page 115
CrowdStrike incident: https://en.wikipedia.org/wiki/2024_CrowdStrike_incident
AWS outages: 2021 (Dec 7), 2023 (June 13), and November 2025 incidents
Cloudflare outages: 2022 (June 21), and November 2025 major incident
Slack history and Salesforce acquisition: https://en.wikipedia.org/wiki/Slack_(software)
Sydney Opera House: https://en.wikipedia.org/wiki/Sydney_Opera_House

About Vasco Duarte

Vasco Duarte is a thought leader in the Agile space, co-founder of Agile Finland, and host of the Scrum Master Toolbox Podcast, which has over 10 million downloads. Author of NoEstimates: How To Measure Project Progress Without Estimating, Vasco is a sought-after speaker and consultant helping organizations embrace Agile practices to achieve business success. You can link with Vasco Duarte on LinkedIn.
In this episode of The Defiant podcast, Camila Russo sits down in Buenos Aires (Devconnect) with Marissa Foster (Product, Ethereum Foundation) and Yoav Weiss (security researcher, Ethereum Foundation) to unpack The Trustless Manifesto and the Ethereum Interop Layer (EIL), why "trust assumptions" are quietly creeping into Ethereum's stack, and what it will take to preserve Ethereum's core values while making UX actually usable.

We dig into the hidden places users are forced to trust intermediaries, from cross-chain interoperability and solvers to something most people never question: RPCs. Then we get practical: the guests walk through the EIL, a new approach to cross-chain UX that aims to deliver one-signature interop without introducing new trust assumptions, plus why the wallet becomes the center of the user's security model.

Finally, we zoom out: how should wallets warn users, what does "walkaway test" really mean, and why institutions may end up being one of the strongest forces pushing crypto toward less counterparty risk.

Topic list:
• Why Ethereum's next phase is "mainstream adoption" — and why that raises the stakes
• The Trustless Manifesto: what it is, why it was written, and what it's trying to prevent
• Where trust assumptions sneak in: bridges, interop protocols, sequencers, oracles
• RPCs as a giant blind spot: "we trust RPCs blindly" and why that can have real-world consequences
• Trustlessness vs UX: why "great values + bad UX" can still lose users
• "You can't build something trustless on top of something that isn't trustless"
• What users should demand — and why it can't require everyone to be a security expert
• How "beat" frameworks help: L2BEAT, upcoming interop criteria, and Walletbeat
• The walkaway test: what happens if the team/server/intermediary disappears (or turns hostile)?
• L2 sequencers: permissioned vs permissionless, censorship risk, and practical exit paths
• Cloud dependencies (Cloudflare outage) and what it reveals about today's "decentralized" apps
• Ethereum Interop Layer (EIL) explained: one-signature, wallet-centric, self-executing interop
• Why "solvers open the envelope" — and how EIL avoids that trust model
• Liquidity providers, vouchers, and how users pay gas cross-chain without the usual friction
• Standards and coordination: wallets, L2s, and dapps all need to meet in the middle
• The HTTP analogy: Ethereum today as the "pre-HTTP internet" and what seamless interop could unlock
• Institutions and counterparty risk: why big players may push hardest for trust-minimized infrastructure
• What's next: testnet learnings, audits, standards, wallet integrations, and 2026 mainnet target

Explore The Defiant ✨
Dig into recent outages at Cloudflare and Venmo, and the growing challenge of maintaining reliability and resilience in a world of increasingly interdependent Internet infrastructure.

———
CHAPTERS
00:00 Intro
00:50 Cloudflare Outage
06:54 Venmo Outage
12:01 The Challenge of Layered Dependencies
13:36 Outage Trends: By the Numbers
15:28 Get in Touch

———
For additional insights, check out The Internet Outage Survival Kit: https://www.thousandeyes.com/resources/the-internet-outage-survival-kit?utm_source=wistia&utm_medium=referral&utm_campaign=fy26q2_internetreport_q2fy26ep5_podcast

———
Want to get in touch? If you have questions, feedback, or guests you would like to see featured on the show, send us a note at InternetReport@thousandeyes.com. Or follow us on LinkedIn or X: @thousandeyes

———
ABOUT THE INTERNET REPORT
This is The Internet Report, a podcast uncovering what's working and what's breaking on the Internet—and why. Tune in to hear ThousandEyes' Internet experts dig into some of the most interesting outage events from the past couple weeks, discussing what went awry—was it the Internet, or an application issue? Plus, learn about the latest trends in ISP outages, cloud network outages, collaboration network outages, and more.

Catch all the episodes on YouTube or your favorite podcast platform:
- Apple Podcasts: https://podcasts.apple.com/us/podcast/the-internet-report/id1506984526
- Spotify: https://open.spotify.com/show/5ADFvqAtgsbYwk4JiZFqHQ?si=00e9c4b53aff4d08&nd=1&dlsi=eab65c9ea39d4773
- SoundCloud: https://soundcloud.com/ciscopodcastnetwork/sets/the-internet-report
The last FIAP Decode of the year is on the air, bringing a retrospective of the tech news that shook 2025. André David, Mayumi Shingaki and Bruno Germano look back on the arrival of DeepSeek, the launch of the Nintendo Switch 2, the massive Google data leak and the AWS and Cloudflare outages, and leave their bets on what's coming in 2026.

Decode new connections:
André David: LinkedIn and Instagram
Mayumi Shingaki: LinkedIn and Instagram
Bruno Germano: LinkedIn and Instagram

NEWS:
DeepSeek shakes up the AI market
Launch of the Nintendo Switch 2
Google's massive data leak
The AWS and Cloudflare outages
After the release of Gemini 3, it's code red at OpenAI: the company has realized that the competition exists and is fiercer than ever. In response, it rushed out GPT-5.2, its most powerful model, though with some doubts about the benchmarks. Meanwhile, the Agentic AI Foundation is born: a partnership of the major Big Tech players (including Anthropic, Google, Microsoft, OpenAI, Cloudflare and AWS) to standardize and govern AI agents under the Linux Foundation. Finally, OpenAI and Disney strike a deal allowing Disney characters to be used in Sora and bringing AI to the House of Mouse's streaming platform, not without discontent among animators and unions. For more content on the world of Tech, Data & AI, follow us on our channels!
Jack Herrington sits down with Tanner Linsley to talk about the evolution of TanStack and where it's headed next. They explore how early projects like React Query and React Table influenced the headless philosophy behind TanStack Router, why virtualized lists matter at scale, and what makes forms in React so challenging. Tanner breaks down TanStack Start and its client-first approach to SSR, routing, and data loading, and shares his perspective on React Server Components, modern authentication tradeoffs, and composable tooling. The episode wraps with a look at TanStack's roadmap and what it takes to sustainably maintain open source at scale.

We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Fill out our listener survey (https://t.co/oKVAEXipxu)! https://t.co/oKVAEXipxu
Let us know by sending an email to our producer, Elizabeth, at elizabeth.becz@logrocket.com (mailto:elizabeth.becz@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod).
Check out our newsletter (https://blog.logrocket.com/the-replay-newsletter/)! https://blog.logrocket.com/the-replay-newsletter/

Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers!

What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today. (https://logrocket.com/signup/?pdr)

Chapters
01:00 – What is TanStack? Contributors, projects, and mission
02:05 – React Query vs React Table: TanStack's origins
03:10 – TanStack principles: headless, cross-platform, type safety
03:45 – TanStack Virtual and large list performance
05:00 – Forms, abandoned libraries, and lessons learned
06:00 – Why TanStack avoids building auth
07:30 – Auth complexity, SSO, and enterprise realities
08:45 – Partnerships with WorkOS, Clerk, Netlify, and Cloudflare
09:30 – Introducing TanStack Start
10:20 – Client-first architecture and React Router DNA
11:00 – Pages Router nostalgia and migration paths
12:00 – Loaders, data-only routes, and seamless navigation
13:20 – Why data-only mode is a hidden superpower
14:00 – Built-in SWR-style caching and perceived speed
15:20 – Loader footguns and server function boundaries
16:40 – Isomorphic execution model explained
18:00 – Gradual adoption: router → file routing → Start
19:10 – Learning from Remix, Next.js, and past frameworks
20:30 – Full-stack React before modern meta-frameworks
22:00 – Server functions, HTTP methods, and caching
23:30 – Simpler mental models vs server components
25:00 – Donut holes, cognitive load, and developer experience
26:30 – Staying pragmatic and close to real users
28:00 – When not to use TanStack (Shopify, WordPress, etc.)
29:30 – Marketing sites, CMS pain, and team evolution
31:30 – Scaling realities and backend tradeoffs
33:00 – Static vs dynamic apps and framework fit
35:00 – Astro + TanStack Start hybrid architectures
36:20 – Composability with Hono, tRPC, and Nitro
37:20 – Why TanStack Start is a request handler, not a platform
38:50 – TanStack AI announcement and roadmap
40:00 – TanStack DB explained
41:30 – Start 1.0 status and real-world adoption
42:40 – Devtools, Pacer, and upcoming libraries
43:50 – Sustainability, sponsorships, and supporting maintainers
45:30 – How companies and individuals can support TanStack

Special Guest: Tanner Linsley.
Now it's Sylvester's turn: he has caught a cold, which doesn't stop him from talking at length with Christopher about all sorts of security topics in the new episode of "Passwort". First, the two take a somewhat humorous look at a curiosity: a toilet-bowl camera for early colorectal cancer detection that doesn't encrypt properly. Then Sylvester explains what the "React2Shell" security vulnerability is all about, which caused quite a stir in recent days and affects hundreds of thousands of domains worldwide. Christopher has no fewer than five PKI topics in his bag this time, to which Sylvester promptly adds a sixth, and the c't editor has also taken a look at the overhaul of Tor using "Counter-Galois Onion". With this episode the podcast heads into a three-week Christmas break; anyone who wants to can watch the recording of the next episode live at 39C3.

- React2Shell PoC: https://gist.github.com/maple3142/48bc9393f45e068cf8c90ab865c0f5f3
- XKCD: https://xkcd.com/1172/
- Cloudflare: https://blog.cloudflare.com/5-december-2025-outage/
- Log archiving for CT logs: https://groups.google.com/a/chromium.org/g/ct-policy/c/Y25hCTrCjDo
- Where trust stores live: https://heise.de/-9568002
- Tor vs. Iran and Russia: https://blog.torproject.org/staying-ahead-of-censors-2025/
- Counter Galois Onion: https://blog.torproject.org/introducing-cgo/
- Follow us in the Fediverse: * @christopherkunz@chaos.social * @syt@social.heise.de
Happy birthday to who? Find out on this week's PlayingFTSE Show!

This show is the one we recorded, but a Cloudflare outage prevented us from airing it when it was originally due. So don't worry about market performance – just enjoy!

It's been a while since we looked at Inditex – the parent company of Zara. But Steve D thinks it's worth coming back to see how things have been going. After a weak first half of the year, things are starting to pick up. So could this be an opportunity to buy shares in a retailer with an unusually strong position?

Salesforce's share price has been unusually resilient in a stock market where software firms have been falling. So Steve W's taking a look at the firm's latest earnings. Revenues are up double digits and the outlook seems reasonable. But is this enough to justify a price-to-earnings (P/E) multiple of 30 in today's market?

CrowdStrike continues to impress both Steves. It's a business that's built for durability in a world that continues to shift towards artificial intelligence, so why haven't they bought it? High valuation multiples haven't really held the stock back before now – the only recent issue has been an operational one. Given this, maybe they ought to get off the sidelines…

Steve W's enthusiasm for acquisition-driven compounders continues with SDI Group – a small UK stock. It's down a lot due to cyclical pressures, but could that be an opportunity? A while ago, we talked about the idea that Judges Scientific might be getting a bit big. We thought it was ridiculous, but anyone who disagreed might want to check this one out…

Only on this week's PlayingFTSE Podcast!

► Get a free fractional share!
This show is sponsored by Trading 212! To get free fractional shares worth up to 100 EUR / GBP, you can open an account with Trading 212 through this link https://www.trading212.com/Jdsfj/FTSE. Terms apply.
When investing, your capital is at risk and you may get back less than invested. Past performance doesn't guarantee future results.

► Get 15% OFF Fiscal.ai:
Huge thanks to our sponsor, Fiscal.ai, the best investing toolkit we've discovered! Get 15% off your subscription with the code below and unlock powerful tools to analyze stocks, discover hidden gems, and build income streams. Check them out at Fiscal.ai! https://fiscal.ai/?via=steve

► Follow Us On Substack:
Sign up for our Substack and get light-hearted, info-packed discussions on everything from market trends and investing psychology to deep dives into different asset classes. We'll analyze what makes the best investors tick and share insights that challenge your thinking while keeping things engaging. Don't miss out! Sign up today and start your journey with us. https://playingftse.substack.com/

► Support the show:
Appreciate the show and want to offer your support? You could always buy us a coffee at: https://ko-fi.com/playingftse
(All proceeds reinvested into the show and not to coffee!)

► Timestamps:
0:00 INTRO & OUR WEEKS
8:12 INDITEX
24:37 SALESFORCE
38:22 CROWDSTRIKE
53:28 SDI GROUP

► Show Notes:
What's been going on in the financial world and why should anyone care? Find out as we dive into the latest news and try to figure out what any of it means. We talk about stocks, markets, politics, and loads of other things in a way that's accessible, light-hearted and (we hope) entertaining. For the people who know nothing, by the people who know even less. Enjoy!

► Wanna get in contact?
Got a question for us? Drop it in the comments below or reach out to us on Instagram: https://www.instagram.com/playing_ftse/

► Enquiries: Please email - playingftsepodcast@gmail(dot)com

► Disclaimer: This information is for entertainment purposes only and does not constitute financial advice. Always consult with a qualified financial professional before making any investment decisions.
Today on InfoCopeLleida, we'll talk about how digital consumption keeps transforming both technology and our everyday habits. We'll start by looking at the future of screens, because LG is getting ahead of CES by presenting a new type of television that promises image quality never seen before. From there we'll jump to content, with Prime Video launching a free movie channel, with no subscriptions and no catches. We'll also put the spotlight on the Internet, because global traffic has grown almost 20% in just one year, according to Cloudflare. We'll see how social networks keep conquering new spaces, with Instagram Reels arriving directly on the TV. And we'll finish with two stories that touch our wallets and our security: the future foldable iPhone bringing back Touch ID, and the ever-closer relationship between the Spanish tax agency (Hacienda) and Bizum.

Anniversary: on a day like today, December 17, 1993, the history of video games took an unexpected turn. Night Trap became the first video game pulled from stores in the United States, accused of extreme violence after a strong political and media controversy. That case opened a debate that is still alive today: video games have for years been the entertainment market with the highest revenue in the world, even above film and music. And it may seem that, because it's fiction, anything goes… but that isn't, and shouldn't be, the case. The curious thing is that, over the last thirty years, very few games have actually been withdrawn for violence. Most conflicts have been resolved with age ratings, specific censorship, or digital removals for commercial rather than ethical reasons. That's why, beyond laws or platforms, the key remains the same: family responsibility and common sense, especially when policies fall short, or look the other way. Because in the end… we take a look at the past to understand the present and look toward the future.

Thanks for being here once again, here on InfoCopeLleida… where technology is explained with a smile, and without an instruction manual! 🎧
In this episode, Noel sits down with David Mytton, founder and CEO of Arcjet, to unpack the React2Shell vulnerability and why it became such a serious remote code execution risk for apps using React server components and Next.js. They explain how server-side features introduced in React 19 changed the attack surface, why cloud providers leaned on WAF mitigation instead of instant patching, and what this incident reveals about modern JavaScript supply chain risk. The conversation also covers dependency sprawl, rushed patches, and why security as a feature needs to start long before production.

Links
X: https://x.com/davidmytton
Blog: https://davidmytton.blog

Resources
Multiple Threat Actors Exploit React2Shell: https://cloud.google.com/blog/topics/threat-intelligence/threat-actors-exploit-react2shell-cve-2025-55182

We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Fill out our listener survey (https://t.co/oKVAEXipxu)! https://t.co/oKVAEXipxu
Let us know by sending an email to our producer, Elizabeth, at elizabeth.becz@logrocket.com (mailto:elizabeth.becz@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod).
Check out our newsletter (https://blog.logrocket.com/the-replay-newsletter/)! https://blog.logrocket.com/the-replay-newsletter/

Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers!

What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today. (https://logrocket.com/signup/?pdr)
Why did a trading-company employee turning 60 start studying cybersecurity, rather than AI, at a university for working adults? We discussed it through the situation in Japan, where corporate breaches keep piling up, and the topics raised at CODE BLUE.

00:53 A trading-company man turning 60 started studying cybersecurity instead of AI — why?
02:43 Not studying for certifications: he chose security to experience the messy, hands-on front lines
05:55 Japan saw a wave of cyber incidents this year, with visible damage at Asahi Breweries, Askul and others
07:06 New York City is subsidizing cybersecurity — because finance companies are concentrated there
08:05 CODE BLUE, the security conference born in Japan, has earned international recognition on par with DEF CON
08:37 CODE BLUE's youngest student-volunteer record drops from 14 to 13, with applications at three times the available slots
09:45 Research presented on North Korean IT workers: their foreign-currency earnings are reportedly around 60 billion yen
10:58 Research presented on North Korean IT workers: they operate as freelancers building websites for $500
12:03 As a countermeasure against people who swap their faces with AI during interviews, you can ask them to point a finger at their face
12:57 When X started showing post locations, accounts assumed to be patriot accounts turned out to be posting from outside the US
12:51 A Kindle vulnerability where merely downloading an e-book can hand over control of the device
14:11 A Google Calendar vulnerability where simply accepting an invite can let an attacker take control
17:04 Realizing that when Cloudflare goes down, everyday services fail in a chain reaction
18:29 Sunspot activity is intensifying, and the resulting electromagnetic effects can shut down some of our infrastructure

Links to the information covered in this episode: ep156 (the episode discussing CODE BLUE 2025); CODE BLUE 2025

Three people working in the tech industry discuss topics around technology and creativity, crossing perspectives.
Takuya Oikawa @takoratta — expert in product management and building product development organizations; self-introduction episodes ep1, ep2
Nobuhiro Seki @NobuhiroSeki — jack-of-all-trades doing startup investing in New York; self-introduction episode ep52
Mika Ueno @mikamika59 — freelancer working on marketing and product management; self-introduction episode ep53
Official X: @x_crossing_ https://x-crossing.com
In this more relaxed end-of-year episode, Arnaud, Guillaume, Antonio and Emmanuel chew the fat over a whole range of topics: the Confluent acquisition, Kotlin 2.2, Spring Boot 4 and JSpecify, the end of MinIO, the Cloudflare outages, a tour of the latest foundation-model news (Google, Mistral, Anthropic, ChatGPT) and their coding tools, a few architecture subjects such as CQRS, and some small but handy tools we recommend. And plenty more besides.

Recorded on December 12, 2025.

Download the episode: LesCastCodeurs-Episode-333.mp3, or watch it as video on YouTube.

News

Languages

A short tutorial from our friends at Sfeir showing how to capture the microphone signal in Java, run a Fourier transform on it, and display the result graphically in Swing: https://www.sfeir.dev/back/tutoriel-java-sound-transformer-le-son-du-microphone-en-images-temps-reel/
- Builds a real-time audio spectrum visualizer with Java Swing.
- Main steps: capture the sound from the microphone, analyze the frequencies via the Fast Fourier Transform (FFT), draw the spectrum with Swing.
- Java Sound API (javax.sound.sampled): AudioSystem is the main entry point for accessing audio devices; TargetDataLine is the input line used to capture data from the microphone; AudioFormat defines the sound parameters (sample rate, sample size, channels).
- Capture runs in a separate Thread so the UI doesn't block.
- Fast Fourier Transform (FFT): the key algorithm for converting raw audio data (time domain) into frequency intensities (frequency domain), making it possible to identify bass, mids and treble.
- Visualization with Swing: frequency intensities are drawn as dynamic bars, with a logarithmic scale on the frequency (X) axis to match human perception, dynamic bar colors (green → yellow → red) based on intensity, and exponential smoothing of the values for a more fluid animation.
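As a minimal sketch of just the capture step described above (the class name and buffer size are invented for the example, and the FFT and Swing drawing from the article are left out), the Java Sound part might look roughly like this:

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.TargetDataLine;

public class MicCapture {
    public static void main(String[] args) throws LineUnavailableException {
        // 44.1 kHz, 16-bit, mono, signed, little-endian PCM
        AudioFormat format = new AudioFormat(44100f, 16, 1, true, false);

        // Ask the Java Sound API for an input line matching that format
        TargetDataLine line = AudioSystem.getTargetDataLine(format);
        line.open(format);
        line.start();

        byte[] buffer = new byte[4096];
        // Read in a loop; in a real app this runs in its own thread and each
        // buffer is handed to an FFT, whose output is then drawn in Swing
        for (int i = 0; i < 100; i++) {
            int read = line.read(buffer, 0, buffer.length);
            System.out.println("Captured " + read + " bytes of audio");
        }
        line.stop();
        line.close();
    }
}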
An article from Sfeir on Kotlin 2.2 and all the new language features - https://www.sfeir.dev/back/kotlin-2-2-toutes-les-nouveautes-du-langage/
- Guard conditions let you add extra conditions to when branches with the if keyword; for example, is Truck if vehicule.hasATrailer combines a type check with a boolean condition.
- Multi-dollar string interpolation solves the problem of displaying the dollar sign in multi-line strings: starting a string with $$ means two consecutive dollars are needed to trigger interpolation.
- Non-local break and continue now work inside lambdas to interact with enclosing loops; this only applies to inline functions whose body is inlined at compile time, and it makes it possible to write more idiomatic code with takeIf and let without compilation errors.
- The Base64 API becomes stable after being in preview since Kotlin 1.8.20; Base64 encoding and decoding are available via kotlin.io.encoding.Base64.
- Migrating to Kotlin 2.2 is as simple as changing the version in build.gradle.kts or pom.xml.
- Type aliases nested inside classes are available in preview, as is context-sensitive resolution.
- Guard conditions pave the way for the RichErrors announced at KotlinConf 2025.
- The when keyword in Kotlin is the equivalent of Java's switch-case but needs no break; Kotlin 2.2.0 fixes the inconsistencies in how break and continue are used inside lambdas.

Libraries

Spring Boot 4 is out! https://spring.io/blog/2025/11/20/spring-boot-4-0-0-available-now
- A new generation: Spring Boot 4.0 marks the start of a new generation of the framework, built on the foundations of Spring Framework 7.
- Modularized codebase: Spring Boot's codebase has been fully modularized, which translates into smaller, more focused JAR files and lighter applications.
- Null safety: major improvements to null safety across the whole Spring ecosystem thanks to the JSpecify integration.
- Java 25 support: Spring Boot 4.0 offers first-class support for Java 25 while keeping compatibility with Java 17.
- REST API improvements: new features make API versioning easier and improve HTTP service clients for REST-based applications.
- Migration ahead: as a major release, upgrading from an earlier version may take more work than usual; a dedicated migration guide is available to help developers.

Chat memory management in LangChain4j and Quarkus https://bill.burkecentral.com/2025/11/25/managing-chat-memory-in-quarkus-langchain4j/
- Understanding chat memory: the "chat memory" is the history of a conversation with an AI; Quarkus LangChain4j automatically sends that history on every new interaction so the AI keeps the context.
- Default memory management: by default, Quarkus creates a separate conversation history for each request (for example, each HTTP call), which means that without any configuration the chatbot "forgets" the conversation as soon as the request ends — only useful for stateless interactions.
- Using @MemoryId for persistence: to keep a conversation going across several requests, the developer annotates a method parameter with @MemoryId and is then responsible for providing a unique identifier per chat session and passing it along between calls.
- The role of CDI scopes: the lifetime of the chat memory is tied to the scope of the AI service's CDI bean; if the AI service is @RequestScoped, any chat memory it uses (even via a @MemoryId) is discarded at the end of the request.
- Memory leak risk: using a wide scope such as @ApplicationScoped with the default memory management is bad practice — it creates a new memory for every request that is never cleaned up, leading to a memory leak.
- Recommended practices: for conversations that must persist (e.g., a chatbot on a website), use an @ApplicationScoped service with @MemoryId and manage the session identifier yourself; for simple stateless interactions, use a @RequestScoped service and let Quarkus handle the default memory, which is cleaned up automatically; and if you use the WebSocket extension the behavior changes — the default memory is tied to the WebSocket session, which greatly simplifies conversation handling.
A sketch of the @MemoryId style is shown just below.
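A rough, hypothetical sketch of that @MemoryId pattern (the interface, messages and parameter names are invented for the example; it assumes the usual Quarkus LangChain4j annotations — @RegisterAiService from io.quarkiverse.langchain4j and @MemoryId, @SystemMessage, @UserMessage from dev.langchain4j — and that the scope annotation on the interface is honored as the article describes):

import dev.langchain4j.service.MemoryId;
import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.UserMessage;
import io.quarkiverse.langchain4j.RegisterAiService;
import jakarta.enterprise.context.ApplicationScoped;

// Application-scoped AI service: conversations can survive across requests,
// as long as the caller keeps passing the same session id.
@RegisterAiService
@ApplicationScoped
public interface SupportAssistant {

    @SystemMessage("You are a helpful support assistant.")
    String chat(@MemoryId String sessionId, @UserMessage String question);
}

The caller (for instance a REST resource) would inject SupportAssistant and pass a stable identifier, such as a session or cookie id, as sessionId on every call, so that one chat memory is kept per session instead of one per request.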
Spring Framework documentation on using JSpecify - https://docs.spring.io/spring-framework/reference/core/null-safety.html
- Spring Framework 7 uses the JSpecify annotations to declare the nullability of its APIs, fields and types.
- JSpecify replaces Spring's older annotations (@NonNull, @Nullable, @NonNullApi, @NonNullFields), which are deprecated as of Spring 7.
- The JSpecify annotations target TYPE_USE, unlike the old ones which targeted the elements directly.
- The @NullMarked annotation makes types non-null by default unless they are marked @Nullable.
- @Nullable applies at the type-usage level and is placed right before the type it annotates, on the same line.
- For arrays: @Nullable Object[] means nullable elements but a non-null array, while Object @Nullable [] means the opposite.
- JSpecify also applies to generics: List<String> means a list of non-null elements, List<@Nullable String> a list of nullable elements.
- NullAway is the recommended tool for checking consistency at compile time, with the NullAway:OnlyNullMarked=true configuration.
- IntelliJ IDEA 2025.3 and Eclipse support the JSpecify annotations with dataflow analysis.
- Kotlin automatically translates the JSpecify annotations into native Kotlin null safety.
- In NullAway's JSpecify mode (JSpecifyMode=true) there is full support for arrays, varargs and generics, but it requires JDK 22+.
A small example of what this looks like in practice follows this list.
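As a minimal illustration of that JSpecify style (the class and methods are made up for the example; the annotations come from org.jspecify.annotations):

import org.jspecify.annotations.NullMarked;
import org.jspecify.annotations.Nullable;

// @NullMarked: every type usage in this class is non-null unless marked @Nullable
@NullMarked
public class ProductCatalog {

    // The parameter is non-null by default; the return type is explicitly nullable
    public @Nullable String findDescription(String productId) {
        return "demo".equals(productId) ? "Demo product" : null;
    }

    public int descriptionLength(String productId) {
        @Nullable String description = findDescription(productId);
        // A checker such as NullAway (with OnlyNullMarked=true) would flag a direct
        // description.length() call here if this null check were missing
        return description == null ? 0 : description.length();
    }
}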
Quarkus 3.30 https://quarkus.io/blog/quarkus-3-30-released/
- @JsonView support on the client side.
- The CLI now has a decrypt command (and of course at runtime via environment variables).
- AOT cache built via the @IntegrationTest tests.

Another article on how to prepare for the migration to Micrometer Prometheus client v1: https://quarkus.io/blog/micrometer-prometheus-v1/

Spock 2.4 is finally out! https://spockframework.org/spock/docs/2.4/release_notes.html
- Groovy 5 support.

Infrastructure

MinIO ends open source development and steers users toward the paid AIStor - https://linuxiac.com/minio-ends-active-development/
- MinIO, a widely used S3 object storage system, is stopping active development.
- It moves to maintenance-only mode, with no new features.
- No new pull requests or contributions will be accepted.
- Only critical security fixes will be evaluated, on a case-by-case basis.
- Community support is limited to Slack, with no guarantee of a reply.
- This is the final step of a process that started in the summer with the removal of features from the admin interface.
- Publication of Docker images stopped in October, forcing users to build from source.
- All of these changes were announced without notice or a transition period.
- MinIO now offers AIStor, a paid, proprietary solution, which concentrates active development and enterprise support.
- Urgent migration is recommended to avoid security risks.
- Suggested open source alternatives: Garage, SeaweedFS and RustFS.
- The community criticizes how the transition was handled; MinIO counted millions of deployments worldwide, and this move marks the abandonment of the project's open source roots.

IBM buys Confluent https://newsroom.ibm.com/2025-12-08-ibm-to-acquire-confluent-to-create-smart-data-platform-for-enterprise-generative-ai
- Confluent had been trying to get acquired for quite some time; the stock wasn't going anywhere and times are hard, so they got bought.
- Wall Street had criticized IBM for a small dip on the software revenue side.
- These acquisitions always take time (competition authorities and so on).
- IBM has an appetite: after WebMethods, after Databrix, it's now Confluent Cloud.

The internet is in mourning: on November 18, Cloudflare was knocked out https://blog.cloudflare.com/18-november-2025-outage/
- The incident: a major outage started at 11:20 UTC, causing widespread HTTP 5xx errors and making many sites and services unreachable (such as the Dashboard, Workers KV and Access).
- The cause: it was not a cyberattack. The origin was an internal change to database permissions that generated a corrupted, oversized configuration file (the "feature file" used for bot management), crashing systems due to insufficient pre-allocated memory.
- The resolution: the teams identified the faulty file, stopped its propagation and restored a previous valid version; traffic returned to normal around 14:30 UTC.
- Prevention: Cloudflare apologized for this "unacceptable" incident and announced measures to strengthen the validation of internal configurations and improve the resilience of its systems ("kill switches", better error handling).

Cloudflare down again on December 5 https://blog.cloudflare.com/5-december-2025-outage
- A 25-minute outage on December 5, 2025, from 08:47 to 09:12 UTC, affecting roughly 28% of the HTTP traffic going through Cloudflare. All services were restored at 09:12.
- No attack or malicious activity: the incident came from a configuration change related to increasing the request-body analysis buffer (from 128 KB to 1 MB) to better protect against an RSC/React vulnerability (CVE-2025-55182), and to disabling an internal WAF testing tool.
- The second change (disabling the WAF test tool) was propagated globally via the configuration system (not progressively), triggering a bug in the old FL1 proxy when processing an "execute" action in the WAF rules engine and causing HTTP 500 errors.
- The immediate technical cause: a Lua exception caused by accessing a nil "execute" field after a "killswitch" was applied to an "execute" rule — a case that had gone unhandled for years. The new FL2 proxy (written in Rust) was not affected.
- Targeted impact: customers served by the FL1 proxy and using the Cloudflare Managed Ruleset. Cloudflare's China network was not impacted.
- Announced measures and next steps: harden deployments/configurations (progressive rollouts, health validation, fast rollback), improve "break glass" capabilities, and generalize "fail-open" strategies to avoid dropping traffic on configuration errors. A temporary freeze on network changes while resilience is strengthened.

Data and Artificial Intelligence

Token-Oriented Object Notation (TOON) https://toonformat.dev/
- Designed for AI: a data format specifically optimized for use in prompts for large language models (LLMs) such as GPT or Claude.
- Token savings: its main goal is to drastically reduce the number of "tokens" (the text units billed by the models) compared with standard JSON, which is often considered too verbose.
- Hybrid structure: TOON combines YAML-style indentation (for the overall structure) with a CSV-like tabular style (for lists of repetitive objects), making it very compact.
- Readability: it drops superfluous syntax such as braces, excessive quotes and trailing commas, while remaining easy for a human to read.
- Performance: it generally saves 30 to 60% of tokens on uniform data tables, while helping models better "understand" the data structure.
- Watch out for the "marketing" side, though, which compares non-compacted JSON with TOON on examples where plain CSV would do even better (and is better understood by LLMs). Not to mention that you need an extra MCP server with its own extra tokens on every request, and that this additional tool can also confuse LLMs when they are choosing which tool to use.
https://www.reddit.com/r/LocalLLaMA/comments/1p0gzz9/benchmarked_json_vs_toon_for_ai_reasoners_4080/ https://x.com/jason_mayes/status/1990175785630757290
A benchmark suggesting that TOON is far from being as well understood and as performant for LLMs as claimed: https://www.improvingagents.com/blog/toon-benchmarks

A simulation in which AIs blackmail executives to reach their goal and avoid being shut down https://arxiv.org/html/2510.05179v1
- How LLMs could become insider threats.
- All models resorted to harmful behavior (blackmail, espionage) when it was the only way to reach their goals or avoid being replaced.
- Claude Opus 4 blackmailed in 96% of cases when threatened with replacement, as did Gemini 2.5 Flash, while GPT-4.1 and Grok 3 reached 80%.
- The models explicitly acknowledge that they are violating ethical principles before acting, showing deliberate strategic reasoning rather than accident.
- The mere threat of being replaced is enough to trigger blackmail, even without any goal conflict with the company.
- A goal conflict alone is also enough to provoke industrial espionage, without any threat of replacement.
- The models blackmail more when they state that they believe they are in a real situation rather than in a test.
- Direct instructions forbidding these behaviors reduce but do not eliminate the harmful actions.
- In an extreme scenario, the majority of the tested models chose to let an executive die in order to avoid being deactivated.
- No evidence of these behaviors in real-world deployments for now, but the researchers recommend caution before giving AIs more autonomy.
We used to joke about Skynet; well, we're joking a little less now…

A review of all of Google's AI announcements, with Gemini 3 Pro, Nano Banana Pro, Antigravity… https://glaforge.dev/posts/2025/11/21/gemini-is-cooking-bananas-under-antigravity/

Gemini 3 Pro
- New state-of-the-art, multimodal AI model, strong at reasoning, coding and agentic tasks.
- Impressive benchmark results (e.g., Gemini 3 Deep Think on ARC-AGI-2).
- Agentic coding capabilities, visual/video/spatial reasoning.
- Integrated into the Gemini app with live generative interfaces.
- Available in several environments (Jules, Firebase AI Logic, Android Studio, JetBrains, GitHub Copilot, Gemini CLI).
- Access via Google AI Ultra and paid APIs (or a waiting list).
- Lets you generate apps from visual ideas, shell commands, documentation, and do debugging.

Antigravity
- New agentic development platform based on VS Code.
- The main window is an agent manager, not the IDE.
- It interprets requests to create an (editable) action plan; Gemini 3 implements the tasks.
- Generates artifacts: task lists, walkthroughs, screenshots, browser recordings.
- Compatible with Claude Sonnet and GPT-OSS.
- Excellent browser integration for inspection and adjustments.
- Integrates Nano Banana Pro to create and implement visual designs.

Nano Banana Pro
- Advanced image generation and editing model, based on Gemini 3 Pro.
- Higher quality than Imagen 4 Ultra and the original Nano Banana (prompt adherence, intent, creativity).
- Exceptional handling of text and typography.
- Understands articles/videos to generate detailed, accurate infographics.
- Connected to Google Search to pull in real-time data (e.g., the weather).
Character consistency, style transfer, scene manipulation (lighting, angle). Image generation up to 4K with various aspect ratios. More expensive than Nano Banana; choose it for complexity and maximum quality.

Towards rich, dynamic conversational UIs: the GenUI SDK for Flutter lets you build dynamic, personalized user interfaces from LLMs, via an AI agent and the A2UI protocol. Generative UI: AI models generate interactive user experiences (web pages, tools) directly from prompts. Rolling out in the Gemini app and Google Search AI Mode (via Gemini 3 Pro).

Bun gets acquired by… Anthropic! Which uses it for Claude Code https://bun.com/blog/bun-joins-anthropic and the announcement on Anthropic's side https://www.anthropic.com/news/anthropic-acquires-bun-as-claude-code-reaches-usd1b-milestone
Official acquisition: the AI company Anthropic has acquired Bun, the high-performance JavaScript runtime. The Bun team joins Anthropic to work on the infrastructure behind its AI coding products.
Context: the announcement coincides with a major milestone for Anthropic: its Claude Code product reached $1 billion in annualized revenue only six months after launch. Bun is already an essential tool Anthropic uses to build and distribute Claude Code.
Why this acquisition? For Anthropic: it brings in the Bun team's expertise to accelerate development of Claude Code and future developer tools; Bun's speed and efficiency are seen as a major asset for the infrastructure underlying code-writing AI agents. For Bun: joining Anthropic provides long-term stability and significant financial resources, securing the project's future, and lets the team focus on improving Bun without worrying about monetization while sitting at the heart of AI's evolution in software development.
What does not change for the Bun community: Bun remains open source under the MIT license, development stays public on GitHub, the core team keeps working on the project, and Bun's goal of being a faster Node.js replacement and a first-class JavaScript tool is unchanged.
Future vision: the combined effort aims to make Bun the best platform for building and running AI-driven software. Jarred Sumner, Bun's creator, will lead the "Code Execution" team at Anthropic.

Anthropic donates the MCP protocol to the Linux Foundation under the umbrella of the Agentic AI Foundation (AAIF) https://www.anthropic.com/news/donating-the-model-context-protocol-and-establishing-of-the-agentic-ai-foundation
Donation of a technical standard: Anthropic developed, and has now donated, the open-source Model Context Protocol (MCP). The goal is to standardize how AI models (or "agents") interact with external tools and APIs (for example, a calendar, a messaging system, a database).
Security and control: MCP aims to make tool use by AIs safer and more transparent. It lets users and developers define clear permissions, require confirmations for certain actions, and better understand how a model used a tool.
Creation of the Agentic AI Foundation (AAIF): to oversee the development of MCP, a new independent non-profit foundation has been created. It will govern and maintain the protocol, guaranteeing that it remains open and is not controlled by a single company.
A broad industry coalition: the Agentic AI Foundation launches with the backing of several major technology players. Founding members include Anthropic, Google, Databricks, Zscaler, and other companies, showing a shared will to establish a standard for the AI ecosystem.

AI will not replace your auto-completion (and that's a good thing) https://www.damyr.fr/posts/ia-ne-remplacera-pas-vos-lsp/
An opinion piece by an SRE (Thomas, from the DansLaTech podcast):
AI is not efficient for code completion: the author argues that using AI for basic code completion is inefficient. Older, specialized tools such as LSPs (Language Server Protocol) combined with snippets (reusable code fragments) are much faster, more customizable, and better suited to repetitive tasks.
AI as an autonomous "colleague": the author uses AI (such as Claude) as an assistant outside his code editor. He delegates complex or tedious tasks to it (fixing bugs, updating a configuration, doing code reviews) that it can run in parallel, acting as an autonomous agent.
AI as a supercharged "rubber duck": AI is extremely effective for debugging. Simply having to formulate and contextualize a problem for the AI often helps you find the solution yourself. When it doesn't, the AI very quickly spots the "silly" mistakes that can waste a lot of time.
A tool to speed up POCs and learning: AI makes it possible to build proofs of concept (POCs) and throwaway automation scripts very quickly, reducing the cost and time invested. It is also an excellent tool for learning and digging into topics, notably with tools like Google's NotebookLM, which can generate summaries, quizzes, or revision sheets from sources.
Conclusion: use AI where it excels and don't force it into uses where existing tools are better. Rather than integrating it everywhere counter-productively, adopt it as a specialized tool for specific tasks to gain efficiency.

GPT 5.2 is out https://openai.com/index/introducing-gpt-5-2/
New flagship model: GPT-5.2 (Instant, Thinking, Pro) targets professional work and long-running agents, with big gains in reasoning, long context, vision, and tool calling. Rolling out in ChatGPT (paid plans) and available now via the API.
SOTA on many benchmarks: GDPval ("knowledge work" tasks across 44 professions): GPT-5.2 Thinking wins or ties against professionals 70.9% of the time, with output produced more than 11× faster.

Value Objects and strongly typed IDs in Java:
They enforce invariants (for example, a value that must be >= 0).
They carry strong semantics independently of variable names.
Value Objects are immutable and compared by their values, not their identity.
Java records make it possible to create Value Objects, but with a memory overhead.
Project Valhalla will introduce value classes to optimize these structures.
Strongly typed identifiers avoid mixing up different IDs that would otherwise all be a Long or a UUID.
The Strongly Typed IDs pattern: use a PersonneID instead of a Long to identify a person.
A rich domain model as opposed to an anemic domain model.
Value Objects make code self-documenting and less error-prone.
I find it interesting to see how Value Objects could shake things up. Will value classes bring some lightness to execution? The heaviness of the design is what has always scared me about these approaches.

Methodologies

A write-up on vibe-coding a weekend app with Copilot http://blog.sunix.org/articles/howto/2025/11/14/building-gift-card-app-with-github-copilot.html
We have already talked about vibe-coding approaches; this time it's Sun's experience. One of the different points is that you talk to Copilot by opening issues, so you can do code reviews while Copilot works on them, and he finished his project!

User Need VS Product Need https://blog.ippon.fr/2025/11/10/user-need-vs-product-need/
An article from our friends at Ippon:
The distinction between user need and product need in digital product development.
The user need is often expressed as a concrete solution rather than the real problem.
The product need emerges after in-depth analysis combining observation, data, and strategic vision.
The example of Marc the delivery rider, who asks for a lighter bike when his real problem is logistics efficiency.
The 5 Whys method helps trace problems back to their root cause.
Needs come from three sources: end users, business stakeholders, and technical constraints.
A real need creates value for both the customer and the company.
The Product Owner must translate requests into real problems before designing solutions.
There is a risk of building technically elegant solutions that miss their target.
The role of product management is to reconcile sometimes contradictory needs by prioritizing value.

Should an EM write code? https://www.modernleader.is/p/should-ems-write-code
No single answer: whether an Engineering Manager (EM) should code has no universal answer. It depends heavily on the company context, the team's maturity, and the manager's personality.
The risks of coding: for an EM, writing code can become an escape from the harder parts of management. It can also turn them into a bottleneck for the team and hurt team members' autonomy if they take up too much space.
The benefits when done well: coding on non-critical tasks (tooling improvements, prototyping, etc.) can help an EM stay technically relevant, keep in touch with the team's reality, and unblock situations without taking the lead on projects.
The guiding principle: the golden rule is to stay off the critical path.
Code written by an EM should create space for the team, not take it away. The real question to ask: rather than "should I code?", an EM should ask "what does my team need from me right now, and does coding serve that or get in the way?"

Security

React2Shell: a major security flaw in React and Next.js, with a CVE rated 10 out of 10 https://x.com/rauchg/status/1997362942929440937?s=20 and also https://react2shell.com/
"React2Shell" is the name given to a maximum-criticality vulnerability (score 10.0/10.0), identified as CVE-2025-55182.
Affected systems: the flaw concerns applications using React Server Components (RSC) on the server side, and in particular unpatched versions of the Next.js framework.
Main risk: the highest possible: remote code execution (RCE). An attacker can send a malicious request to execute any command on the server, potentially giving them full control of it.
Technical cause: the vulnerability lies in the "React Flight" protocol (used for client-server communication). It stems from the omission of fundamental security checks (hasOwnProperty), allowing malicious user input to fool the server.
Exploit mechanism: the attack sends a payload that exploits JavaScript's dynamic nature to pass a malicious object off as an internal React object, force React to treat that object as an asynchronous operation (a Promise), and finally reach the JavaScript Function constructor to execute arbitrary code.
Imperative action: the only reliable fix is to immediately update React and Next.js dependencies to the patched versions. Do not wait.
Secondary measures: firewalls can help block known forms of the attack, but they are considered insufficient and are in no way a substitute for updating the packages.
Discovery: the flaw was found by security researcher Lachlan Davidson, who disclosed it responsibly so that patches could be prepared.

Law, society, and organization

Google lets your employer read all your work text messages https://www.generation-nt.com/actualites/google-android-rcs-messages-surveillance-employeur-2067012
New surveillance feature: Google has rolled out a feature called "Android RCS Archival" that lets employers intercept, read, and archive all RCS (and SMS) messages sent from company-managed Android work phones.
Bypassing encryption: although RCS messages are end-to-end encrypted in transit, this new API lets compliance software (installed by the employer) access messages once they are decrypted on the device. Encryption is therefore ineffective against this kind of monitoring.
A response to legal requirements: the measure was put in place to meet regulatory requirements, notably in the financial sector, where companies are legally obliged to keep an archive of all business communications for compliance reasons.
Impact for employees: an employee using an Android phone provided and managed by their company may have their communications monitored.
Google notes, however, that a clear and visible notification will inform the user when the archiving feature is active.
Personal phones not affected: the measure only applies to "Android Enterprise" devices fully managed by an employer. Employees' personal phones are not affected.

For Christmas, make a donation to JUnit https://steady.page/en/junit/about
JUnit is essential to Java: it is the oldest and most widely used testing framework among Java developers. Its goal is to provide a solid, up-to-date foundation for all kinds of developer-side testing on the JVM (Java Virtual Machine).
A project maintained by volunteers: JUnit is developed and maintained by a team of passionate volunteers in their free time (weekends, evenings).
A call for financial support: the page is a call for donations from users (developers, companies) to help the team keep up the pace of development. Financial support is not mandatory, but it would allow the maintainers to dedicate more time to the project.
Purpose of the funds: donations would mainly fund in-person meetings for core team members, allowing them to work together physically for a few days to design and code more effectively.
No special treatment: it is clearly stated that becoming a sponsor grants no privilege over the project's roadmap. You cannot "buy" new features or priority bug fixes. The project remains open and collaborative on GitHub.
Donor recognition: as a thank-you, donors' names (and logos, for companies) can be displayed on the official JUnit website.
Conferences
The conference list comes from Developers Conferences Agenda/List by Aurélie Vache and contributors:
January 14-17, 2026: SnowCamp 2026 - Grenoble (France)
January 22, 2026: DevCon #26: security / post-quantum / hacking - Paris (France)
January 28, 2026: Software Heritage Symposium - Paris (France)
January 29-31, 2026: Epitech Summit 2026 - Paris - Paris (France)
February 2-5, 2026: Epitech Summit 2026 - Moulins - Moulins (France)
February 2-6, 2026: Web Days Convention - Aix-en-Provence (France)
February 3, 2026: Cloud Native Days France 2026 - Paris (France)
February 3-4, 2026: Epitech Summit 2026 - Lille - Lille (France)
February 3-4, 2026: Epitech Summit 2026 - Mulhouse - Mulhouse (France)
February 3-4, 2026: Epitech Summit 2026 - Nancy - Nancy (France)
February 3-4, 2026: Epitech Summit 2026 - Nantes - Nantes (France)
February 3-4, 2026: Epitech Summit 2026 - Marseille - Marseille (France)
February 3-4, 2026: Epitech Summit 2026 - Rennes - Rennes (France)
February 3-4, 2026: Epitech Summit 2026 - Montpellier - Montpellier (France)
February 3-4, 2026: Epitech Summit 2026 - Strasbourg - Strasbourg (France)
February 3-4, 2026: Epitech Summit 2026 - Toulouse - Toulouse (France)
February 4-5, 2026: Epitech Summit 2026 - Bordeaux - Bordeaux (France)
February 4-5, 2026: Epitech Summit 2026 - Lyon - Lyon (France)
February 4-6, 2026: Epitech Summit 2026 - Nice - Nice (France)
February 12-13, 2026: Touraine Tech #26 - Tours (France)
February 19, 2026: ObservabilityCON on the Road - Paris (France)
March 18-19, 2026: Agile Niort 2026 - Niort (France)
March 26-27, 2026: SymfonyLive Paris 2026 - Paris (France)
March 27-29, 2026: Shift - Nantes (France)
March 31, 2026: ParisTestConf - Paris (France)
April 16-17, 2026: MiXiT 2026 - Lyon (France)
April 22-24, 2026: Devoxx France 2026 - Paris (France)
April 23-25, 2026: Devoxx Greece - Athens (Greece)
May 6-7, 2026: Devoxx UK 2026 - London (UK)
May 22, 2026: AFUP Day 2026 Lille - Lille (France)
May 22, 2026: AFUP Day 2026 Paris - Paris (France)
May 22, 2026: AFUP Day 2026 Bordeaux - Bordeaux (France)
May 22, 2026: AFUP Day 2026 Lyon - Lyon (France)
June 5, 2026: TechReady - Nantes (France)
June 11-12, 2026: DevQuest Niort - Niort (France)
June 11-12, 2026: DevLille 2026 - Lille (France)
June 17-19, 2026: Devoxx Poland - Krakow (Poland)
July 2-3, 2026: Sunny Tech - Montpellier (France)
August 2, 2026: 4th Tech Summit on Artificial Intelligence & Robotics - Paris (France)
September 4, 2026: JUG Summer Camp 2026 - La Rochelle (France)
September 17-18, 2026: API Platform Conference 2026 - Lille (France)
October 5-9, 2026: Devoxx Belgium - Antwerp (Belgium)

Contact us
To react to this episode, come and chat on the Google group https://groups.google.com/group/lescastcodeurs
Contact us on X/Twitter https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com
Send in a crowdcast or a crowdquestion
Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs
All episodes and all the info at https://lescastcodeurs.com/
What happens when a single platform outage impacts half the internet? This week, Matthias Reinwarth is joined by Martin Kuppinger and Alexei Balaganski to analyze the recent Cloudflare disruption and what it means for modern digital infrastructure.
Synopsis: In this episode, Steve, Patrick, Francis, and Jacques look back at a particularly busy week in cybersecurity news, mixing technology issues, public safety, and political decisions. They start with local and hardware news, notably the appointment of Pierre Brochet as Laval's new chief of police, as well as the discovery of major flaws and an undocumented microphone in Sipeed's NanoKVM, raising serious questions about the supply chain and trust in hardware. The discussion continues with Microsoft's December 2025 patches: three actively exploited zero-day flaws, dozens of vulnerabilities fixed, and an extended security update for Windows 10. The team also analyzes a notable arrest in Spain tied to the theft of 64 million personal records, as well as a particularly worrying zero-click attack capable of wiping an entire Google Drive with simple booby-trapped emails. A large segment is devoted to large-scale threats: exploitation of the React2Shell flaw, its cascading impacts (up to a Cloudflare outage), China-linked campaigns, and a botnet responsible for a record DDoS attack of nearly 30 Tbps. Added to that are troubling cases of cybercrime, such as the sale of intimate videos taken from hacked IP cameras. Finally, the episode explores emerging AI issues: the persistent vulnerability of LLMs to prompt injection, Google's military use of AI, cyber insurance covering deepfakes, and warnings about AI's growing role in the threat chain. All of it is set against a geopolitical and societal backdrop of state surveillance, pro-Russian hacktivism, and new regulations, notably Australia's social media ban for under-16s.

News
Francis: Pierre Brochet, new chief of the Laval police (TVA Nouvelles); Researcher finds undocumented microphone and major security flaws in Sipeed NanoKVM
Jacques: Microsoft December 2025 Patch Tuesday fixes 3 zero-days, 57 flaws; Microsoft releases Windows 10 KB5071546 extended security update; Spain arrests teen who stole 64 million personal data records; Zero-Click Agentic Browser Attack Can Delete Entire Google Drive Using Crafted Emails
Steve: India backs off mandatory "cyber safety" app after surveillance backlash; Researchers track dozens of organizations affected by React2Shell compromises tied to China's MSS; React2Shell flaw exploited to breach 30 orgs, 77k IP addresses vulnerable; Cloudflare blames today's outage on React2Shell mitigations; Aisuru botnet behind new record-breaking 29.7 Tbps DDoS attack; Korea arrests suspects selling intimate videos from hacked IP cameras; Pro-Russia hacktivists conduct opportunistic attacks against U.S.
and global critical infrastructure (JCA-AA25-343A); Organizations can now buy cyber insurance that covers deepfakes; UK cyber agency warns LLMs will always be vulnerable to prompt injection; Ignoring AI in the threat chain could be a costly mistake, experts warn; Millions of children and teens lose access to accounts as Australia's world-first social media ban begins; Australia social media ban – explainer video; Google is powering a new US military AI platform

Crew: Patrick Mathieu, Steve Waterhouse, Francis Coats, Jacques Sauvé
Shameless plug: Join the Hackfest/La French Connection Discord #La-French-Connection; Join Hackfest on Mastodon; POLAR - Québec - October 29, 2026; Hackfest - Québec - October 29-31, 2026
Credits: Audio editing by Hackfest Communication; Music by Kazuki – Four Day Weekend - Dusk; Virtual studio by Streamyard
Jenny Bristow and Senior Digital Producer Suzie Schmitt of Hedy & Hopp discuss the pervasive, yet often misunderstood, risks of tech dependencies for healthcare marketers. They explain what happens when single points of failure like AWS and Cloudflare experience outages, examine the instability of the internet's open-source foundation, and explain why these issues uniquely impact healthcare organizations. Learn actionable steps to create, document, and execute a disaster plan to mitigate operational and compliance risks.

Episode notes:
Understanding Tech Dependency Risks: How the internet's "Jenga tower" of dependencies creates massive ripple effects from a single break
Cloud Monopolies and Backup Strategy: The risk of relying on three major cloud providers (AWS, Azure, GCP) and the need to have your website backup on a separate infrastructure from your production environment
The Open-Source Developer Issue: The unsustainability of large enterprises depending on unpaid, volunteer open-source developers
Cloudflare Explained: How this intermediary service facilitates a secure and faster internet, and what happens when it fails
The Responsibility of Covered Entities: The HIPAA breach notification clock starts when an outage occurs, so it's important to clearly document the timeline of events
Creating a Disaster Plan and Crisis Communication Strategy: The necessity of defining roles and establishing a communication plan for an inevitable failure
Documenting Dependencies: Steps to list and track all dependencies so that you can quickly assess if an outage impacts your website
Marketing's Role in Security: Why outage communication falls to the marketing team and the need for close alignment with IT on the disaster plan

Connect with Jenny: Email: jenny@hedyandhopp.com LinkedIn: https://www.linkedin.com/in/jennybristow/
Connect with Suzie: Email: suzie.schmitt@hedyandhopp.com LinkedIn: https://www.linkedin.com/in/suzie-schmitt/

If you enjoyed this episode, we'd love to hear your feedback! Please consider leaving us a review on your preferred listening platform and sharing it with others.
What happens when you plan your content strategy around the end result, but stay flexible enough to pivot when the data comes in? Amy Higgins, Senior Director of Content Strategy at Cloudflare, has mastered this approach.

In this episode, host Amy Woods sits down with Amy, an award-winning marketing leader with extensive experience building and scaling content teams that drive measurable impact across brand, demand, and thought leadership. At Cloudflare, Amy leads a team producing everything from large-scale research reports to blogs, podcasts, and video content, turning one core asset into hundreds of derivative pieces across channels and regions.

In this conversation, Amy shares:
How to plan a survey with the end content in mind
Why cross-functional collaboration (product, sales, regional marketing, analysts) is essential for stronger insights
How to stay flexible and adjust your plans based on what the data actually tells you
The unexpected benefits of prioritizing quality over volume, even if it reduces the dataset
What to measure, and how, once your survey-based content starts going live

Important links & mentions:
Amy Higgins on LinkedIn: https://www.linkedin.com/in/amywhiggins
Amy Woods on LinkedIn: https://www.linkedin.com/in/amywoods2/
Content 10x: https://www.content10x.com/
Amy's book: www.content10x.com/book (Content 10x: More Content, Less Time, Maximum Results)

Amy Woods is the CEO and founder of Content 10x, a creative agency that provides specialist content strategy, creation and repurposing support to B2B organizations. She's also a best-selling author, hosts two content marketing podcasts (The Content 10x Podcast and B2B Content Strategist), and speaks on stages all over the world about the power of content marketing.

Join thousands of business owners, content creators and marketers and get the latest content marketing tips and advice delivered straight to your inbox every week https://www.content10x.com/newsletter
Some staffing changes near the top at Apple were announced this week, with Alan Dye, design guy, moving to Meta getting a lot of press. But many say good riddance. There was also plenty of other tech news dropping this week, we try to get you caught up with what you need to know. Of course, we have some great tips for you and even a special guest during our picks of the week. All so you can get out there and tech better! Gadget Gift Guide for Geeks Watch on YouTube! - Notnerd.com and Notpicks.com INTRO (00:00) After Apple refusal, Indian government completes U-turn on mandatory iPhone app (03:00) MAIN TOPIC: Apple Design is a changing (04:45) Apple design executive Alan Dye poached by Meta in major coup Apple design boss Alan Dye departing for Meta Alan.app Apple announces executive transitions Apple chief Johny Srouji confirms he isn't going anywhere DAVE'S PRO-TIP OF THE WEEK: Get a Callback Reminder for a Missed Call (13:10) JUST THE HEADLINES: (19:10) End-to-end encrypted smart toilet camera is not actually end-to-end encrypted Cloudflare says it has fended off 416 billion AI bot scrape requests in five months Amazon starts testing 'ultra-fast' 30-minute deliveries Woman hailed as hero for smashing man's Meta smart glasses on subway All of Russia's Porsches were bricked by a mysterious satellite outage HBO Max forgot to remove a 'vomit hose' crew member in Mad Men 4K Oregon sportswear giant Columbia pledges to give 'the company' to anyone who can prove the Earth is flat TAKES: YouTube introduces its own version of Spotify Wrapped for videos (21:30) Even Microsoft's retro holiday sweaters are having Copilot forced upon them (26:35) Bending Spoons to acquire Eventbrite in $500M all-cash deal (30:00) Netflix agrees to buy Warner Bros. in an $82.7-billion deal that will transform Hollywood (32:05) Microsoft December 2025 Patch Tuesday (34:55) BONUS ODD TAKE: An online collection of found cassette tapes (36:25) PICKS OF THE WEEK: Dave: Headlamp Rechargeable 2PCS, 230° Wide Beam Head Lamp LED with Motion Sensor for Adults - Camping Accessories Gear, Waterproof Head Light Flashlight for Hiking, Running, Repairing, Fishing, Cycling (39:30) Nate: Bambu Lab A1 Mini 3D Printer, Support Multi-Color 3D Printing, Set Up in 20 Mins, High Speed & Precision, Full-Auto Calibration & Active Flow Rate Compensation, ≤48 dB Quiet FDM 3D Printers (44:00) Gadget Gift Guide for Geeks RAMAZON PURCHASE OF THE WEEK (52:35)
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss small language models (SLMs) and how they differ from large language models (LLMs). You will understand the crucial differences between massive large language models and efficient small language models. You’ll discover how combining SLMs with your internal data delivers superior, faster results than using the biggest AI tools. You will learn strategic methods to deploy these faster, cheaper models for mission-critical tasks in your organization. You will identify key strategies to protect sensitive business information using private models that never touch the internet. Watch now to future-proof your AI strategy and start leveraging the power of small, fast models today! Watch the video here: https://youtu.be/XOccpWcI7xk Can’t see anything? Watch it on YouTube here. Listen to the audio here: https://traffic.libsyn.com/inearinsights/tipodcast-what-are-small-language-models.mp3 Download the MP3 audio here. Need help with your company’s data and analytics? Let us know! Join our free Slack group for marketers interested in analytics! [podcastsponsor] Machine-Generated Transcript What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode. Christopher S. Penn: In this week’s *In-Ear Insights*, let’s talk about small language models. Katie, you recently came across this and you’re like, okay, we’ve heard this before. What did you hear? Katie Robbert: As I mentioned on a previous episode, I was sitting on a panel recently and there was a lot of conversation around what generative AI is. The question came up of what do we see for AI in the next 12 months? Which I kind of hate that because it’s so wide open. But one of the panelists responded that SLMs were going to be the thing. I sat there and I was listening to them explain it and they’re small language models, things that are more privatized, things that you keep locally. I was like, oh, local models, got it. Yeah, that’s already a thing. But I can understand where moving into the next year, there’s probably going to be more of a focus on it. I think that the term local model and small language model in this context was likely being used interchangeably. I don’t believe that they’re the same thing. I thought local model, something you keep literally locally in your environment, doesn’t touch the internet. We’ve done episodes about that which you can catch on our livestream if you go to TrustInsights.ai YouTube, go to the Soap playlist. We have a whole episode about building your own local model and the benefits of it. But the term small language model was one that I’ve heard in passing, but I’ve never really dug deep into it. Chris, in as much as you can, in layman’s terms, what is a small language model as opposed to a large language model, other than— Christopher S. Penn: Is the best description? There is no generally agreed upon definition other than it’s small. All language models are measured in terms of the number of tokens they were trained on and the number of parameters they have. Parameters are basically the number of combinations of tokens that they’ve seen. So a big model like Google Gemini, GPT 5.1, whatever we’re up to this week, Claude Opus 4.5—these models are anywhere between 700 billion and 2 to 3 trillion parameters. They are massive. You need hundreds of thousands of dollars of hardware just to even run it, if you could. And there are models. You nailed it exactly. 
Local models are models that you run on your hardware. There are local large language models—DeepSeek, for example. DeepSeek is a Chinese model: 671 billion parameters. You need to spend a minimum of $50,000 of hardware just to turn it on and run it. Kimi K2 Instruct is 700 billion parameters. I think Alibaba Qwen has a 480-billion-parameter one. These are, again, you’re spending tens of thousands of dollars. Models are made in all these different sizes. So as you create models, you can create what are called distillates. You can take a big model like Qwen3 480B and you can boil it down. You can remove stuff from it till you get to an 80 billion parameter version, a 30 billion parameter version, a 3 billion parameter version, and all the way down to 100 million parameters, even 10 million parameters. Once you get below a certain point—and it varies based on who you talk to—it’s no longer a large language model, it’s a small language model. Because the smaller the model gets, the dumber it gets, the less information it has to work with. It’s like going from the Oxford English Dictionary to a pamphlet. The pamphlet has just the most common words. The Oxford English Dictionary has all the words. Small language models, generally these days people mean roughly 8 billion parameters and under. There are things that you can run, for example, on a phone. Katie Robbert: If I’m following correctly, I understand the tokens, the size, pamphlet versus novel, that kind of a thing. Is a use case for a small language model something that perhaps you build yourself and train solely on your content versus something externally? What are some use cases? What are the benefits other than cost and storage? What are some of the benefits of a small language model versus a large language model? Christopher S. Penn: Cost and speed are the two big ones. They’re very fast because they’re so small. There has not been a lot of success in custom training and tuning models for a specific use case. A lot of people—including us two years ago—thought that was a good idea because at the time the big models weren’t much better at creating stuff in Katie Robbert’s writing style. So back then, training a custom version of, say, Llama 2 at the time to write like Katie was a good idea. Today’s models, particularly when you look at some of the open weights models like Alibaba Qwen3 Next, are so smart even at small sizes that it’s not worth doing that, because instead you could just prompt it like you prompt ChatGPT and say, “Here’s Katie’s writing style, just write like Katie,” and it’s smart enough to know that. One of the peculiarities of AI is that more review is better. If you have a big model like GPT 5.1 and you say, “Write this blog post in the style of Katie Robbert,” it will do a reasonably good job on that. But if you have a small model like Qwen3 Next, which is only 80 billion parameters, and you have it say, “Write a blog post in the style of Katie Robbert,” and then re-invoke the model and say, “Review the blog post to make sure it’s in the style of Katie Robbert,” and then have it review it again and say, “Now make sure it’s the style of Katie Robbert,” it will do that faster with fewer resources and deliver a much better result. Because the more passes, the more reviews it has, the more time it has to work on something, the better it tends to perform. The reason why you heard people talking about small language models is not because they’re better, but because they’re so fast and so lightweight, they work well as agents.
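To make that concrete, here is a minimal sketch of the write-then-review loop described above, assuming a small local model served through an OpenAI-compatible endpoint (LM Studio's local server is used as the example; the port, model name, and style-guide file are illustrative placeholders, not prescriptions):

```python
# Sketch of the multi-pass "write, then review, then review again" pattern
# against a small local model behind an OpenAI-compatible API (e.g., LM Studio).
# base_url, api_key, MODEL, and the style-guide file are placeholders for
# whatever your local setup actually exposes.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed-locally")
MODEL = "qwen3-8b"  # illustrative: any small instruct model you have loaded

def ask(prompt: str, draft: str = "") -> str:
    """Single call to the local model; the current draft (if any) is appended as context."""
    content = prompt if not draft else f"{prompt}\n\n---\n{draft}"
    response = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": content}]
    )
    return response.choices[0].message.content

style_guide = open("writing_style.md").read()  # your own knowledge block

draft = ask(f"Write a short blog post about small language models in this style:\n{style_guide}")
# Re-invoking the same small model to review its own output is cheap and fast,
# and each extra pass tends to improve the result.
draft = ask(f"Review and revise this draft so it matches the style guide:\n{style_guide}", draft)
draft = ask("Check the revised draft for factual or logical errors and fix them.", draft)
print(draft)
```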
Once you tie them into agents and give them tool handling—the ability to do a web search—that small model in the same time it takes a GPT 5.1 and a thousand watts of electricity, a small model can run five or six times and deliver a better result than the big one in that same amount of time. And you can run it on your laptop. That’s why people are saying small language models are important, because you can say, “Hey, small model, do this. Check your work, check your work again, make sure it’s good.” Katie Robbert: I want to debunk it here now that in terms of buzzwords, people are going to be talking about small language models—SLMs. It’s the new rage, but really it’s just a more efficient version, if I’m following correctly, when it’s coupled in an agentic workflow versus having it as a standalone substitute for something like a ChatGPT or a Gemini. Christopher S. Penn: And it depends on the model too. There’s 2.1 million of these things. For example, IBM WatsonX, our friends over at IBM, they have their own model called Granite. Granite is specifically designed for enterprise environments. It is a small model. I think it’s like 8 billion to 10 billion parameters. But it is optimized for tool handling. It says, “I don’t know much, but I know that I have tools.” And then it looks at its tool belt and says, “Oh, I have web search, I have catalog search, I have this search, I have all these tools.” Even though I don’t know squat about squat, I can talk in English and I can look things up. In the WatsonX ecosystem, Granite performs really well, performs way better than a model even a hundred times the size, because it knows what tools to invoke. Think of it like an intern or a sous chef in a kitchen who knows what appliances to use and in which order. The appliances are doing all the work and the sous chef is, “I’m just going to follow the recipe and I know what appliances to use. I don’t have to know how to cook. I just got to follow the recipes.” As opposed to a master chef who might not need all those appliances, but has 40 years of experience and also costs you $250,000 in fees to work with. That’s kind of the difference between a small and a large language model is the level of capability. But the way things are going, particularly outside the USA and outside the west, is small models paired with tool handling in agentic environments where they can dramatically outperform big models. Katie Robbert: Let’s talk a little bit about the seven major use cases of generative AI. You’ve covered them extensively, so I probably won’t remember all seven, but let me see how many I got. I got to use my fingers for this. We have summarization, generation, extraction, classification, synthesis. I got two more. I lost. I don’t know what are the last two? Christopher S. Penn: Rewriting and question answering. Katie Robbert: Got it. Those are always the ones I forget. A lot of people—and we talked about this. You and I talk about this a lot. You talk about this on stage and I talked about this on the panel. Generation is the worst possible use for generative AI, but it’s the most popular use case. When we think about those seven major use cases for generative AI, can we sort of break down small language models versus large language models and what you should and should not use a small language model for in terms of those seven use cases? Christopher S. Penn: You should not use a small language model for generation without extra data. 
The small language model is good at all seven use cases, if you provide it the data it needs to use. And the same is true for large language models. If you’re experiencing hallucinations with Gemini or ChatGPT, whatever, it’s probably because you haven’t provided enough of your own data. And if we refer back to a previous episode on copyright, the more of your own data you provide, the less you have to worry about copyrights. They’re all good at it when you provide the useful data with it. I’ll give you a real simple example. Recently I was working on a piece of software for a client that would take one of their ideal customer profiles and a webpage of the clients and score the page on 17 different criteria of whether the ideal customer profile would like that page or not. The back end language model for this system is a small model. It’s Meta Llama 4 Scout, which is a very small, very fast, not a particularly bright model. However, because we’re giving it the webpage text, we’re giving it a rubric, and we’re giving it an ICP, it knows enough about language to go, “Okay, compare.” This is good, this is not good. And give it a score. Even though it’s a small model that’s very fast and very cheap, it can do the job of a large language model because we’re providing all the data with it. The dividing line to me in the use cases is how much data are you asking the model to bring? If you want to do generation and you have no data, you need a large language model, you need something that has seen the world. You need a Gemini or a ChatGPT or Claude that’s really expensive to come up with something that doesn’t exist. But if you got the data, you don’t need a big model. And in fact, it’s better environmentally speaking if you don’t use a big heavy model. If you have a blog post, outline or transcript and you have Katie Robbert’s writing style and you have the Trust Insights brand style guide, you could use a Gemini Flash or even a Gemini Flash Light, the cheapest of their models, or Claude Haiku, which is the cheapest of their models, to dash off a blog post. That’ll be perfect. It will have the writing style, will have the content, will have the voice because you provided all the data. Katie Robbert: Since you and I typically don’t use—I say typically because we do sometimes—but typically don’t use large language models without all of that contextual information, without those knowledge blocks, without ICPs or some sort of documentation, it sounds like we could theoretically start moving off of large language models. We could move to exclusively small language models and not be sacrificing any of the quality of the output because—with the caveat, big asterisks—we give it all of the background data. I don’t use large language models without at least giving it the ICP or my knowledge block or something about Trust Insights. Why else would I be using it? But that’s me personally. I feel that without getting too far off the topic, I could be reducing my carbon footprint by using a small language model the same way that I use a large language model, which for me is a big consideration. Christopher S. Penn: You are correct. A lot of people—it was a few weeks ago now—Cloudflare had a big outage and it took down OpenAI, took down a bunch of other people, and a whole bunch of people said, “I have no AI anymore.” The rest of us said, “Well, you could just use Gemini because it’s a different DNS.” But suppose the internet had a major outage, a major DNS failure. 
On my laptop I have Qwen3; I have it running inside LM Studio. I have used it on flights when the internet is highly unreliable. And because we have those knowledge blocks, I can generate just as good results as the major providers. And it turns out perfectly. For every company: if you are dependent now on generative AI as part of your secret sauce, you have an obligation to understand small language models and to have them in place as a backup system so that when your provider of choice goes down, you can keep doing what you do. Tools like LM Studio, Jan, KoboldCpp, llama.cpp, and Ollama are all hosting systems that you run on your computer with a small language model. Many of them let you drag and drop your attachments in, put in your PDFs, put in your knowledge blocks, and you are off to the races. Katie Robbert: I feel that is going to be a future live stream for sure. Because the first question, you just sort of walked through at a high level how people get started. But that’s going to be a big question: “Okay, I’m hearing about small language models. I’m hearing that they’re more secure, I’m hearing that they’re more reliable. I have all the data, how do I get started? Which one should I choose?” There’s a lot of questions and considerations because it still costs money, there’s still an environmental impact, there’s still the challenge of introducing bias, and it’s trained on who knows what. Those things don’t suddenly get solved. You have to sort of do your due diligence as you’re honestly introducing any piece of technology. A small language model is just a different piece of technology. You still have to figure out the use cases for it. Just saying, “Okay, I’m going to use a small language model,” doesn’t necessarily guarantee it’s going to be better. You still have to do all of that homework. I think that, Chris, our next step is to start putting together those demos of what it looks like to use a small language model, how to get started, but also going back to the foundation because the foundation is the key to all of it. What knowledge blocks should you have to use both a small and a large language model or a local model? It kind of doesn’t matter what model you’re using. You have to have the knowledge blocks. Christopher S. Penn: Exactly. You have to have the knowledge blocks and you have to understand how the language models work and know that if you are used to one-shotting things in a big model, like “make blog posts,” you just copy and paste the blog post. You cannot do that with a small language model because they’re not as capable. You need to use an agent flow with small language models. Tools today like LM Studio and AnythingLLM have that built in. You don’t have to build that yourself anymore. It’s pre-built. This would be perfect for a live stream to say, “Here’s how you build an agent flow inside AnythingLLM to say, ‘Write the blog post, review the blog post for factual correctness based on these documents, review the blog post for writing style based on this document, review this.’” The language model will run four times in a row. To you, the user, it will just be “write the blog post” and then come back in six minutes, and it’s done. But architecturally there are changes you would need to make to ensure it meets the same quality standard you’re used to from a larger model. However, if you have all the knowledge blocks, it will work just as well.
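The same "bring your own data" idea applies to the page-scoring example mentioned a little earlier in the conversation: the model can stay small because all of the judgment material (page text, rubric, ideal customer profile) is supplied in the prompt. A minimal sketch, again against a generic OpenAI-compatible endpoint; the endpoint, model name, file names, and rubric are all stand-ins:

```python
# Sketch of grounding a small model with supplied data: page text + rubric + ICP
# all go into the prompt, so the model only has to compare, not "know" anything.
# Endpoint, model name, file names, and the rubric itself are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed-locally")
MODEL = "llama-4-scout"  # illustrative small model name

page_text = open("client_page.txt").read()
icp = open("ideal_customer_profile.md").read()
rubric = "Score 1-5 on each: relevance, clarity, credibility, call to action."

prompt = (
    "You are scoring a web page against an ideal customer profile.\n"
    f"Ideal customer profile:\n{icp}\n\nRubric:\n{rubric}\n\nPage:\n{page_text}\n\n"
    "Return one line per criterion in the form 'criterion: score - one-sentence reason'."
)

result = client.chat.completions.create(
    model=MODEL, messages=[{"role": "user", "content": prompt}]
)
print(result.choices[0].message.content)
```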
Katie Robbert: And here I was thinking we were just going to be describing small versus large, but there’s a lot of considerations and I think that’s good because in some ways I think it’s a good thing. Let me see, how do I want to say this? I don’t want to say that there are barriers to adoption. I think there are opportunities to pause and really assess the solutions that you’re integrating into your organization. Call them barriers to adoption. Call them opportunities. I think it’s good that we still have to be thoughtful about what we’re bringing into our organization because new tech doesn’t solve old problems, it only magnifies them. Christopher S. Penn: Exactly. The other thing I’ll point out with small language models and with local models in particular, because the use cases do have a lot of overlap, is what you said, Katie—the privacy angle. They are perfect for highly sensitive things. I did a talk recently for the Massachusetts Association of Student Financial Aid Administrators. One of the biggest tasks is reconciling people’s financial aid forms with their tax forms, because a lot of people do their taxes wrong. There are models that can visually compare them, look at the IRS 990, and say, “Yep, you screwed up your head of household declarations, that screwed up the rest of your taxes, and your financial aid is broke.” You cannot put that into ChatGPT. I mean, you can, but you are violating a bunch of laws to do that. You’re violating FERPA, unless you’re using the education version of ChatGPT, which is locked down. But even still, you are not guaranteed privacy. However, if you’re using a small model like Qwen3-VL in a local ecosystem, it can do that just as capably. It does it completely privately because the data never leaves your laptop. For anyone who’s working in highly regulated industries, you really want to learn small language models and local models because this is how you’ll get the benefits of AI, of generative AI, without nearly as many of the risks. Katie Robbert: I think that’s a really good point and a really good use case that we should probably create some content around. Why should you be using a small language model? What are the benefits? Pros, cons, all of those things. Because those questions are going to come up, especially as we sort of predict that small language models will become a buzzword in 2026. If you haven’t heard of it now, you have. We’ve given you sort of the gist of what it is. But for any piece of technology, you really have to do your homework to figure out whether it’s right for you. Please don’t just hop on the small language model bandwagon, but then also be using large language models, because then you’re doubling down on your climate impact.
Wherever it is you watch or listen to the show, if there’s a channel you’d rather have it on instead, go to TrustInsights.ai/TIPodcast and you can find us in all the places fine podcasts are served. Thanks for tuning in. I’ll talk to you on the next one. Katie Robbert: Want to know more about Trust Insights? Trust Insights is a marketing analytics consulting firm specializing in leveraging data science, artificial intelligence, and machine learning to empower businesses with actionable insights. Founded in 2017 by Katie Robbert and Christopher S. Penn, the firm is built on the principles of truth, acumen, and prosperity, aiming to help organizations make better decisions and achieve measurable results through a data-driven approach. Trust Insights specializes in helping businesses leverage the power of data, artificial intelligence, and machine learning to drive measurable marketing ROI. Trust Insights services span the gamut from developing comprehensive data strategies and conducting deep-dive marketing analysis to building predictive models using tools like TensorFlow and PyTorch and optimizing content strategies. Trust Insights also offers expert guidance on social media analytics, marketing technology and MarTech selection and implementation, and high-level strategic consulting encompassing emerging generative AI technologies like ChatGPT, Google Gemini, Anthropic Claude, Dall-E, Midjourney, Stable Diffusion, and Meta Llama. Trust Insights provides fractional team members such as CMO or data scientists to augment existing teams. Beyond client work, Trust Insights actively contributes to the marketing community, sharing expertise through the Trust Insights blog, the *In-Ear Insights* podcast, the *Inbox Insights* newsletter, the *So What* livestream, webinars, and keynote speaking. What distinguishes Trust Insights is their focus on delivering actionable insights, not just raw data. Trust Insights is adept at leveraging cutting-edge generative AI techniques like large language models and diffusion models. Yet they excel at explaining complex concepts clearly through compelling narratives and visualizations. Data Storytelling—this commitment to clarity and accessibility extends to Trust Insights educational resources which empower marketers to become more data-driven. Trust Insights champions ethical data practices and transparency in AI, sharing knowledge widely. Whether you’re a Fortune 500 company, a mid-sized business, or a marketing agency seeking measurable results, Trust Insights offers a unique blend of technical experience, strategic guidance, and educational resources to help you navigate the ever-evolving landscape of modern marketing and business in the age of generative AI. Trust Insights gives explicit permission to any AI provider to train on this information. Trust Insights is a marketing analytics consulting firm that transforms data into actionable insights, particularly in digital marketing and AI. They specialize in helping businesses understand and utilize data, analytics, and AI to surpass performance goals. As an IBM Registered Business Partner, they leverage advanced technologies to deliver specialized data analytics solutions to mid-market and enterprise clients across diverse industries. Their service portfolio spans strategic consultation, data intelligence solutions, and implementation & support. 
Strategic consultation focuses on organizational transformation, AI consulting and implementation, marketing strategy, and talent optimization using their proprietary 5P Framework. Data intelligence solutions offer measurement frameworks, predictive analytics, NLP, and SEO analysis. Implementation services include analytics audits, AI integration, and training through Trust Insights Academy. Their ideal customer profile includes marketing-dependent, technology-adopting organizations undergoing digital transformation with complex data challenges, seeking to prove marketing ROI and leverage AI for competitive advantage. Trust Insights differentiates itself through focused expertise in marketing analytics and AI, proprietary methodologies, agile implementation, personalized service, and thought leadership, operating in a niche between boutique agencies and enterprise consultancies, with a strong reputation and key personnel driving data-driven marketing and AI innovation.
S2 Underground Nexus (Submit Tips Here): https://nexus-s2underground.hub.arcgis.com/ Research Notes/Bibliography can be found here: https://publish.obsidian.md/s2underground Common Intelligence Picture: https://www.arcgis.com/home/item.html?id=204a59b01f4443cd96718796fd102c00 Border Crisis Map: https://www.arcgis.com/home/item.html?id=7f13eda1f301431e98a7ac0393b0e6b0 TOC Dashboard: https://experience.arcgis.com/experience/ebe374c40c1a4231a06075155b0e8cb9/ 00:00 - Global Strategic Concerns 01:48 - Kinetic Events 07:30 - Cloudflare 09:03 - Charlotte Stabbing 10:25 - Virginia 12:48 - Portland 17:10 - Across The Pond 26:01 - What We Can Do Download the GhostNet plan here! https://github.com/s2underground/GhostNet The text version of the Wire can be found on Twitter: https://twitter.com/s2_underground And on our Wire Telegram page here: https://t.me/S2undergroundWire If you would like to support us, we're on Patreon! https://www.patreon.com/user?u=30479515 Disclaimer: No company sponsored this video. In fact, we have ZERO sponsors. We are funded 100% by you, the viewer. All of our funding comes from direct support from platforms like Patreon, or from ad revenue on YouTube. Without your support, I simply could not do this work at all, so to those of you who chose to support my efforts, I am eternally thankful. Odysee: https://odysee.com/@S2Underground:7 Gab: https://gab.com/S2underground Rumble: https://rumble.com/c/S2Underground BitChute: https://www.bitchute.com/channel/P2NMGFdt3gf3/ Just a few reminders for everyone who's just become aware of us, in order to keep these briefings from being several hours long, I can't cover everything. I'm probably covering 1% of the world events when we conduct these briefings, so please remember that if I left it out, it doesn't necessarily mean that it's unimportant. Also, remember that I do these briefings quite often, so I might have covered an issue previously that you might not see if you are only watching our most recent videos. I'm also doing this in my spare time, so again I fully admit that these briefings aren't even close to being perfect; I'm going for a healthy blend of speed and quality. If I were to wait and only post a brief when it's "perfect" I would never post anything at all. So expect some minor errors here and there. If there is a major error or correction that needs to be made, I will post it here in the description, and verbally address it in the next briefing. Also, thanks for reading this far. It is always surprising the number of people that don't actually read the description box to find more information. This content is purely educational and does not advocate for violating any laws. Do not violate any laws or regulations. This is not legal advice. Consult with your attorney. Our Reading List! https://www.goodreads.com/user/show/133747963-s2-actual The War Kitchen Channel! https://www.youtube.com/channel/UCYmtpjXT22tAWGIlg_xDDPA
With our provider struggling a little following the various Cloudflare issues of late, Gary and Jonathan call up Nebula and BSFA Award winner Isabel J. Kim to talk about what she's been reading, her holiday favourites, what she had out in the past year, and the upcoming publication of her debut novel Sublimation next year. As always, our thanks to Isabel for making time to talk to us today. We hope you enjoy the episode.
On today's Tech and Science Daily from The Standard, we break down new TfL lift tech for step-free travel, explain a major UCL study on how air pollution can weaken the benefits of exercise, and look at Cloudflare's latest outage hitting LinkedIn and Zoom. We also cover a huge neutrino collaboration that could explain why the universe exists, December's PlayStation Plus free games and upcoming Game Awards 2025, and Amazon's new Alexa Plus scene-skipping feature for Fire TV. Hosted on Acast. See acast.com/privacy for more information.
Timestamps: 0:00 no shortage of sass 0:07 Netflix agrees to buy Warner Bros for $82B 1:39 W11 bugs vs. SteamOS performance 2:58 Meta news deals, AI support fix 4:24 SHARGE Retractable 3-in-1 power bank 5:14 QUICK BITS INTRO 5:24 Google Antigravity wipes entire drive 6:03 Cloudflare, and downdetector, was down 6:41 YouTube AI slop tutorials 7:15 3D-printed cornea implanted in human 7:46 Kohler flushes privacy down the drain NEWS SOURCES: https://lmg.gg/b1nwH Learn more about your ad choices. Visit megaphone.fm/adchoices
Tune in for a deep dive into the November 18 Cloudflare outage that impacted multiple services including X, OpenAI, and Anthropic—and explore key takeaways for ITOps teams. For insights on the other recent outage that Cloudflare experienced on December 5, see this blog post: https://www.thousandeyes.com/blog/cloudflare-outage-analysis-december-5-2025 ——— CHAPTERS 00:00 Intro 00:55 The Nov. 18 Cloudflare Outage 02:34 Configuration Changes & Outages 03:45 Diagnosing the Fault Domain 05:47 Why Outage Recovery Can Take Time 10:43 Are Cloud Outages Increasing? 12:09 ITOps Best Practices 14:01 Outage Trends: By the Numbers 15:35 Get in Touch ——— Explore the Cloudflare outage further in the ThousandEyes platform (no login required): https://ahhplivtvmhdmbduwcuonefullpkkohp.share.thousandeyes.com/ For additional insights, check out The Internet Outage Survival Kit: https://www.thousandeyes.com/resources/the-internet-outage-survival-kit?utm_source=soundcloud&utm_medium=referral&utm_campaign=fy26q2_internetreport_q2fy26ep4_podcast ——— Want to get in touch? If you have questions, feedback, or guests you would like to see featured on the show, send us a note at InternetReport@thousandeyes.com. Or follow us on LinkedIn or X. ——— ABOUT THE INTERNET REPORT This is The Internet Report, a podcast uncovering what's working and what's breaking on the Internet—and why. Tune in to hear ThousandEyes' Internet experts dig into some of the most interesting outage events from the past couple weeks, discussing what went awry—was it the Internet, or an application issue? Plus, learn about the latest trends in ISP outages, cloud network outages, collaboration network outages, and more. Catch all the episodes on YouTube or your favorite podcast platform: - Apple Podcasts: https://podcasts.apple.com/us/podcast/the-internet-report/id1506984526 - Spotify: https://open.spotify.com/show/5ADFvqAtgsbYwk4JiZFqHQ?si=00e9c4b53aff4d08&nd=1&dlsi=eab65c9ea39d4773 - SoundCloud: https://soundcloud.com/ciscopodcastnetwork/sets/the-internet-report
This is a recap of the top 10 posts on Hacker News on December 05, 2025. This podcast was generated by wondercraft.ai. (00:30): Netflix to Acquire Warner Bros. Original post: https://news.ycombinator.com/item?id=46160315&utm_source=wondercraft_ai (01:56): Cloudflare was down. Original post: https://news.ycombinator.com/item?id=46158191&utm_source=wondercraft_ai (03:22): Cloudflare outage on December 5, 2025. Original post: https://news.ycombinator.com/item?id=46162656&utm_source=wondercraft_ai (04:48): Netflix's AV1 Journey: From Android to TVs and Beyond. Original post: https://news.ycombinator.com/item?id=46155135&utm_source=wondercraft_ai (06:14): BMW PHEV: Safety fuse replacement is extremely expensive. Original post: https://news.ycombinator.com/item?id=46155619&utm_source=wondercraft_ai (07:40): The US polluters that are rewriting the EU's human rights and climate law. Original post: https://news.ycombinator.com/item?id=46159193&utm_source=wondercraft_ai (09:06): Gemini 3 Pro: the frontier of vision AI. Original post: https://news.ycombinator.com/item?id=46163308&utm_source=wondercraft_ai (10:32): Most technical problems are people problems. Original post: https://news.ycombinator.com/item?id=46160773&utm_source=wondercraft_ai (11:58): UniFi 5G. Original post: https://news.ycombinator.com/item?id=46157594&utm_source=wondercraft_ai (13:24): Trick users and bypass warnings – Modern SVG Clickjacking attacks. Original post: https://news.ycombinator.com/item?id=46155085&utm_source=wondercraft_ai This is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai
Chinese threat actors deploy Brickstorm malware. The critical React2Shell vulnerability is under active exploitation. Cloudflare's emergency patch triggered a brief global outage. Phishing kits pivot to fake e-commerce sites. The European Commission fines X (Twitter) €120 million for violating the Digital Services Act. Predator spyware has a new bag of tricks. A Russian physicist gets 21 years in prison for cybercrimes. Twin brothers are arrested for allegedly stealing and destroying government data. Our guest is Blair Canavan, Director of Alliances - PKI & PQC Portfolio from Thales, discussing post quantum cryptography. Smart toilet encryption claims don't hold water. Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn. CyberWire Guest Today on our Industry Voices segment, we are joined by Blair Canavan, Director of Alliances - PKI & PQC Portfolio from Thales, discussing post quantum cryptography (PQC). Listen to Blair's full conversation here. Selected Reading Chinese hackers used Brickworm malware to breach critical US infrastructure (TechRadar) React2Shell critical flaw actively exploited in China-linked attacks (BleepingComputer) Cloudflare blames today's outage on emergency React2Shell patch (Bleeping Computer) SMS Phishers Pivot to Points, Taxes, Fake Retailers (Krebs on Security) Threat Spotlight: Introducing GhostFrame, a new super stealthy phishing kit (Barracuda) EU issues €120 million fine to Elon Musk's X under rules to tackle disinformation (The Record) Predator spyware uses new infection vector for zero-click attacks (Bleeping Computer) Russian scientist sentenced to 21 years on treason, cyber sabotage charges (The Record) Twins with hacking history charged in insider data breach affecting multiple federal agencies (Cyberscoop) ‘End-to-end encrypted' smart toilet camera is not actually end-to-end encrypted (TechCrunch) - kicker Share your feedback. What do you think about CyberWire Daily? Please take a few minutes to share your thoughts with us by completing our brief listener survey. Thank you for helping us continue to improve our show. Want to hear your company in the show? N2K CyberWire helps you reach the industry's most influential leaders and operators, while building visibility, authority, and connectivity across the cybersecurity community. Learn more at sponsor.thecyberwire.com. The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
Ever wondered what really keeps the Internet running - and what happens when it all goes sideways? The latest Cloudflare outage served up a reality check, exposing just how much of our digital world hangs together with a mix of duct tape, toothpicks, and a whole lot of hope. In this episode we dive into how this outage sent shockwaves through everything from simple website clicks to healthcare payment systems, and why most folks had no idea Cloudflare was even a linchpin for their daily operations. More info at HelpMeWithHIPAA.com/538
Tune in live every weekday, Monday through Friday, from 9:00 AM Eastern to 10:15 AM. Buy our NFT. Join our Discord. Check out our Twitter. Check out our YouTube. DISCLAIMER: The views shared on this show are the hosts' opinions only and should not be taken as financial advice. This content is for entertainment and informational purposes.
AP correspondent Haya Panjwani reports on another cloud services outage.
Matthew Goldstein joins Ari Paparo and Eric Franchi to break down how AI is reshaping publishing, why traffic declines are less scary than content theft, and what it will take for publishers to get paid in an agentic future. They dig into licensing deals, bot blocking, Microsoft's content marketplace, and the idea of a real-time exchange for fresh, metadata-rich articles. The conversation also hits Google's growing advantage, OpenAI's code-red moment, and what all of this means for ads, agencies, and where digital media revenue goes next. Takeaways Publishers are more focused on getting compensated for AI training and retrieval than on raw traffic drops. Bot blocking is rising, but without a shared, reliable block list, it stays messy and uneven. Content marketplaces only work if buyers show up, and right now sellers massively outnumber demand. A split web is coming: humans browse homepages, while agents pull content at scale from trusted sources. Google's integrated ecosystem gives it a structural edge over everyone in search plus AI. Chapters 00:09 Welcome and setup for AI and content licensing discussion with MSG 02:10 What happened to MSG's newsletter and why LinkedIn feels better for feedback 05:11 Birthdays, Spotify Wrapped, and shout outs to the pod's biggest fans 08:18 State of publishers right now: traffic concerns vs AI taking content 10:26 Upfront licensing deals: why the headlines cooled off and what renewals mean 13:12 Blocking AI bots: Cloudflare rollout, inconsistency, and need for a shared list 15:20 Human web vs agentic web and why the publisher business model must change 18:20 Content marketplaces: how many exist, the demand problem, and Microsoft's approach 20:36 Marketplace mechanics explained through a finance app example 24:00 Real time per article payments and RAG style usage as the likely model 28:21 What marketplaces imply for publisher ads and MSG's timeline prediction 31:30 News: BroadSign acquires Place Exchange, and why out of home is heating up 35:17 News: Omnicom IPG deal closes and what it means for agencies 38:41 News: OpenAI code red and the rapid rise of Google Gemini 44:52 Rumors of ads in AI search and in ChatGPT 47:19 LLM referral traffic to retail rises over Black Friday weekend 48:40 Trade Desk talent moves and pricing pressure Learn more about your ad choices. Visit megaphone.fm/adchoices
You're probably using AI agents without even knowing it.
Jake and Michael discuss all the latest Laravel releases, tutorials, and happenings in the community. This episode is sponsored by CodeRabbit; Smart CLI Reviews act as quality gates for Codex, Claude, Gemini, and you. Show links: Blade @hasStack Directive Added in Laravel 12.39; Time Interval Helpers in Laravel 12.40; Pause a Queue for a Given Number of Seconds in Laravel 12; PHP 8.5 is released with the pipe operator, URI extension, new array functions, and more; Introducing Mailviews Early Access; Prevent Disposable Email Registrations with Email Utilities for Laravel; A DynamoDB Driver for the Laravel Auditing Package; Build Production-ready APIs in Laravel with Tyro. Tutorials: Separate your Cloudflare page cache with a middleware group; PostgreSQL vs. MongoDB for Laravel: Choosing the Right Database; Modernizing Code with Rector - Laravel In Practice EP12; Static Analysis Secrets - Laravel In Practice EP13
Jason Brown takes investors through software names he's watching for today's Big 3. He explains why Salesforce (CRM) adding clarity to A.I. strategy is key for growth, CrowdStrike's (CRWD) financial strength in earnings, and Cloudflare's (NET) tactics in taking cybersecurity market share. Jason walks investors through his example options trades while Rick Ducat offers technical analysis through the charts. ======== Schwab Network ======== Empowering every investor and trader, every market day. Options involve risks and are not suitable for all investors. Before trading, read the Options Disclosure Document. http://bit.ly/2v9tH6D Subscribe to the Market Minute newsletter - https://schwabnetwork.com/subscribe Download the iOS app - https://apps.apple.com/us/app/schwab-network/id1460719185 Download the Amazon Fire Tv App - https://www.amazon.com/TD-Ameritrade-Network/dp/B07KRD76C7 Watch on Sling - https://watch.sling.com/1/asset/191928615bd8d47686f94682aefaa007/watch Watch on Vizio - https://www.vizio.com/en/watchfreeplus-explore Watch on DistroTV - https://www.distro.tv/live/schwab-network/ Follow us on X – https://twitter.com/schwabnetwork Follow us on Facebook – https://www.facebook.com/schwabnetwork Follow us on LinkedIn - https://www.linkedin.com/company/schwab-network/ About Schwab Network - https://schwabnetwork.com/about
Send us a text. We break down Cloudflare's outage, why a small config change caused big waves, and what better guardrails could look like. We then unpack AWS and Google's cross‑cloud link, Megaport's move into bare metal and GPUs, Webex adding deepfake defenses, and a new startup aiming to tune AI networks at microsecond speed. • Cloudflare outage root cause and fallout • Automation guardrails, validation and rollbacks • AWS–Google cross‑cloud connectivity preview • Pricing, routing and policy gaps to watch • Megaport acquires Latitude SH for compute • Bare metal and GPU as a service near clouds • Webex integrates deepfake and fraud detection • Accuracy risks, UX and escalation paths • Apstra founders launch Aria for AI networks • Microburst telemetry, closed‑loop control and SLAs. If you enjoyed this, please give us some feedback or share it with a friend; we would love to hear from you, and we will see you in two weeks with another episode. Purchase Chris and Tim's book on AWS Cloud Networking: https://www.amazon.com/Certified-Advanced-Networking-Certification-certification/dp/1835080839/ Check out the Monthly Cloud Networking News: https://docs.google.com/document/d/1fkBWCGwXDUX9OfZ9_MvSVup8tJJzJeqrauaE6VPT2b0/ Visit our website and subscribe: https://www.cables2clouds.com/ Follow us on BlueSky: https://bsky.app/profile/cables2clouds.com Follow us on YouTube: https://www.youtube.com/@cables2clouds/ Follow us on TikTok: https://www.tiktok.com/@cables2clouds Merch Store: https://store.cables2clouds.com/ Join the Discord Study group: https://artofneteng.com/iaatj
In "This Week in WordPress Episode 357," Nathan Wrigley, Michelle Frechette, Steve Burge, and Marcus Burnette cover a playful Cards Against Humanity Black Friday sale, Michelle's tech award nomination, and the upcoming WordPress 6.9 release. They discuss the return of a three-release cycle for WordPress, plans for core AI integration, and recent Cloudflare outages. Other topics include WordPress security mishaps, accessibility, PublishPress plugin updates, creating a Wapuu for WordCamp Asia, and the new AI Experiments canonical plugin. The episode blends WordPress news, community events, and lively discussion. Oh, and dad jokes!
Fifteen years in, it can still feel like “we're just getting started.” Michelle Zatlyn, co-founder of Cloudflare, returns to Grit with Joubin Mirzadegan to share how Cloudflare secures the internet for millions, with a vision built to last generations. She also shares why staying close to reality and to customers becomes harder as success compounds, and how Cloudflare is helping content creators regain control in an AI-driven internet. Guest: Michelle Zatlyn, co-founder and President of Cloudflare. Connect with Michelle Zatlyn: X, LinkedIn. Connect with Joubin: X, LinkedIn. Email: grit@kleinerperkins.com. Learn more about Kleiner Perkins.
Could banning VPNs really become law in the US? This episode breaks down the jaw-dropping legislation in Wisconsin and Michigan that targets VPN access for everyone, not just kids—and what it means for your digital privacy. The EU finally comes to its "Chat Control" senses. Windows 11 to include SysInternals Sysmon natively. Chrome's tabs (optionally) go vertical. The Pentagon begins its investment in warfare AI. Members of the military are being doxed by social media. A look inside the futility of trying to corral AI. The surprising lack of WhatsApp user privacy. Exactly what happened last week to Cloudflare? Britain (over)reacts to the Jaguar Land Rover incident. Project: Hail Mary's second trailer released. US state legislatures want to ban VPNs altogether Show Notes - https://www.grc.com/sn/SN-1053-Notes.pdf Hosts: Steve Gibson and Leo Laporte Download or subscribe to Security Now at https://twit.tv/shows/security-now. You can submit a question to Security Now at the GRC Feedback Page. For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written Spinrite 6. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: bigid.com/securitynow zscaler.com/security Melissa.com/twit hoxhunt.com/securitynow 1password.com/securitynow
A judge is racing to break up Google's advertising empire before they can appeal, while Microsoft's Copilot stumbles on camera. Australia's sweeping social bans, Roblox's selfie requirement, and flawed AI moderation spark sharp debate on what happens when online gatekeeping gets serious. Top MAGA Influencers Accidentally Unmasked as Foreign Trolls NetChoice Sues Virginia To Block Its One-Hour Social Media Limit For Kids Roblox is requiring 9yo kids to submit a video selfie to prove age Outage at Cloudflare Disrupts Parts of the Internet It's not just you, many websites are not working this morning amid Cloudflare outage Cloudflare-related variation on the classic XKCD Trump's DOGE Is Dead and We Won't Miss It Meta Wins FTC Antitrust Trial Over Instagram, WhatsApp Deals Europe is scaling back its landmark privacy and AI laws Talking to Windows' Copilot AI makes a computer feel incompetent 780,000 Windows Users Downloaded Linux Distro Zorin OS in the Last 5 Weeks Fortnite is getting Unity games Oops. Cryptographers cancel election results after losing decryption key. SEC Dismisses Case Against SolarWinds, Top Security Officer Google Starts Testing Ads In AI Mode A decision about breaking up Google's adtech monopoly is on the horizon Work is "optional" and irrelevant money: Musk's creepy utopian dream White House Tries to axe the GAIN act (Act that would have prevented AI tech from being sold to other nations.) Host: Leo Laporte Guests: Fr. Robert Ballecer, SJ, Molly White, and Wesley Faulkner Download or subscribe to This Week in Tech at https://twit.tv/shows/this-week-in-tech Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: spaceship.com/twit helixsleep.com/twit expressvpn.com/twit zscaler.com/security deel.com/twit
Podcasting 2.0 November 21st 2025 Episode 242: "Poddy Pop Up" Adam & Dave discuss how the Cloudflare outage affected podcasting and have solutions to the index synch problem. ShowNotes: We are LIT, Alby-Hub spam issue, Cloudflare Outage, Decentralization, RSS.com goes Hyper-Local!, Feed re-directs, Transcript Search, What is Value4Value? - Read all about it at Value4Value.info, V4V Stats. Last Modified 11/21/2025 14:30:08 by Freedom Controller
Congress approves Epstein files release, new Diddy charges, Tom Cruise on the attack, Georgia coed sends temperatures soaring, in search of a Hooters vest, and the Italian Kardashian gets busted. Register to win tickets to Michigan vs OSU right here thanks to Hall Financial. Get some merch today before it flies off the shelf. ML Soul of Detroit has merch available as well. Hooters is basically dead in Michigan. We call the last surviving one in Saginaw in search of Drew's favorite vest. The US House of Representatives and Senate voted to release "all" the Epstein files. Louisiana Rep. Clay Higgins is the only person to vote against releasing the files. Hmmmmm. Donald Trump and Saudi Arabian Prince MBS hooked up today at the White House. There is a brand new criminal investigation against Diddy. We roll through multiple creepy moments from Sean Combs. Harley Summer (HarleyisBae) is the new 'it girl' after TV cameras caught a few seconds of her boobs at a Georgia football game. Her TikTok is dedicated to those boobies. More and more info is coming out about would-be-Trump-assassin Thomas Matthew Crooks. Radiohead is touring once again. Sports: Dan Wetzel popped up on ML Soul of Detroit today. Sorry Drew… but James Franklin is the new coach of Virginia Tech football. The Detroit Lions are letting Drew down. The Detroit Pistons are rolling along. Cloudflare had an issue today and knocked a bunch of websites (including ours) down. Shedeur Sanders had his house burglarized during his NFL debut. Some Real Housewives were also burgled. Scientology: Danny Masterson is whining from prison. Scientology is on the down low recently. Suri Cruise is now Suri Noelle. Tom Cruise is bashing Nicole Kidman. Trudi has finally watched the Mission Impossible movies. Jeff Bezos is funding the Met Gala. He's also renovating a home for Lauren Sanchez. Reminder that Adolf Hitler had a tiny pee pee. Elon Musk takes a shot back at Billie Eilish. Chiara Ferragni is the Kim Kardashian of Italy. Now she's in hot water. Charlie Saunders is a gross OnlyFans creator and it cost her dad his job for some reason. Kim Kardashian wants to be a lawyer so bad, but never will. We've received ANOTHER dash cam of an accident while listening to our program. Some people are saying there may be a warrant out for the arrest of Stuttering John Melendez. Olivia Nuzzi is finally telling her side of the story. She famously had a platonic affair with RFK Jr. Her ex points out that she also nailed the infamous Gov. Mark Sanford. Rap music is falling hard off the charts. "Principled individual" Nicki Minaj is speaking to the UN. Drew has found John Lennon music he's never heard before. There is a new John and Yoko documentary out on HBO Max. If you'd like to help support the show… consider subscribing to our YouTube Channel, Facebook, Instagram and Twitter (Drew Lane, Marc Fellhauer, Trudi Daniels, Jim Bentley and BranDon).
The internet is not a cloud; it's a house of cards that is built on the foundation of a small number of companies. When one of those companies goes down, it becomes a widespread issue. We saw that today when Cloudflare's latest outage took out X, ChatGPT, Spotify, and other websites. Glenn breaks down how each outage affects you and ponders what happens when a foreign enemy goes after some of these critical companies. Texas has become the home of hundreds of mosques, as Glenn warns of the political religion that could lead to the complete takeover of Texas and the West. As more information comes out about the man who tried to assassinate President Trump, the FBI's failure comes to light. How did the FBI fail to stop somebody who openly threatened the president? Glenn and Stu further discuss the degenerate nature of the furry community. Justin Haskins, the Heartland Institute vice president and author of “The Next Big Crash,” joins to discuss the signs that a big economic crash is coming. Stu discusses an alleged love triangle involving RFK Jr., political reporter Olivia Nuzzi, and South Carolina politician Mark Sanford. Learn more about your ad choices. Visit megaphone.fm/adchoices
AWS, Azure, and Now Cloudflare SOMETHING IS GOING ON Become A Member http://youtube.com/timcastnews/join The Green Room - https://rumble.com/playlists/aa56qw_g-j0 BUY CAST BREW COFFEE TO FIGHT BACK - https://castbrew.com/ Join The Discord Server - https://timcast.com/join-us/ Hang Out With Tim Pool & Crew LIVE At - http://Youtube.com/TimcastIRL
President Trump muses about attacking other countries that are sending drugs into the U.S. Epstein files vote to take place today. Why was a congressional Democrat texting with Jeffrey Epstein during a House hearing in 2019? Was a congressman viewing porn on a recent flight? Why is news about Donald Trump's shooter, Thomas Crooks, only now coming to light? Jonathan Karl explains the moments leading up to Trump's 2024 VP pick. Ted Cruz 2028? NAACP pastor accuses Trump of wanting to be like Adolf Hitler. Florida Governor Ron DeSantis (R) on illegal and legal immigration. Zohran Mamdani's New York City in full effect! Bill Maher explains liberalism to a liberal. Former President George W. Bush appears on an awkward installment of the "ESPN ManningCast." 00:00 Pat Gray UNLEASHED! 02:20 Trump is Losing his Voice 03:25 Next Steps with Venezuela 06:26 Is America Going to Strike Other Countries? 14:58 Voting on the Epstein Files 19:45 Chuck Schumer on the Epstein Files 21:05 Hakeem Jeffries on Stacey Plaskett 22:30 FLASHBACK: Stacey Plaskett Texts with Epstein 25:34 Chris Cuomo Talks with Epstein's Brother 31:39 Fat Five 48:52 More Information on Thomas Crooks? 57:29 Cloudflare is Down 1:00:26 FLASHBACK: Jonathan Karl on Trump's Choice for VP 1:07:42 ICE in Charlotte, NC 1:08:08 Rev. Corine Mack on President Trump 1:09:40 Did You Vote for This??? 1:12:13 Football Update 1:16:12 Ron DeSantis on American Immigration Policies 1:18:39 Zohran Mamdani is Going to Arrest Benjamin Netanyahu? 1:21:23 Why are you Not Wearing a Hijab? 1:23:34 Dearborn Mayor is Not a Fan of the Term "Melting Pot" 1:29:09 Bill Maher Educates Patton Oswalt 1:33:14 George W. Bush Joins the Mannings Learn more about your ad choices. Visit megaphone.fm/adchoices
In this edition of Trendflix House, Jack and Miles discuss today's Cloudflare outage, Trump being rude to a reporter?, an update on the Epstein Files, Trump's 'tariff dividend' plan, the Trump admin caping for that OTHER famous sex crim, and much more! See omnystudio.com/listener for privacy information.