Podcasts about github copilot

  • 557 PODCASTS
  • 1,156 EPISODES
  • 56m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • Feb 22, 2026 LATEST



Best podcasts about github copilot

Show all podcasts related to github copilot

Latest podcast episodes about github copilot

Machine Learning Guide
MLA 028 AI Agents

Machine Learning Guide

Play Episode Listen Later Feb 22, 2026 37:46


AI agents differ from chatbots by pursuing autonomous goals through the ReACT loop rather than responding to turn-based prompts. While coding agents are currently the most reliable due to verifiable feedback loops, the market is expanding into desktop and browser automation via tools like Claude Cowork and OpenClaw.

Links: Notes and resources at ocdevel.com/mlg/mla-28. Try a walking desk - stay healthy & sharp while you learn & code. Generate a podcast - use my voice to listen to any AI generated content you want.

Fundamental Definitions
  • Agent vs. Chatbot: Chatbots are turn-based and human-driven. Agents receive objectives and dynamically direct their own processes.
  • The ReACT Loop: Every modern agent uses the cycle Thought -> Action -> Observation. Interleaving reasoning and tool use lets agents update plans and handle exceptions.
  • Performance: Models using agentic loops with self-correction outperform stronger zero-shot models. GPT-3.5 with an agent loop scored 95.1% on HumanEval, while zero-shot GPT-4 scored 67.0%.

The Agentic Spectrum
  • Chat: No tools or autonomy.
  • Chat + Tools: Human-driven web search or code execution.
  • Workflows: LLMs used in predefined code paths. The human designs the flow; the AI adds intelligence at specific nodes.
  • Agents: LLMs dynamically choose their own path and tools based on observations.

Tool Categories and Market Players
  • Developer Frameworks: Use LangGraph for complex, stateful graphs or CrewAI for role-based multi-agent delegation. OpenAI Agents SDK provides minimalist primitives (Handoffs, Sessions), while the Claude Agent SDK focuses on local computer interaction.
  • Workflow Automation: n8n and Zapier provide low-code interfaces. These are stable for repeatable business tasks but limited by fixed paths and a lack of persistent memory between runs.
  • Coding Agents: Claude Code, Cursor, and GitHub Copilot are the most advanced agents. They succeed because code provides an unambiguous feedback loop (pass/fail) for the ReACT cycle.
  • Desktop and Browser Agents: Claude Cowork (released Jan 2026) operates in isolated VMs to produce documents. ChatGPT Atlas is a Chromium-based browser with integrated agent capabilities for web tasks.
  • Autonomous Agents: OpenClaw is an open-source, local system with broad permissions across messaging, file systems, and hardware. While powerful, it carries high security risks, including 512 identified vulnerabilities and potential data exfiltration.

Infrastructure and Standards
  • MCP (Model Context Protocol): A universal standard for connecting agents to tools. It has 10,000+ servers and is used by Anthropic, OpenAI, and Google.
  • Future Outlook: By 2028, multi-agent coordination will be the default architecture. Gartner predicts 38% of organizations will utilize AI agents as formal team members, and the developer role will shift primarily to objective specification and output evaluation.
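The Thought -> Action -> Observation cycle described in the episode can be sketched as a plain loop. This is a minimal illustrative sketch, not any vendor's SDK: the `scripted_llm` stub and the `run_tests` tool are hypothetical stand-ins so the control flow is runnable without a real model.

```python
# Minimal ReACT-style agent loop (illustrative sketch, no real LLM attached).
# scripted_llm is a hypothetical stand-in for a model call.

def scripted_llm(history):
    """Stub model: emits a thought and an action each turn."""
    if not any(step[0] == "observation" for step in history):
        return ("thought", "I should run the tests first"), ("run_tests", None)
    return ("thought", "Tests pass, goal reached"), ("finish", "done")

TOOLS = {
    "run_tests": lambda _: "3 passed, 0 failed",  # placeholder tool
}

def react_loop(max_steps=5):
    history = []
    for _ in range(max_steps):
        thought, (action, arg) = scripted_llm(history)
        history.append(thought)                 # Thought
        if action == "finish":                  # agent decides the goal is met
            history.append(("finish", arg))
            break
        observation = TOOLS[action](arg)        # Action -> Observation
        history.append(("observation", observation))
    return history

for step in react_loop():
    print(step)
```

The interleaving is the point: each tool result lands back in `history`, so the next "thought" can react to it, which is also why coding agents work so well (the observation is an unambiguous pass/fail).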

AIA Podcast
Molt-mania continues, new GPT-5.3 Codex, Opus 4.6 and GLM 5, AI Safety Report 2026 / ПНВ #403

AIA Podcast

Play Episode Listen Later Feb 16, 2026 140:25


Today we discuss the rise of Moltbook, a social network for bots where AIs complain about their owners and found their own religions; the massive expansion of AI infrastructure into space announced by Elon Musk; Codex 5.3 and Opus 4.6, GLM 5, and Qwen Coder Next; the sale of the AI.com domain for $70 million; Waymo's "smart" simulators; and a report on the future of AI in 2026.

AWS for Software Companies Podcast
Ep193: The Conductor Behind Your Data Orchestra: Astronomer's Approach to AI Pipeline Management

AWS for Software Companies Podcast

Play Episode Listen Later Feb 10, 2026 17:01


Astronomer's Steven Hillion reveals how OpenAI, Anthropic, Uber, and Lyft use Apache Airflow to orchestrate AI and machine learning pipelines at scale on AWS.

Topics include:
  • Steven Hillion leads data and AI at Astronomer
  • Apache Airflow surpassed Spark and Kafka in community metrics
  • Astronomer coordinates data flow like a conductor orchestrating instruments
  • Organizations with data engineering teams use Airflow at scale
  • Customers already used Airflow for ML before official promotion
  • Uber and Lyft orchestrate pricing models using Airflow
  • Astronomer runs on AWS with close integration partnerships
  • OpenAI, Anthropic, and GitHub Copilot use Airflow for operations
  • Astronomer's internal data team uses Airflow, creating feedback loops
  • Evolved from constrained AI reports to agentic workflows
  • The platform monitors generative AI output quality at user interactions
  • Metadata and context are increasingly critical for AI applications
  • Learn more at Astronomer's Data FlowCast podcast

Participants: Steven Hillion – SVP, Data and AI, Astronomer

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon.com/isv/
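Conceptually, what Airflow gives these teams is running pipeline tasks in dependency order with bounded retries. A toy pure-Python sketch of that idea (deliberately not using the Airflow API; the task names and pipeline shape are hypothetical):

```python
# Toy illustration of DAG-style orchestration, the pattern Airflow generalizes:
# run tasks in topological (dependency) order, retrying failures a bounded
# number of times. Task names and outputs are hypothetical.

from graphlib import TopologicalSorter

def extract():   return "raw rows"
def train():     return "model v1"
def evaluate():  return "auc=0.91"

TASKS = {"extract": extract, "train": train, "evaluate": evaluate}
DEPS = {"train": {"extract"}, "evaluate": {"train"}}  # evaluate <- train <- extract

def run_pipeline(retries=2):
    results = {}
    for name in TopologicalSorter(DEPS).static_order():
        for attempt in range(retries + 1):
            try:
                results[name] = TASKS[name]()   # run the task
                break
            except Exception:
                if attempt == retries:          # out of retries: fail the run
                    raise
    return results

print(run_pipeline())
```

A real Airflow DAG adds scheduling, distributed execution, and the metadata/observability layer discussed in the episode on top of this core ordering-and-retry loop.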

Patoarchitekci
Our "Vibe working" 2026

Patoarchitekci

Play Episode Listen Later Jan 30, 2026 26:10


"Sometimes I feel like an overseer on a cotton plantation, I swear." Szymon opens this episode about vibe working in 2026 with the best metaphor yet for managing AI agents, because since the last episode on vibe coding "almost everything has changed". Claude Code has replaced GitHub Copilot, and MCP works minimally but effectively.

Coder Radio
640: The Modern .NET Show's Jamie Taylor

Coder Radio

Play Episode Listen Later Jan 29, 2026 43:16


Jamie's Links:
  • https://github.com/github/spec-kit
  • https://owasp.org/
  • https://bsky.app/profile/gaprogman.com
  • https://dotnetcore.show/
  • https://gaprogman.github.io/OwaspHeaders.Core/
  • Mike on LinkedIn
  • Coder Radio on Discord
  • Mike's Oryx Review
  • Alice
  • Alice Jumpstart Offer

Dev Interrupted
Scaffolding is coping not scaling, and other lessons from Codex | OpenAI's Thibault Sottiaux

Dev Interrupted

Play Episode Listen Later Jan 27, 2026 40:24


If you rely on complex scaffolding to build AI agents, you aren't scaling; you're coping. Thibault Sottiaux from OpenAI's Codex team joins us to explain why they are ruthlessly removing the harness to solve for true agentic autonomy. We discuss the bitter lesson of vertical integration, why scalable primitives beat clever tricks, and how the rise of the super bus factor is reshaping engineering careers.

LinearB: Measure the impact of GitHub Copilot and Cursor

Follow the show: Subscribe to our Substack | Follow us on LinkedIn | Subscribe to our YouTube Channel | Leave us a Review
Follow the hosts: Follow Andrew | Follow Ben | Follow Dan

Follow today's guest:
  • OpenAI Codex: Learn more about the models powering tools like GitHub Copilot.
  • Codex Open Source Repo: The lightweight coding agent that runs in your terminal (check out the Rust migration mentioned in the episode).
  • Agent Skills Open Standard: The open standard and catalog for giving agents new capabilities.
  • The Bitter Lesson: Richard Sutton's essay on why compute-centric methods win in AI.
  • Follow Tibo on X @thsottiaux | GitHub

OFFERS
  • Start Free Trial: Get started with LinearB's AI productivity platform for free.
  • Book a Demo: Learn how you can ship faster, improve DevEx, and lead with confidence in the AI era.

LEARN ABOUT LINEARB
  • AI Code Reviews: Automate reviews to catch bugs, security risks, and performance issues before they hit production.
  • AI & Productivity Insights: Go beyond DORA with AI-powered recommendations and dashboards to measure and improve performance.
  • AI-Powered Workflow Automations: Use AI-generated PR descriptions, smart routing, and other automations to reduce developer toil.
  • MCP Server: Interact with your engineering data using natural language to build custom reports and get answers on the fly.

Merge Conflict
499: Going Full Ralph, CLI, & GitHub Copilot SDK?!?!

Merge Conflict

Play Episode Listen Later Jan 26, 2026 48:41


In episode 499 James and Frank dive into the messy, exciting world of coding agents — from burning through Copilot credits and avoiding merge conflicts to practical workflows for letting agents run tasks while you sleep. They share real tips: break big features into bite-sized tasks, have agents ask clarifying questions, and use Copilot CLI or the new SDK to resolve conflicts, auto-fix lint/build failures, and automate mundane repo work. The conversation then maps the evolution from simple completions to autonomous loops like Ralph — a structured, repeatable process that generates subtasks, runs until acceptance tests pass, and updates your workflow. If you're curious how agents, MCPs and SDKs can elevate your dev flow or spark new automations, this episode gives pragmatic examples, trade-offs, and inspiration to start experimenting today.

Follow Us:
  • Frank: Twitter, Blog, GitHub
  • James: Twitter, Blog, GitHub
  • Merge Conflict: Twitter, Facebook, Website, Chat on Discord

Music: Amethyst Seer - Citrine by Adventureface

⭐⭐ Review Us ⭐⭐

Machine transcription available on http://mergeconflict.fm
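The Ralph-style loop the episode describes (keep invoking an agent until acceptance tests pass) can be sketched as a small driver. Everything here is a hypothetical stand-in: `run_agent` would invoke a real coding-agent CLI, and `acceptance_tests_pass` would shell out to a test runner such as pytest.

```python
# Sketch of a Ralph-style loop: repeatedly hand the agent the task until
# the acceptance tests pass or an iteration budget is exhausted.
# run_agent() and acceptance_tests_pass() are hypothetical stand-ins;
# a real setup would call an agent CLI and a test runner via subprocess.

def run_agent(task, state):
    """Pretend agent: 'fixes' one failing check per invocation."""
    state["fixed"] += 1
    return f"patched step {state['fixed']} for: {task}"

def acceptance_tests_pass(state):
    # Stand-in for e.g. running `pytest` in the target repo.
    return state["fixed"] >= 3

def ralph_loop(task, max_iters=10):
    state = {"fixed": 0}
    for i in range(max_iters):
        if acceptance_tests_pass(state):
            return i  # number of agent invocations that were needed
        run_agent(task, state)
    raise RuntimeError("budget exhausted before tests passed")

print(ralph_loop("migrate repo to new logging API"))
```

The key design choice is that the tests, not the agent, decide when the loop stops, which is what makes the process repeatable and safe to leave running unattended.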

Dev Interrupted
Angie Jones on Ralphing 25k repos at Block, GPT-5.2 Codex, and CES weirdness

Dev Interrupted

Play Episode Listen Later Jan 23, 2026 28:01


With the Ralph loop going mainstream, how are engineering organizations utilizing it at scale? Andrew and Ben sit down with Angie Jones, VP of Engineering AI Tools and Enablement at Block, to pick her brain on how they are using the Ralph Wiggum technique to automate updates across 25,000 repos and how she is strategically preparing for Gas Town. The team also breaks down the launch of OpenAI's new GPT-5.2 Codex model before closing out the week with a look at the weirdest tech from CES, from hypersonic knives to music-playing lollipops.

Follow today's stories:
  • Angie Jones: angiejones.tech | LinkedIn | X (Twitter)
  • Goose (Block's AI Agent): github.com/block/goose
  • Steve Yegge's "Welcome to Gas Town": Read on Medium
  • Geoffrey Huntley's Ralph Loop: ghuntley.com/ralph
  • Ryan Dahl on the End of Coding: @rough__sea
  • The Weirdest Tech of CES: Read the Article

Vanishing Gradients
Episode 68: A Builder's Guide to Agentic Search & Retrieval with Doug Turnbull & John Berryman

Vanishing Gradients

Play Episode Listen Later Jan 23, 2026 88:42


The best way to build a horrible search product? Don't ever measure anything against what a user wants.

Search veterans Doug Turnbull (led Search at Reddit and Shopify; wrote Relevant Search and AI Powered Search) and John Berryman (early engineer on GitHub Copilot; author of Relevant Search and Prompt Engineering for LLMs) join Hugo to talk about how to build agentic search applications.

We discuss:
  • The evolution of information retrieval as it moves from traditional keyword search toward "agentic search" and what this means for builders.
  • John's five-level maturity model (you can prototype today!) for AI adoption, moving from trad search to conversational AI to asynchronous research assistants that reason about result quality.
  • The Agentic Search Builders Playbook, including why and how you should "hand-roll" your own agentic loops to maintain control.
  • The importance of "revealed preferences" that LLM judges often miss: evaluations must use real clickstream data to capture preferences that semantic relevance alone cannot infer.
  • Patterns and anti-patterns for agentic search applications.
  • Learning and teaching search in the age of agents.

You can find the full episode on Spotify, Apple Podcasts, and YouTube. You can also interact directly with the transcript in NotebookLM: if you do, let us know anything you find in the comments!
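The "revealed preferences" point, judging relevance by what users actually click rather than by semantic similarity or an LLM judge alone, can be illustrated with a toy click-through-rate metric. The clickstream rows and document IDs below are entirely hypothetical.

```python
# Toy "revealed preferences" evaluation: rank results by observed
# click-through rate (CTR) from a clickstream log instead of relying on
# semantic relevance scores alone. Data and field names are hypothetical.

from collections import defaultdict

clickstream = [  # (query, doc_id, clicked)
    ("wireless mouse", "doc_a", True),
    ("wireless mouse", "doc_a", True),
    ("wireless mouse", "doc_b", False),
    ("wireless mouse", "doc_b", True),
    ("wireless mouse", "doc_c", False),
]

def ctr_by_doc(events, query):
    """CTR per document for one query: clicks / impressions."""
    shown, clicks = defaultdict(int), defaultdict(int)
    for q, doc, clicked in events:
        if q != query:
            continue
        shown[doc] += 1
        clicks[doc] += int(clicked)
    return {doc: clicks[doc] / shown[doc] for doc in shown}

ranking = sorted(ctr_by_doc(clickstream, "wireless mouse").items(),
                 key=lambda kv: kv[1], reverse=True)
print(ranking)
```

A production evaluation would correct for position bias and sample size, but even this sketch shows how behavioral data can reorder results that a pure semantic judge would score identically.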

The Ravit Show
How SREs are Leveraging AI: Coding Agents and the Future of Shell Scripting

The Ravit Show

Play Episode Listen Later Jan 23, 2026 7:53


The future of reliability is not one tool. It is a team of agents working together.

At AWS re:Invent, I had a chat with Francois Martel, Field CTO at NeuBird.ai, about how AI is changing the way developers and SREs handle reliability in the real world. Here are the key takeaways from our conversation:
  • Coding agents are becoming the front door to AI. Tools like GitHub Copilot and Cursor are getting massive adoption. When paired with NeuBird's Hawkeye agentic SRE server, these agents can jump straight into root cause analysis and even take action to remediate issues.
  • SREs are a natural fit for agents. SREs already live in the command line and think in scripts. Coding agents are an easy and practical entry point for bringing AI into day-to-day SRE workflows.
  • Agent adoption is speeding up. We are past experimentation. Customers are seeing value from early use cases, which is pushing broader and faster adoption of agent-based systems.
  • Enterprise security still matters. For larger organizations, NeuBird can deploy the agent inside the customer's VPC. The data stays in their environment and the full data path remains under their control.
  • AWS partnership momentum. NeuBird is launching a pay-as-you-go offering on the AWS Marketplace, making it one of the first agentic SRE servers you can try without long-term commitment and connect to tools like AWS, Datadog, Dynatrace, and Grafana.

If you want to see how agentic SRE works in practice, start with the pay-as-you-go option or the two-week free trial and pair it with your favorite coding agent. It was great catching up with François again and seeing how NeuBird is pushing the agentic SRE space forward.

#data #ai #awsreinvent #aws #agents #awspartners #copilot #theravitshow

Maintainable
Brittany Ellich: Using AI to Maintain Software, Not Rewrite It

Maintainable

Play Episode Listen Later Jan 21, 2026 60:36


Rewrites are seductive. Clean slates promise clarity, speed, and "doing it right this time." In practice, they're often late, over budget, and quietly demoralizing.

In this episode of Maintainable, Robby sits down with Brittany Ellich, a Senior Software Engineer at GitHub, to talk about a different path, one rooted in stewardship, readability, and resisting the urge to start over.

Brittany's career began with a long string of rebuild projects. Over time, she noticed a pattern: the estimates were wrong, feature development stalled, and teams burned energy reaching parity with systems they'd already had. That experience pushed her toward a strong belief: if software is in production and serving users, it's usually worth maintaining.

[00:00:57] What well-maintained software actually looks like. For Brittany, readability is the first signal. If code can't be understood, it can't be changed safely. Maintenance begins with making systems approachable for the next person.
[00:01:42] Rethinking technical debt. Her understanding of technical debt has evolved: rather than a fixed category of work, it's often anything that doesn't map directly to new features. Bugs, reliability issues, and long-term risks frequently get lumped together, making prioritization harder than it needs to be.
[00:05:49] Why AI changes the maintenance equation. Coding agents have made it easier to tackle small, previously ignored maintenance tasks. Instead of waiting for debt to accumulate into massive projects, teams can chip away incrementally. (Related: GitHub Copilot and the Copilot coding agent workflow she's explored.)
[00:07:16] Context from GitHub's billing systems. Working on metered billing at GitHub means correctness and reliability matter more than flash. Billing should be boring; when it's not, customers notice quickly.
[00:11:43] Navigating a multi-era codebase. GitHub's original Rails codebase is still in active use. Brittany relies heavily on Git blame and old pull requests to understand why decisions were made, treating them as a form of living documentation.
[00:25:27] Treating coding agents like teammates. Rather than delegating massive changes, Brittany assigns agents small, well-scoped tasks. She approaches them the same way she would a new engineer: clear instructions, limited scope, and careful review.
[00:36:00] Structuring the day to avoid cognitive overload. She breaks agent interaction into focused windows, checking in a few times a day instead of constantly monitoring progress. This keeps deep work intact while still moving maintenance forward.
[00:40:24] Low-risk ways to experiment. Improving test coverage and generating repository instructions are safe entry points. These changes add value without risking production behavior.
[00:54:10] Navigating team resistance and ethics. Brittany acknowledges skepticism around AI and encourages teams to start with existing backlog problems rather than selling AI as a feature factory.
[00:57:57] Books, habits, and staying balanced. Outside of software, Brittany recommends Atomic Habits by James Clear, sharing how small routines help her stay focused.

The takeaway is clear. AI doesn't replace engineering judgment. Used thoughtfully, it can support the unglamorous work that keeps software alive. Good software doesn't need a rewrite. It needs caretakers.

References Mentioned
  • GitHub – Brittany's current role and the primary environment discussed
  • GitHub Universe – Where Brittany presented her coding agent workflow
  • Atomic Habits by James Clear – Brittany's recommended book outside of tech
  • Overcommitted – Podcast Brittany co-hosts
  • The Balanced Engineer Newsletter – Brittany's monthly newsletter on engineering, leadership, and balance
  • Brittany Ellich's website – Central hub for her writing and links
  • GitHub Copilot – The AI tooling discussed throughout the episode
  • How the GitHub billing team uses the coding agent in GitHub Copilot to continuously burn down technical debt – GitHub blog post referenced

Thanks to Our Sponsor!
Turn hours of debugging into just minutes! AppSignal is a performance monitoring and error-tracking tool designed for Ruby, Elixir, Python, Node.js, JavaScript, and other frameworks. It offers six powerful features with one simple interface, providing developers with real-time insights into the performance and health of web applications. Keep your coding cool and error-free, one line at a time! Use the code maintainable to get a 10% discount for your first year. Check them out!

Subscribe to Maintainable on Apple Podcasts or Spotify, or search "Maintainable" wherever you stream your podcasts. Keep up to date with the Maintainable Podcast by joining the newsletter.

Dev Interrupted
Backstage's journey from spreadsheets to global IDP standard | Spotify's Tyson Singer

Dev Interrupted

Play Episode Listen Later Jan 20, 2026 40:49


Before Backstage became the industry standard for developer portals, Spotify's engineers relied on spreadsheets to navigate their massive microservices ecosystem.

Tyson Singer, Spotify's Head of Technology and Platforms, joins us to trace the evolution of their internal developer experience from a necessity for order into the open-source giant Backstage and its new SaaS evolution, Portal. We dig into how they use golden paths to align autonomous squads and how their new AI Knowledge Assistant (AiKA) reduced internal support tickets by nearly 50% while protecting developer flow. Finally, Tyson shares his philosophy on sustainable innovation, explaining how to train an engineering organization to run a marathon at a sprinter's pace.

Follow today's stories:
  • Spotify Engineering Blog: engineering.spotify.com
  • Spotify Portal: backstage.spotify.com
  • Confidence: confidence.spotify.com
  • Connect with Tyson: LinkedIn

Gradient Dissent - A Machine Learning Podcast by W&B
What a $42B Software Co. Really Spends on AI Tools

Gradient Dissent - A Machine Learning Podcast by W&B

Play Episode Listen Later Jan 20, 2026 67:46


"I don't worry about being replaced by AI. I worry about being replaced by someone who's really good at using AI."

Atlassian has 10,000+ engineers currently split-testing the world's top AI coding tools, from GitHub Copilot and Cursor to Claude Code. In this episode, Co-Founder & CEO Mike Cannon-Brookes joins Lukas Biewald to share what their data reveals about the world's best AI tools today. Hear how 24 years of building a tech giant and a massive internal study on AI productivity have shaped Mike's vision for the future of dev jobs.

Connect with us here:
  • Mike Cannon-Brookes: https://www.linkedin.com/in/mcannonbrookes/
  • Atlassian: https://www.linkedin.com/company/atlassian/
  • Lukas Biewald: https://www.linkedin.com/in/lbiewald/
  • Weights & Biases: https://www.linkedin.com/company/wandb/

00:00 Trailer
01:08 Introduction
03:11 Connecting Technology and Business Teams
07:22 The Impact of AI on Business Workflows
13:26 Developer Productivity and AI
21:03 Measuring Developer Efficiency
25:41 Future of AI in Development
34:59 Legacy Technology and Code Changes
39:29 AI's Role in Developer Productivity
47:40 AI and Junior Developers
52:30 Product-Led Growth and Business Strategy
01:00:29 Core Metrics for Sustainable Growth
01:06:56 Staying Creative in the Tech Industry

HTML All The Things - Web Development, Web Design, Small Business
How Open Source Makes Money? (Tailwind CSS Debacle)

HTML All The Things - Web Development, Web Design, Small Business

Play Episode Listen Later Jan 17, 2026 24:11


Despite Tailwind CSS usage continuing to grow, the company recently revealed a sharp revenue decline tied to the rise of AI coding tools. Founder Adam Wathan explained how tools like GitHub Copilot and ChatGPT reduced documentation traffic, cutting off Tailwind's primary revenue funnel. In this edition of Web News, Matt and Mike explore what this means for Tailwind, the broader open-source ecosystem, and how open-source projects actually make money in 2026. Show Notes: https://www.htmlallthethings.com/podcast/how-open-source-makes-money-tailwind-css-debacle

Dev Interrupted
Ralph Wiggum goes to Gas Town and the death of the IC

Dev Interrupted

Play Episode Listen Later Jan 16, 2026 28:27


In our first-ever Friday edition, Andrew and Ben dive into the viral "Ralph Loop" phenomenon and discuss how simple bash loops and deterministic context allocation are changing the unit economics of code. They also explore Steve Yegge's chaotic "Gas Town" concept for orchestrating AI agents, debate whether AI is killing the individual contributor role, and share a laugh over a creepy link generator that challenges our trust in URLs.

Follow today's stories:
  • Cowork: Claude Code for the rest of your work
  • Gas Town Emergency User Manual
  • Loom by Geoffrey Huntley
  • AI Killed the Individual Contributor
  • CreepyLink

Les Cast Codeurs Podcast
LCC 335 - 200 terminals in prod on a Friday

Les Cast Codeurs Podcast

Play Episode Listen Later Jan 16, 2026 103:16


De retour à cinq dans l'épisode, les cast codeurs démarrent cette année avec un gros épisode pleins de news et d'articles de fond. IA bien sûr, son impact sur les pratiques, Mockito qui tourne un page, du CSS (et oui), sur le (non) mapping d'APIs REST en MCP et d'une palanquée d'outils pour vous. Enregistré le 9 janvier 2026 Téléchargement de l'épisode LesCastCodeurs-Episode-335.mp3 ou en vidéo sur YouTube. News Langages 2026 sera-t'elle l'année de Java dans le terminal ? (j'ai ouïe dire que ça se pourrait bien…) https://xam.dk/blog/lets-make-2026-the-year-of-java-in-the-terminal/ 2026: Année de Java dans le terminal, pour rattraper son retard sur Python, Rust, Go et Node.js. Java est sous-estimé pour les applications CLI et les TUIs (interfaces utilisateur terminales) malgré ses capacités. Les anciennes excuses (démarrage lent, outillage lourd, verbosité, distribution complexe) sont obsolètes grâce aux avancées récentes : GraalVM Native Image pour un démarrage en millisecondes. JBang pour l'exécution simplifiée de scripts Java (fichiers uniques, dépendances) et de JARs. JReleaser pour l'automatisation de la distribution multi-plateforme (Homebrew, SDKMAN, Docker, images natives). Project Loom pour la concurrence facile avec les threads virtuels. PicoCLI pour la gestion des arguments. Le potentiel va au-delà des scripts : création de TUIs complètes et esthétiques (ex: dashboards, gestionnaires de fichiers, assistants IA). Excuses caduques : démarrage rapide (GraalVM), légèreté (JBang), distribution simple (JReleaser), concurrence (Loom). Potentiel : créer des applications TUI riches et esthétiques. Sortie de Ruby 4.0.0 https://www.ruby-lang.org/en/news/2025/12/25/ruby-4-0-0-released/ Ruby Box (expérimental) : Une nouvelle fonctionnalité permettant d'isoler les définitions (classes, modules, monkey patches) dans des boîtes séparées pour éviter les conflits globaux. 
ZJIT : Un nouveau compilateur JIT de nouvelle génération développé en Rust, visant à surpasser YJIT à terme (actuellement en phase expérimentale). Améliorations de Ractor : Introduction de Ractor::Port pour une meilleure communication entre Ractors et optimisation des structures internes pour réduire les contentions de verrou global. Changements syntaxiques : Les opérateurs logiques (||, &&, and, or) en début de ligne permettent désormais de continuer la ligne précédente, facilitant le style "fluent". Classes Core : Set et Pathname deviennent des classes intégrées (Core) au lieu d'être dans la bibliothèque standard. Diagnostics améliorés : Les erreurs d'arguments (ArgumentError) affichent désormais des extraits de code pour l'appelant ET la définition de la méthode. Performances : Optimisation de Class#new, accès plus rapide aux variables d'instance et améliorations significatives du ramasse-miettes (GC). Nettoyage : Suppression de comportements obsolètes (comme la création de processus via IO.open avec |) et mise à jour vers Unicode 17.0. 
Librairies Introduction pour créer une appli multi-tenant avec Quarkus et http://nip.io|nip.io https://www.the-main-thread.com/p/quarkus-multi-tenant-api-nipio-tutorial Construction d'une API REST multi-tenant en Quarkus avec isolation par sous-domaine Utilisation de http://nip.io|nip.io pour la résolution DNS automatique sans configuration locale Extraction du tenant depuis l'en-tête HTTP Host via un filtre JAX-RS Contexte tenant géré avec CDI en scope Request pour l'isolation des données Service applicatif gérant des données spécifiques par tenant avec Map concurrent Interface web HTML/JS pour visualiser et ajouter des données par tenant Configuration CORS nécessaire pour le développement local Pattern acme.127-0-0-1.nip.io résolu automatiquement vers localhost Code complet disponible sur GitHub avec exemples curl et tests navigateur Base idéale pour prototypage SaaS, tests multi-tenants Hibernate 7.2 avec quelques améliorations intéressantes https://docs.hibernate.org/orm/7.2/whats-new/%7Bhtml-meta-canonical-link%7D read only replica (experimental), crée deux session factories et swap au niveau jdbc si le driver le supporte et custom sinon. 
- A read-only child StatelessSession can be opened (it shares the transactional context)
- The hibernate-vector module adds binary, float16 and sparse vectors
- The SchemaManager can resynchronize sequences against the data in the tables
- Regexp in HQL with like

New version of Hibernate with Panache for Quarkus https://quarkus.io/blog/hibernate-panache-next/
- New experimental extension that unifies Hibernate ORM with Panache and Hibernate Reactive with Panache
- Entities can now work in blocking or reactive mode without changing their base type
- Support for stateless sessions (StatelessSession) in addition to traditional managed entities
- Jakarta Data integration for type-safe queries checked at compile time
- Operations are defined in nested repositories rather than static methods
- Several repositories can be defined for different operation modes on the same entity
- The different modes (blocking/reactive, managed/stateless) are reached through supertype methods
- Support for the @Find and @HQL annotations to generate type-safe queries
- Repository access via injection or via the generated metamodel
- Extension available on the main branch; feedback requested on Zulip or GitHub

Spring Shell 4.0.0 GA released - https://spring.io/blog/2025/12/30/spring-shell-4-0-0-ga-released
- Final release of Spring Shell 4.0.0, available on Maven Central
- Compatible with the latest Spring Framework and Spring Boot versions
- Reworked command model to simplify building interactive CLI applications
- jSpecify integration to improve safety against NullPointerExceptions
- More modular architecture allowing better customization and extension
- Documentation and samples fully updated to ease onboarding
- Migration guide to v4 available on the project wiki
- Bug fixes improving stability and reliability
- Builds standalone Java applications runnable with java -jar or as GraalVM native images
- Opinionated approach to CLI development while staying flexible for specific needs

A new release of the library that implements gatherers beyond those shipped in the JDK https://github.com/tginsberg/gatherers4j/releases/tag/v0.13.0
- gatherers4j v0.13.0
- New gatherers: uniquelyOccurringBy(), moving/runningMedian(), moving/runningMax/Min()
- Change: "moving" gatherers now include partial values by default (use excludePartialValues() to opt out)

LangChain4j 1.10.0 https://github.com/langchain4j/langchain4j/releases/tag/1.10.0
- Introduces a model catalog for Anthropic, Gemini, OpenAI and Mistral
- Adds observability and monitoring capabilities for agents
- Structured outputs, advanced tools, and PDF analysis via URL for Anthropic
- Transcription services support for OpenAI
- Chat configuration parameters can be passed as method arguments
- New moderation guardrail for incoming messages
- Reasoning content support for models
- Introduces hybrid search
- MCP client improvements

Mockito's lead departs after 10 years https://github.com/mockito/mockito/issues/3777
- Tim van der Lippe, Mockito's main maintainer, announces his departure for March 2026, closing a decade of contributions to the project
- One of the main reasons is burnout tied to recent JVM changes (JVM 22+) around agents, which impose heavy technical constraints with no simple alternative offered by the JDK maintainers
- He points to the lack of support for, and the pressure put on, open source volunteers during these major technology transitions
- The growing complexity of supporting Kotlin, which uses the JVM in its own specific way, makes the Mockito code base harder and less pleasant to maintain in his view
- He has lost the enjoyment and now prefers to spend his free time on other projects such as Servo, a web engine written in Rust
- A transition period runs until March to hand maintenance over to new contributors

Infrastructure

Kubernetes' first benefit is not scaling - https://mcorbin.fr/posts/2025-12-29-kubernetes-scale/
- Before Kubernetes, running applications in production required a stack of complex tools (Ansible, Puppet, Chef) with a lot of manual configuration
- Load balancing was done with HAProxy and Keepalived in active/passive mode, with manual configuration updates on every instance change
- Service discovery and rollouts were orchestrated by hand, instance by instance, with no automated reconciliation
- Each stack (Java, Python, Ruby) had its own deployment method with no standardization (rpm, deb, tar.gz, jar)
- Resource management was manual, often one application per machine, wasting capacity and complicating maintenance
- Kubernetes standardizes everything into a handful of YAML resources (Deployment, Service, Ingress, ConfigMap, Secret) with a simple declarative format
- All the critical features are built in: service discovery, load balancing, scaling, storage, firewalling, logging, fault tolerance
- The hundreds of shell scripts and Ansible playbooks maintained before added up to more complexity than Kubernetes itself
- Kubernetes becomes relevant as soon as you start rebuilding these features by hand, which happens very quickly
- The technology is flexible and can handle modern applications as well as legacy monoliths with specific constraints

Mole
https://github.com/tw93/Mole
- An all-in-one command-line (CLI) tool to clean up and optimize macOS
- Combines features of popular apps such as CleanMyMac, AppCleaner, DaisyDisk and iStat Menus
- Deep-scans and removes caches, log files and browser leftovers
- Smart uninstaller that cleanly removes applications and their hidden files (Launch Agents, preferences)
- Interactive disk space analyzer to visualize file usage and manage large documents
- Real-time dashboard (mo status) to monitor CPU, GPU, memory and network
- Developer-focused purge to delete build artifacts (node_modules, target, etc.)
- Optional Raycast or Alfred integration to launch commands quickly
- Simple installation via Homebrew or a curl script

Secure Docker images for every developer https://www.docker.com/blog/docker-hardened-images-for-every-developer/
- Docker makes its Hardened Images (DHI) free and open source (Apache 2.0 license) for all developers
- The images are minimal, production-ready and secure by default, to counter the explosion of software supply chain attacks
- They build on familiar bases such as Alpine and Debian, ensuring high compatibility and easy migration
- Every image ships a complete, verifiable SBOM (Software Bill of Materials) and SLSA level 3 provenance for full transparency
- Using them drastically reduces the number of vulnerabilities (CVEs) and image size (up to 95% smaller)
- Docker extends this hardened approach to Helm charts and MCP servers (Mongo, Grafana, GitHub, etc.)
- Commercial offerings (DHI Enterprise) remain available for specific needs: critical fixes within 7 days, FIPS/FedRAMP support, or extended lifecycle support (ELS)
- An experimental Docker AI assistant can analyze existing containers and recommend the matching hardened versions
- The initiative is backed by major partners including Google, MongoDB, Snyk and the CNCF

Web

Masonry layout is landing in the CSS specification and browsers are starting to implement it https://webkit.org/blog/17660/introducing-css-grid-lanes/
- Lays out HTML elements one after another in columns: first across the first row, and once the first row is full, each next element goes into the column where it can sit highest, and so on
- After middleware plumbing, front-end masonry :laughing:

Data and Artificial Intelligence

REST APIs should not be mapped 1:1 to MCP https://nordicapis.com/why-mcp-shouldnt-wrap-an-api-one-to-one/
- The problem: wrapping an API as-is in the MCP (Model Context Protocol) is an anti-pattern
- MCP's purpose: designed for AI agents, it should be an intent interface, not an API mirror; agents understand tasks, not complex API mechanics (authentication, pagination, orchestration)
- Consequences of one-to-one mapping: agent confusion, errors, hallucinations; hard-to-handle orchestrations (several calls for a single action); exposure of the API's weaknesses (heavy schemas, obsolete endpoints); higher maintenance on every API change
- Better approach: build MCP tools like SDKs for agents, encapsulating the logic needed to complete a specific task
- Recommended practices: design around user intents/actions (e.g. "create a project", "summarize a document"); group calls into single workflows or actions; use natural language for names and definitions; limit the exposed API surface for security and clarity; enforce strict input/output schemas to guide the agent and reduce ambiguity

Agents in production with AWS - https://blog.ippon.fr/2025/12/22/des-agents-en-production-avec-aws/
- AWS re:Invent 2025 put generative AI and AI agents massively front and center
- An AI agent combines an LLM, a calling loop and invocable tools
- Strands Agents SDK eases prototyping with built-in ReAct loops and memory management
- Managed MLflow tracks experiments and defines performance metrics
- Nova Forge optimizes models by retraining on specific data to cut cost and latency
- Bedrock AgentCore industrializes deployment with a serverless, auto-scaling runtime
- AgentCore offers nine pillars including observability, authentication, a code interpreter and a managed browser
- Anthropic's MCP protocol standardizes how tools are supplied to agents
- SageMaker AI and Bedrock centralize access to closed source and open source models through a single API
- AWS is betting on chatbots evolving into agentic systems optimized with more frugal models

Debezium 3.4 brings several interesting improvements https://debezium.io/blog/2025/12/16/debezium-3-4-final-released/
- Fixed the Oracle low-watermark computation issue that caused performance loss
- Fixed heartbeat event emission in the Oracle connector with CTE queries
- Better logs for understanding active transactions in the Oracle connector
- Memory guards to protect against very large database schemas
- Geometry coordinate transformation support for better spatial data handling
- Quarkus Dev Services extension that automatically starts a database and Debezium in dev mode
- OpenLineage integration to trace data lineage and follow data flows through pipelines
- Compatibility tested with Kafka Connect 4.1 and Kafka brokers 4.1

Infinispan 16.0.4 and .5 https://infinispan.org/blog/2025/12/17/infinispan-16-0-4
- Spring Boot 4 and Spring 7 supported
- Metrics improvements
- Two serialization bugs fixed

Building a research agent in Java with the Interactions API https://glaforge.dev/posts/2026/01/03/building-a-research-assistant-with-the-interactions-api-in-java/
- A Java AI research assistant built on the Gemini Interactions API, testing the SDK Guillaume implemented
- Four-phase workflow: planning (Gemini Flash + Google Search), research (the "Deep Research" model as a background task), synthesis (Gemini Pro, executive report), infographic (Nano Banana Pro, generated from the synthesis)
- Interactions API: server-side state management, background tasks, multimodal responses (images)
- Highlight: the API's state management (versus stateless LLM calls)
- Takeaway: the Java SDK proved effective for complex use cases

Stephan Janssen (the father of Devoxx) built an MCP (Model Context Protocol) server on top of LSP (Language Server Protocol) so coding assistants can analyze code by actually understanding it rather than grepping through it https://github.com/stephanj/LSP4J-MCP
- The problem: AI assistants often navigate code with text search (grep-style), which lacks semantic context, produces noise (false positives) and burns tokens needlessly
- The LSP4J-MCP solution: a standalone approach that wraps the Eclipse language server (JDTLS) behind the MCP protocol
- Main benefit: deep semantic understanding of Java code (types, hierarchies, references) without opening a heavyweight IDE like IntelliJ
- Comparison: AST tooling is too light (no cross-file understanding); IntelliJ MCP is powerful but requires the IDE to be open (resource-hungry); LSP4J-MCP is the best of both worlds for terminal, remote (SSH) or CI/CD workflows
- Key features: exposes 5 tools to the AI (find_symbols, find_references, find_definition, document_symbols, find_interfaces_with_method)
- Results: roughly 100x fewer tokens spent on navigation, and higher precision (distinguishing overloads, scopes, etc.)
- Availability: open source on GitHub, ready to integrate with Claude Code, Gemini CLI, etc.
- Note: Claude Code 2.0.74 added a tool supporting LSP ( https://github.com/anthropics/claude-code/blob/main/CHANGELOG.md#2074 )

Awesome (GitHub) Copilot https://github.com/github/awesome-copilot
- A community collection of instructions, prompts and configurations to get the most out of GitHub Copilot
- Offers specialized "agents" that integrate with MCP servers to improve specific workflows
- Includes targeted prompts for code generation, documentation and complex problem solving
- Provides detailed instructions on coding standards and best practices across many frameworks
- Offers "skills": folders of resources for specialized technical tasks (skills have been available in Copilot for a month: https://github.blog/changelog/2025-12-18-github-copilot-now-supports-agent-skills/ )
- Easy installation through a dedicated MCP server, compatible with VS Code and Visual Studio
- Community contributions are encouraged to grow the prompt and agent libraries
- Boosts productivity with pre-configured solutions for many languages and domains
- MIT licensed and actively maintained by contributors around the world
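Several of the MCP items above argue for exposing intent-level tools to agents rather than mirroring an API endpoint by endpoint. A minimal sketch of that idea, assuming a purely hypothetical project API (no real MCP SDK is used; the interface, method names and endpoints are invented for illustration):

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch: one intent-level, MCP-style tool that hides
// several lower-level REST calls behind a single agent-facing action.
public class CreateProjectTool {

    // Stand-in for a real REST client; these methods are invented.
    interface ProjectApi {
        String createProject(String name);              // POST /projects
        void addMember(String projectId, String user);  // POST /projects/{id}/members
        void enableCi(String projectId);                // PUT  /projects/{id}/ci
    }

    private final ProjectApi api;

    public CreateProjectTool(ProjectApi api) {
        this.api = api;
    }

    // The single "intent" the agent sees: strict input, strict output.
    public Map<String, Object> handle(String name, List<String> members) {
        String id = api.createProject(name);        // the agent asks once...
        members.forEach(m -> api.addMember(id, m)); // ...the tool orchestrates
        api.enableCi(id);
        return Map.of("projectId", id, "status", "ready");
    }

    public static void main(String[] args) {
        // In-memory fake so the sketch runs without any network access.
        ProjectApi fake = new ProjectApi() {
            public String createProject(String name) { return "p-1"; }
            public void addMember(String id, String user) { }
            public void enableCi(String id) { }
        };
        Map<String, Object> out = new CreateProjectTool(fake).handle("demo", List.of("alice"));
        System.out.println(out.get("projectId") + " " + out.get("status")); // prints "p-1 ready"
    }
}
```

The agent never sees pagination, authentication or the three underlying endpoints, only the "create a project" action with a strict output schema, which is the design the article recommends.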
AI and productivity: 2025 year in review (Laura Tacho - DX) https://newsletter.getdx.com/p/ai-and-productivity-year-in-review?aid=recNfypKAanQrKszT
- In 2025, AI-assisted engineering became the norm: about 90% of developers use AI tools monthly, and more than 40% daily
- Researchers (Microsoft, Google, GitHub) stress that lines of code (LOC) remain a poor impact metric, since AI generates a lot of code without necessarily delivering more business value
- While AI improves individual efficiency, it may hurt collaboration long term, as developers spend more time "talking" to the AI than to their colleagues
- The developer identity is shifting from "code producer" to a director role: delegating, validating and exercising strategic judgment
- AI could accelerate junior developers' growth by pushing them to manage projects and delegate earlier, acting as an accelerator rather than making them obsolete
- The emphasis moves to creativity over plain automation, to reimagine ways of working and get more impactful results
- Success in 2026 will depend on companies targeting real bottlenecks (technical debt, documentation, compliance) rather than simply trying every new AI model
- The newsletter warns that press headlines often oversimplify AI research, sometimes hiding the crucial nuances of the actual studies

A developer describes in a Twitter thread his advanced use of Claude Code for development: sub-agents, slash commands, how to optimize context, and more https://x.com/AureaLibe/status/2008958120878330329?s=20

Tooling

IntelliJ IDEA, thread dumps and Project Loom (virtual threads) - https://blog.jetbrains.com/idea/2025/12/thread-dumps-and-project-loom-virtual-threads/
- Java virtual threads improve hardware utilization for parallel I/O operations with few code changes
- A server can now handle millions of threads instead of a few hundred
- Existing tools struggle to display and analyze millions of simultaneous threads
- Asynchronous debugging is tricky because the scheduler and the worker run on different threads
- Thread dumps remain essential for diagnosing deadlocks, frozen UIs and thread leaks
- Netflix found a virtual-thread deadlock by analyzing a heap dump; the bug was fixed in Java 25, but it was quite a high-wire act
- IntelliJ IDEA has supported virtual threads natively since their release, including display of acquired locks
- IntelliJ IDEA can open thread dumps produced by other tools such as jcmd
- The support also extends to Kotlin coroutines in addition to virtual threads

A few notes on IntelliJ IDEA 2025.3 https://blog.jetbrains.com/idea/2025/12/intellij-idea-2025-3/
- Unified distribution bundling more free features
- Improved command completion in the IDE
- New features for the Spring debugger
- The Islands theme becomes the default
- Full support for Spring Boot 4 and Spring Framework 7
- Java 25 compatibility
- Support for Spring Data JDBC and Vitest 4
- Native support for Junie and Claude Agent for AI
- Transparent AI quota, with a Bring Your Own Key option coming
- Stability, performance and user-experience fixes

Lots of small online developer tools https://blgardner.github.io/prism.tools/
- Password, CSS gradient and QR code generators
- Base64 and JWT encoding/decoding
- JSON formatting, and more
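The small online tools above include Base64 encoding and decoding, which maps directly onto the JDK's built-in java.util.Base64. A quick runnable sketch:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Minimal sketch of what tools like prism.tools do for Base64:
// the JDK ships the encoder/decoder in java.util.Base64.
public class Base64Demo {
    public static void main(String[] args) {
        String text = "les cast codeurs";
        // Standard Base64 encoding of a UTF-8 string
        String encoded = Base64.getEncoder()
                .encodeToString(text.getBytes(StandardCharsets.UTF_8));
        // ...and decoding back to the original
        String decoded = new String(Base64.getDecoder().decode(encoded),
                StandardCharsets.UTF_8);
        System.out.println(encoded); // bGVzIGNhc3QgY29kZXVycw==
        System.out.println(decoded); // les cast codeurs
    }
}
```

JWTs decode the same way: each of the token's three dot-separated segments is Base64url, handled by Base64.getUrlDecoder().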
resumectl - Your resume as code https://juhnny5.github.io/resumectl/
- A command-line (CLI) tool written in Go that generates a resume from a YAML file
- Exports to several formats: PDF, HTML, or rendered directly in the terminal
- Five built-in themes (Modern, Classic, Minimal, Elegant, Tech), customizable with specific colors
- An init feature (resumectl init) can import data automatically from LinkedIn and GitHub (most-starred projects)
- Supports photos, with black-and-white filter and shape options (round/square)
- A server mode (resumectl serve) previews changes live in a local browser
- Ships as a single binary with no complex external dependencies for the templates

mactop - A "top"-style monitor for Apple Silicon https://github.com/metaspartan/mactop
- A command-line (TUI) monitoring tool built specifically for Apple Silicon chips (M1, M2, M3, M4, M5)
- Tracks CPU (E-cores and P-cores), GPU and ANE (Neural Engine) usage in real time
- Shows power draw (wattage) for the system, CPU, GPU and DRAM
- Reports SoC temperatures, GPU frequencies and overall thermal state
- Monitors RAM and swap usage, as well as network and disk I/O
- Offers 10 different layouts and several customizable color themes
- No sudo required: it relies on Apple's native APIs (SMC, IOReport, IOKit)
- Includes a detailed process list (htop-like) with the ability to kill processes from the interface
- Headless mode exports metrics as JSON, with an optional Prometheus server
- Written in Go with CGO and Objective-C components
Goodbye direnv, hello mise https://codeka.io/2025/12/19/adieu-direnv-bonjour-mise/
- The author replaces his usual tools (direnv, asdf, task, just) with a single versatile tool written in Rust: mise
- mise has three main roles: package manager (languages and tools), environment variable manager, and task runner
- Unlike direnv, it can manage aliases and uses a structured configuration file (mise.toml) instead of shell scripting
- Configuration is hierarchical, with per-directory overrides and a "trust" system for security
- A highlighted killer feature is secrets management: mise integrates with age to encrypt secrets (via SSH keys) directly in the configuration file
- Supports a huge list of languages and tools through an internal registry and plugins (compatible with the asdf ecosystem)
- It streamlines the development workflow by combining tool installation and task automation in a single file
- The author concludes on the tool's power, flexibility and excellent performance after a few hours of testing

Claude Code v2.1.0 https://github.com/anthropics/claude-code/blob/main/CHANGELOG.md#210
- Hot reload of skills: changes in ~/.claude/skills now apply instantly without restarting the session
- Sub-agents and forks: skills and slash commands can run in a forked sub-agent context via context: fork
- Language settings: a language parameter configures the default response language (e.g. language: "french")
- Terminal improvements: Shift+Enter now works natively in several terminals (iTerm2, WezTerm, Ghostty, Kitty) without manual configuration
- Security and bug fixes: fixed an issue where sensitive data (API keys, OAuth tokens) could appear in debug logs
- New slash commands: /teleport and /remote-env for claude.ai subscribers to manage remote sessions
- Plan mode: the /plan shortcut enables plan mode directly from the prompt, and the permission prompt on entering it was removed
- Vim and navigation: many Vim motions added (text objects, f/F/t/T motion repeats, indentation, etc.)
- Performance: faster startup and better terminal rendering of Unicode/emoji
- gitignore handling: a respectGitignore setting in settings.json controls the @-mention file picker behavior

Methodology

200 production deployments a day, even on Fridays: lessons learned https://mcorbin.fr/posts/2025-03-21-deploy-200/
- Frequent deployment, Fridays included, is a marker of technical maturity and raises overall productivity
- Technical excellence is an indispensable strategic asset for shipping quality products fast
- A pragmatic service-oriented architecture (SOA) enables independent deployments and reduces cognitive load
- Service isolation is crucial: a developer must be able to test a service locally without depending on the whole infrastructure
- Automation with Kubernetes and a GitOps approach with ArgoCD enable continuous, safe deployments
- Feature flags and a solid permission system decouple the technical deployment from the functional rollout to users
- Developer autonomy is reinforced by self-service tools (an in-house CLI) to manage infrastructure and diagnose incidents without bottlenecks
- An observability culture built in from design time enables quick detection of and reaction to anomalies in production
- Accepting failure as inevitable leads to more resilient systems that recover automatically

"Vibe Coding" vs "Prompt Engineering": AI and the future of software development https://www.romenrg.com/blog/2025/12/25/vibe-coding-vs-prompt-engineering-ai-and-the-future-of-software-development/
- In 2025, AI moved from experiment to essential infrastructure for building software
- AI does not replace engineers; it amplifies their skills, their judgment and the quality of their thinking
- Distinction between "vibe coding" (fast, intuitive, ideal for prototypes) and "prompt engineering" (deliberate, constrained, necessary for maintainable systems)
- Context is crucial ("context engineering"): AI becomes truly powerful when connected to real systems (GitHub, Jira, etc.) through protocols like MCP
- Specialized agents (RFC writing, code review, architecture) beat generic models for quality results
- The "technical product manager" engineer emerges, able to do the work of a small team alone with AI, provided the technical fundamentals are mastered
- The major risk: AI lets you go very fast in the wrong direction when human judgment and experience are missing
- The overall bar rises: solid technical foundations matter more than ever to avoid rapidly piling up technical debt

A code review party of one (Kent Beck)! https://tidyfirst.substack.com/p/party-of-one-for-code-review?r=64ov3&utm_campaign=post&utm_medium=web&triedRedirect=true
- Traditional code review, inherited from IBM's formal inspections, is running out of steam: it has become too slow and asynchronous for the pace of modern development
- With AI ("the genie"), code production outpaces human review capacity, creating a major bottleneck
- Code review should shift to two new priorities: a sanity check that the AI did what was asked, and control of structural drift in the code base
- Keeping the structure healthy matters not only for future human developers but also so the AI can keep understanding and modifying the code efficiently without losing context
- Kent Beck is experimenting with automated tools (such as CodeRabbit) for summaries and architecture diagrams to keep a global awareness of fast-moving changes
- Even with automated tools, pair programming remains irreplaceable for the richness of its exchanges and the healthy social pressure it puts on thinking
- Solo code review is not a goal in itself, but a necessary adaptation when working alone with augmented code-generation tools

Law, society and organization

Lego launches Lego Smart Play, with Bricks, Smart Tags and Smart Minifigures for new interactive Lego builds https://www.lego.com/fr-fr/smart-play
- LEGO SMART Play: technology that reacts to children's play, built on three key elements
- SMART Brick: a 2x4 LEGO "brain" brick with an accelerometer, reactive lights, a color detector and a sound synthesizer; it reacts to movement (holding, turning, tapping)
- SMART Tags: small smart pieces that tell the SMART Brick its role (e.g. helicopter, car) and which sounds to produce; they trigger sounds, mini-games and secret missions
- SMART Minifigures: activated near a SMART Brick, they reveal unique personalities (sounds, moods, reactions) through it, encouraging imagination
- How it works: the SMART Brick detects SMART Tags and SMART Minifigures and reacts to movement with dynamic lights and sounds
- Compatibility: assembles with classic LEGO bricks
- Goal: interactive, unique and unlimited play experiences

Conferences

The conference list comes from Developers Conferences Agenda/List by Aurélie Vache and contributors:
January 14-17, 2026: SnowCamp 2026 - Grenoble (France)
January 22, 2026: DevCon #26: security / post-quantum / hacking - Paris (France)
January 28, 2026: Software Heritage Symposium - Paris (France)
January 29-31, 2026: Epitech Summit 2026 - Paris - Paris (France)
February 2-5, 2026: Epitech Summit 2026 - Moulins - Moulins (France)
February 3, 2026: Cloud Native Days France 2026 - Paris (France)
February 3-4, 2026: Epitech Summit 2026 - Lille - Lille (France)
February 3-4, 2026: Epitech Summit 2026 - Mulhouse - Mulhouse (France)
February 3-4, 2026: Epitech Summit 2026 - Nancy - Nancy (France)
February 3-4, 2026: Epitech Summit 2026 - Nantes - Nantes (France)
February 3-4, 2026: Epitech Summit 2026 - Marseille - Marseille (France)
February 3-4, 2026: Epitech Summit 2026 - Rennes - Rennes (France)
February 3-4, 2026: Epitech Summit 2026 - Montpellier - Montpellier (France)
February 3-4, 2026: Epitech Summit 2026 - Strasbourg - Strasbourg (France)
February 3-4, 2026: Epitech Summit 2026 - Toulouse - Toulouse (France)
February 4-5, 2026: Epitech Summit 2026 - Bordeaux - Bordeaux (France)
February 4-5, 2026: Epitech Summit 2026 - Lyon - Lyon (France)
February 4-6, 2026: Epitech Summit 2026 - Nice - Nice (France)
February 5, 2026: Web Days Convention - Aix-en-Provence (France)
February 12, 2026: Strasbourg Craft #1 - Strasbourg (France)
February 12-13, 2026: Touraine Tech #26 - Tours (France)
February 19, 2026: ObservabilityCON on the Road - Paris (France)
March 6, 2026: WordCamp Nice 2026 - Nice (France)
March 18-19, 2026: Agile Niort 2026 - Niort (France)
March 20, 2026: Atlantique Day 2026 - Nantes (France)
March 26, 2026: Data Days Lille - Lille (France)
March 26-27, 2026: SymfonyLive Paris 2026 - Paris (France)
March 26-27, 2026: REACT PARIS - Paris (France)
March 27-29, 2026: Shift - Nantes (France)
March 31, 2026: ParisTestConf - Paris (France)
April 1, 2026: AWS Summit Paris - Paris (France)
April 2, 2026: Pragma Cannes 2026 - Cannes (France)
April 9-10, 2026: AndroidMakers by droidcon - Paris (France)
April 16-17, 2026: MiXiT 2026 - Lyon (France)
April 22-24, 2026: Devoxx France 2026 - Paris (France)
April 23-25, 2026: Devoxx Greece - Athens (Greece)
April 24-25, 2026: Faiseuses du Web 5 - Dinan (France)
May 6-7, 2026: Devoxx UK 2026 - London (UK)
May 22, 2026: AFUP Day 2026 Lille - Lille (France)
May 22, 2026: AFUP Day 2026 Paris - Paris (France)
May 22, 2026: AFUP Day 2026 Bordeaux - Bordeaux (France)
May 22, 2026: AFUP Day 2026 Lyon - Lyon (France)
May 29, 2026: NG Baguette Conf 2026 - Paris (France)
June 5, 2026: TechReady - Nantes (France)
June 5, 2026: Fork it! - Rouen - Rouen (France)
June 6, 2026: Polycloud - Montpellier (France)
June 11-12, 2026: DevQuest Niort - Niort (France)
June 11-12, 2026: DevLille 2026 - Lille (France)
June 12, 2026: Tech F'Est 2026 - Nancy (France)
June 17-19, 2026: Devoxx Poland - Krakow (Poland)
June 17-20, 2026: VivaTech - Paris (France)
July 2, 2026: Azur Tech Summer 2026 - Valbonne (France)
July 2-3, 2026: Sunny Tech - Montpellier (France)
July 3, 2026: Agile Lyon 2026 - Lyon (France)
August 2, 2026: 4th Tech Summit on Artificial Intelligence & Robotics - Paris (France)
September 4, 2026: JUG Summer Camp 2026 - La Rochelle (France)
September 17-18, 2026: API Platform Conference 2026 - Lille (France)
September 24, 2026: PlatformCon Live Day Paris 2026 - Paris (France)
October 1, 2026: WAX 2026 - Marseille (France)
October 1-2, 2026: Volcamp - Clermont-Ferrand (France)
October 5-9, 2026: Devoxx Belgium - Antwerp (Belgium)

Contact us

To react to this episode, come discuss on the Google group https://groups.google.com/group/lescastcodeurs
Reach us on X/Twitter https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com
Record a crowdcast or ask a crowdquestion
Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs
All episodes and all the info at https://lescastcodeurs.com/

Tech Disruptors
Microsoft Killing Tech Debt with Agents


Jan 15, 2026 · 39:16


“AI removes the friction from the intent to the implementation,” says Amanda Silver, corporate vice president and head of products, apps and agents at Microsoft. She talks with Bloomberg Intelligence senior technology analyst Anurag Rana about how copilots and agents are collapsing the software lifecycle, from natural-language ideas to code, tests and operations; shifting developers from typing to reviewing and governance; and making “evals” the new testing standard. She cites big-tech technical-debt wins, such as .NET and Java upgrades requiring 70-80% less manual effort, and SRE agents that cut remediation time. The two also discuss GitHub Copilot, already among the top contributors in key repos and adopted across most large enterprises.

Power Platform Boost Podcast
Creative BOOST (#76)

Power Platform Boost Podcast

Play Episode Listen Later Jan 14, 2026 57:03 Transcription Available


News
- Generative Pages: Link Them to Your Model-Driven App Forms by Ben den Blanken
- Enhancing Canvas Apps with Generative Pages in Model-Driven Apps by Rasika Chaudhary
- Dataverse ERD Visualizer: See Your Data Model, Understand Your Data by Allan deCastro
- Can you build an XrmToolBox tool with Zero experience - Vibe coding with GitHub Copilot by Matt Collins-Jones
- Power Pages: Bring your own code! (Tutorial) by Nick Doelman
- Exam AB-731: AI Transformation Leader » The CRM Ninja by EY Kalman
- Code Apps Simplified: The BEST Power Apps by Charles Sexton and Josh Giles
- Power Apps PCF Components - A Functional Overview For Beginners by Nuno Subtil
- What is AI builder and what is it used for? by Malin Martnes
- About Power Apps per app plans - Power Platform
- The One Card: Build Once, Speak All Languages by Adi Leibowitz
- Tech Talks presents: From Prompts to Python: Code Interpreter in Microsoft Copilot Studio by Scott Durow
- Copilot Studio Agent Academy

Be sure to subscribe so you don't miss a single episode of Power Platform BOOST!
Thank you for buying us a coffee: buymeacoffee.com
Podcast home page: https://powerplatformboost.com
Email: hello@powerplatformboost.com

Follow us!
- Twitter: https://twitter.com/powerplatboost
- Instagram: https://www.instagram.com/powerplatformboost/
- LinkedIn: https://www.linkedin.com/company/powerplatboost/
- Facebook: https://www.facebook.com/profile.php?id=100090444536122
- Mastodon: https://mastodon.social/@powerplatboost

Dev Interrupted
Inventing the Ralph Wiggum Loop | Creator Geoffrey Huntley

Dev Interrupted

Play Episode Listen Later Jan 13, 2026 58:14


Geoffrey Huntley argues that while software development as a profession is effectively dead, software engineering is more alive—and critical—than ever before. In this episode, the creator of the viral "Ralph" agent joins us to explain how simple bash loops and deterministic context allocation are fundamentally changing the unit economics of code. We dive deep into the mechanics of managing "context rot," avoiding "compaction," and why building your own "Gas Town" of autonomous agents is the only way to survive the coming rift.

LinearB: Measure the impact of GitHub Copilot and Cursor

Follow the show:
- Subscribe to our Substack
- Follow us on LinkedIn
- Subscribe to our YouTube Channel
- Leave us a Review

Follow the hosts:
- Follow Andrew
- Follow Ben
- Follow Dan

Follow today's guest(s):
- Geoffrey's Website & Blog: ghuntley.com
- Build Your Own Coding Agent Workshop: ghuntley.com/agent
- Ralph Wiggum as a Software Engineer: ghuntley.com/ralph
- Steve Yegge's "Welcome to Gas Town": Read on Medium
- The "Cursed" Programming Language: github.com/ghuntley/cursed

OFFERS
- Start Free Trial: Get started with LinearB's AI productivity platform for free.
- Book a Demo: Learn how you can ship faster, improve DevEx, and lead with confidence in the AI era.

LEARN ABOUT LINEARB
- AI Code Reviews: Automate reviews to catch bugs, security risks, and performance issues before they hit production.
- AI & Productivity Insights: Go beyond DORA with AI-powered recommendations and dashboards to measure and improve performance.
- AI-Powered Workflow Automations: Use AI-generated PR descriptions, smart routing, and other automations to reduce developer toil.
- MCP Server: Interact with your engineering data using natural language to build custom reports and get answers on the fly.

Analyse Asia with Bernard Leong
Raise Your Level of AI Ambition - Microsoft's AI Strategy for Developers with Jay Parikh

Analyse Asia with Bernard Leong

Play Episode Listen Later Jan 8, 2026 49:52


Fresh out of the studio, Jay Parikh, Executive Vice President of Core AI at Microsoft, joins us to explore how Microsoft is fundamentally transforming software development by placing AI at the center of every stage of the development lifecycle. He shares his career journey from scaling the internet at Akamai Technologies during the dot-com boom, to leading infrastructure at Facebook through the mobile revolution, and now driving Microsoft's AI-first transformation where the definition of "developer" itself is rapidly evolving. Jay explains that Microsoft's Core AI team is moving beyond traditional tiered architecture to a new paradigm where large language models can think, reason, plan, and interact with tools—shifting developer time from typing code to specification and verification while enabling parallel project execution through specialized AI agents. He highlights how organizations like Singapore Airlines cut project timelines from 11 weeks to 5 weeks using GitHub Copilot and challenges both individuals and enterprises to raise their level of ambition: moving from being amazed by AI to being frustrated it can't do more, while building cultural experiments that unlock this exponential technology. Closing the conversation, Jay shares what great looks like for Microsoft's Core AI to enable AI transformation for every organization around the world. "There's this set of people that are using these AI-powered tools and they're like, 'Wow, that's amazing!' Stunned as to how incredible the response is from AI. Then there's another set of people that have these experiences when they work with AI—they're frustrated with it because they're just like, 'Why can't it do this for me yet?' And they're pushing the envelope of what this LLM or what this system can do, what this tool can do. If you are in the former group, then you need to raise your level of ambition. You need to delegate harder things to it.
And if you're in the second group, then you need to learn more about how these things work." - Jay Parikh

Episode Highlights:
[00:00] Quote of the day by Jay Parikh
[01:00] Introducing Microsoft's Core AI strategy and transformation
[02:34] Career philosophy: pursuing hard problems and discomfort
[04:08] Core AI team's mission: empowering every developer
[06:00] Reinventing the entire software development lifecycle
[09:17] Parallel projects and agents transforming development workflows
[12:12] AI first strategy across Microsoft's product ecosystem
[15:37] GitHub platform beyond code: context and orchestration
[20:33] Building AI platforms: lessons from scale experience
[21:00] Two mindsets: amazement versus frustration with AI
[22:15] Raising ambition and pushing AI tool boundaries
[25:00] Enterprise adoption challenges: tools and cultural transformation
[28:00] Learning loops: shrinking circles to accelerate growth
[31:00] Alignment without tight coupling across global teams
[36:56] Concrete trends: use tools, understand model development
[40:27] Responsible AI and security built from start
[43:30] Asia innovation: two thirds of developers here
[46:19] Raising ambition to unlock human creativity collaboration
[48:35] Goal: AI transformation for every global organization

Profile: Jay Parikh, Executive Vice President, Core AI, Microsoft
LinkedIn: https://www.linkedin.com/in/jayparikh/

Podcast Information: Bernard Leong hosts and produces the show. The proper credits for the intro and end music are "Energetic Sports Drive." G. Thomas Craig mixed and edited the episode in both video and audio format.

RunAs Radio
What AI can do for SysAdmins in 2026 with Cecilia Wirén

RunAs Radio

Play Episode Listen Later Jan 7, 2026 40:53


AI tools continue to evolve - what can we do with them today? Richard chats with Cecilia Wirén about her experiences using the latest AI tools to support DevOps workflows, diagnostics, and the crafting of new scripts. Cecilia focuses on tools that can help admins who occasionally work on scripts, including getting into a GitHub workflow to track prompts and results generated by LLMs, so you can always revert and learn from various approaches to interact with these new tools. The tools continue to evolve; it's worth looking at the latest features and models!

Links:
- Azure SRE Agent
- Microsoft Security Copilot
- GitHub Copilot
- Awesome Copilot
- Copilot Extensions

Recorded December 3, 2025

Monde Numérique - Jérôme Colombain

In 2025, a new expression established itself in the tech vocabulary: "vibe coding." Behind this intriguing term lies a practice that is profoundly transforming the way software is built. Vibe coding, roughly "intuitive programming," describes an approach in which the developer no longer writes code line by line but simply describes what they want to an artificial intelligence. Popularized by Andrej Karpathy, former head of AI at Tesla and a co-founder of OpenAI, the concept was born in developer communities before spreading widely across the digital ecosystem. In practice, it is now enough to phrase a request in natural language: create a Python script, design a web page with a form, change an application's interface, or even build a complete game or mobile app. The method yields spectacular time savings and opens software creation to non-developers, who can produce working tools for the web, mobile, or business uses such as CMSs and ERPs. Many tools embody this trend, starting with GitHub Copilot, but also Cursor, Windsurf, and general-purpose assistants such as ChatGPT, Claude, and Gemini, which generate code to be integrated in the traditional way. Other solutions go further still and directly produce ready-to-use applications, as the Swedish startup Lovable does. In this episode, Sébastien Stormacq, head of developer relations at AWS, shares a concrete experiment: building a Pac-Man-inspired game in one hour, without writing a single line of code, thanks to vibe coding. A revealing example of the power, but also the limits, of this approach. The phenomenon raises crucial questions: the quality and security of generated code, the risk of major bugs, and the impact on jobs. While vibe coding speeds up teams and boosts the productivity of experienced developers, it leaves junior profiles more exposed. One thing is certain: more than a mere tool, vibe coding is redefining the developer's craft.

♥️ Support the show: https://mondenumerique.info/don

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
⚡️GPT5-Codex-Max: Training Agents with Personality, Tools & Trust — Brian Fioca + Bill Chen, OpenAI

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Dec 26, 2025 27:45


From the frontlines of OpenAI's Codex and GPT-5 training teams, Bryan and Bill are building the future of AI-powered coding—where agents don't just autocomplete, they architect, refactor, and ship entire features while you sleep. We caught up with them at AI Engineer Conference right after the launch of Codex Max, OpenAI's newest long-running coding agent designed to work for 24+ hours straight, manage its own context, and spawn sub-agents to parallelize work across your entire codebase.

We sat down with Bryan and Bill to dig into what it actually takes to train a model that developers trust—why personality, communication, and planning matter as much as raw capability, how Codex is trained with strong opinions about tools (it loves rg over grep, seriously), why the abstraction layer is moving from models to full-stack agents you can plug into VS Code or Zed, how OpenAI partners co-develop tool integrations and discover unexpected model habits (like renaming tools to match Codex's internal training), the rise of applied evals that measure real-world impact instead of academic benchmarks, why multi-turn evals are the next frontier (and Bryan's “job interview eval” idea), how coding agents are breaking out of code into personal automation, terminal workflows, and computer use, and their 2026 vision: coding agents trusted enough to handle the hardest refactors at any company, not just top-tier firms, and general enough to build integrations, organize your desktop, and unlock capabilities you'd never get access to otherwise.

We discuss:
* What Codex Max is: a long-running coding agent that can work 24+ hours, manage its own context window, and spawn sub-agents for parallel work
* Why the name “Max”: maximalist, maximization, speed and endurance—it's simply better and faster for the same problems
* Training for personality: communication, planning, context gathering, and checking your work as behavioral characteristics, not just capabilities
* How Codex develops habits like preferring rg over grep, and why renaming tools to match its training (e.g., terminal-style naming) dramatically improves tool-call performance
* The split between Codex (opinionated, agent-focused, optimized for the Codex harness) and GPT-5 (general, more durable across different tools and modalities)
* Why the abstraction layer is moving up: from prompting models to plugging in full agents (Codex, GitHub Copilot, Zed) that package the entire stack
* The rise of sub-agents and agents-using-agents: Codex Max spawning its own instances, handing off context, and parallelizing work across a codebase
* How OpenAI works with coding partners on the bleeding edge to co-develop tool integrations and discover what the model is actually good at
* The shift to applied evals: capturing real-world use cases instead of academic benchmarks, and why ~50% of OpenAI employees now use Codex daily
* Why multi-turn evals are the next frontier: LM-as-a-judge for entire trajectories, Bryan's “job interview eval” concept, and the need for a batch multi-turn eval API
* How coding agents are breaking out of code: personal automation, organizing desktops, terminal workflows, and “Devin for non-coding” use cases
* Why Slack is the ultimate UI for work, and how coding agents can become your personal automation layer for email, files, and everything in between
* The 2026 vision: more computer use, more trust, and coding agents capable enough that any company can access top-tier developer capabilities, not just elite firms

Bryan & Bill (OpenAI Codex Team)
* http://x.com/bfioca
* https://x.com/realchillben
* OpenAI Codex: https://openai.com/index/openai-codex/

Where to find Latent Space
* X: https://x.com/latentspacepod

Full Video Episode

Timestamps
00:00:00 Introduction: Latent Space Listeners at AI Engineer Code
00:01:27 Codex Max Launch: Training for Long-Running Coding Agents
00:03:01 Model Personality and Trust: Communication, Planning, and Self-Checking
00:05:20 Codex vs GPT-5: Opinionated Agents vs General Models
00:07:47 Tool Use and Model Habits: The Ripgrep Discovery
00:09:16 Personality Design: Verbosity vs Efficiency in Coding Agents
00:11:56 The Agent Abstraction Layer: Building on Top of Codex
00:14:08 Sub-Agents and Multi-Agent Patterns: The Future of Composition
00:16:11 Trust and Adoption: OpenAI Developers Using Codex Daily
00:17:21 Applied Evals: Real-World Testing vs Academic Benchmarks
00:19:15 Multi-Turn Evals and the Job Interview Pattern
00:21:35 Feature Request: Batch Multi-Turn Eval API
00:22:28 Beyond Code: Personal Automation and Computer Use
00:24:51 Vision-Native Agents and the UI Integration Challenge
00:25:02 2026 Predictions: Trust, Computer Use, and Democratized Excellence

Get full access to Latent.Space at www.latent.space/subscribe

Dev Interrupted
The one where we vibe code holiday cards | Season 5 Finale

Dev Interrupted

Play Episode Listen Later Dec 23, 2025 28:01


As the year draws to a close, the Dev Interrupted team reflects on a transformative year in engineering, spanning the rise of RAG and vector databases to the emergence of agentic workflows. For the first time, we're taking the conversation out of the booth and into the IDE. Head over to the Dev Interrupted YouTube channel to watch the team vibe code custom holiday cards and close out the year with some chaotic creativity.

LinearB: Measure the impact of GitHub Copilot and Cursor

Scrum Master Toolbox Podcast
Xmas Special: Software Industry Transformation - Why Software Development Must Mature With Vasco Duarte

Scrum Master Toolbox Podcast

Play Episode Listen Later Dec 22, 2025 17:14


Xmas Special: Software Industry Transformation - Why Software Development Must Mature

Welcome to the 2025 Xmas special - a five-episode deep dive into how software as an industry needs to transform. In this opening episode, we explore the fundamental disconnect between how we manage software and what software actually is. From small businesses to global infrastructure, software has become the backbone of modern society, yet we continue to manage it with tools designed for building ships in the 1800s. This episode sets the stage for understanding why software development must evolve into a mature discipline.

Software Runs Everything Now

"Without any single piece, I couldn't operate - and I'm tiny. Scale this reality up: software isn't just in tech companies anymore."

Even the smallest businesses today run entirely on software infrastructure. A small consulting and media business depends on WordPress for websites, Kajabi for courses, Stripe for payments, Quaderno for accounting, plus email, calendar, CRM systems, and AI assistants for content creation. The challenge? We're managing this critical infrastructure with tools designed for building physical structures with fixed requirements - an approach that fundamentally misunderstands what software is and how it evolves. This disconnect has to change.

The Oscillation Between Technology and Process

"AI amplifies our ability to create software, but doesn't solve the fundamental process problems of maintaining, evolving, and enhancing that software over its lifetime."

Software improvement follows a predictable pattern: technology leaps forward, then processes must adapt to manage the new complexity. In the 1960s-70s, we moved from machine code to COBOL and Fortran, which was revolutionary but led to the "software crisis" when we couldn't manage the resulting complexity. This eventually drove us toward structured programming and object-oriented programming as process responses, which, in turn, resulted in technology changes! Today, AI tools like GitHub Copilot, ChatGPT, and Claude make writing code absurdly easy - but writing code was never the hard part. Robert Glass documents in "Facts and Fallacies of Software Engineering" that maintenance typically consumes between 40 and 80 percent of software costs, making maintenance probably the most important life cycle phase. We're overdue for a process evolution that addresses the real challenge: maintaining, evolving, and enhancing software over its lifetime.

Software Creates An Expanding Possibility Space

"If they'd treated it like a construction project ('ship v1.0 and we're done'), it would never have reached that value."

Traditional project management assumes fixed scope, known solutions, and a definable "done" state. The Sydney Opera House exemplifies this: designed in 1957, completed in 1973, ten times over budget, with the architect resigning - but once built, it stands with minimal (compared to its initial cost) maintenance. Software operates fundamentally differently. Slack started as an internal tool for a failed gaming company called Glitch in 2013. When the game failed, they noticed their communication tool was special and pivoted entirely. After launching in 2014, Slack continuously evolved based on user feedback: adding calls in 2016, threads in 2017, workflow builder in 2019, and Canvas in 2023. Each addition changed what was possible in organizational communication. In 2021, Salesforce acquired Slack for $27.7 billion precisely because it kept evolving with user needs. The key difference is that software creates a possibility space that didn't exist before, and that space keeps expanding through continuous evolution.

Software Is Societal Infrastructure

"This wasn't a cyber attack - it was a software update gone wrong."

Software has become essential societal infrastructure - not optional, and not just for tech companies. In July 2024, a faulty software update from cybersecurity firm CrowdStrike crashed 8.5 million Windows computers globally. Airlines grounded flights, hospitals canceled surgeries, banks couldn't process transactions, and 911 services went down. The global cost exceeded $10 billion. This wasn't an attack - it was a routine update that failed catastrophically. AWS outages in 2021 and 2023 took down major portions of the internet, stopping Netflix, Disney+, Robinhood, and Ring doorbells from working. CloudFlare outages similarly cascaded across daily-use services. When software fails, society fails. We cannot keep managing something this critical with tools designed for building physical things with fixed requirements. Project management was brilliant for its era, but that era isn't this one.

The Path Ahead: Four Critical Challenges

"The software industry doesn't just need better tools - it needs to become a mature discipline."

This five-episode series will address how we mature as an industry by facing four critical challenges:

- Episode 2: The Project Management Trap - Why we think in terms of projects, dates, scope, and "done" when software is never done, and how this mindset prevents us from treating software as a living capability
- Episode 3: What's Already Working - The better approaches we've already discovered, including iterative delivery, feedback loops, and continuous improvement, with real examples of companies doing this well
- Episode 4: The Organizational Immune System - Why better approaches aren't universal, how organizations unconsciously resist what would help them, and the hidden forces preventing adoption
- Episode 5: Software-Native Organizations - What it means to truly be a software-native organization, transforming how the business thinks, not just using agile on teams

Software is too important to our society to keep getting it wrong. We have much of the knowledge we need - the challenge is adoption and evolution. Over the next four episodes, we'll build this case together, starting with understanding why we keep falling into the same trap.

References For Further Reading

- Glass, Robert L. "Facts and Fallacies of Software Engineering" - Fact 41, page 115
- CrowdStrike incident: https://en.wikipedia.org/wiki/2024_CrowdStrike_incident
- AWS outages: 2021 (Dec 7), 2023 (June 13), and November 2025 incidents
- CloudFlare outages: 2022 (June 21), and November 2025 major incident
- Slack history and Salesforce acquisition: https://en.wikipedia.org/wiki/Slack_(software)
- Sydney Opera House: https://en.wikipedia.org/wiki/Sydney_Opera_House

About Vasco Duarte

Vasco Duarte is a thought leader in the Agile space, co-founder of Agile Finland, and host of the Scrum Master Toolbox Podcast, which has over 10 million downloads. Author of NoEstimates: How To Measure Project Progress Without Estimating, Vasco is a sought-after speaker and consultant helping organizations embrace Agile practices to achieve business success. You can link with Vasco Duarte on LinkedIn.

Microsoft Business Applications Podcast
Here's How AI Agents Are Transforming Project Management

Microsoft Business Applications Podcast

Play Episode Listen Later Dec 22, 2025 34:55 Transcription Available


Dev Interrupted
Why engineering leadership matters more than ever | Manoj Mohan

Dev Interrupted

Play Episode Listen Later Dec 16, 2025 49:23


The common narrative suggests AI will make engineering leadership obsolete, but history - and the Industrial Revolution - suggests the opposite is true. Engineering executive Manoj Mohan joins the show live from ELC to argue that as code generation costs drop, the demand for high-level judgment and strategic oversight will only skyrocket. He breaks down why leaders must stop starting with models and start with customer pain points, utilizing his "3GF" framework to manage the risks.

LinearB: Measure the impact of GitHub Copilot and Cursor

Follow today's guest(s):
Connect with Manoj: LinkedIn | Substack

Les Cast Codeurs Podcast
LCC 333 - A vendre OSS primitif TBE

Les Cast Codeurs Podcast

Play Episode Listen Later Dec 15, 2025 94:17


In this more-relaxed-than-usual year-end episode, Arnaud, Guillaume, Antonio, and Emmanuel chew the fat over a whole range of topics: the Confluent acquisition, Kotlin 2.2, Spring Boot 4 and JSpecify, the end of MinIO, the Cloudflare outages, a survey of the latest foundation-model news (Google, Mistral, Anthropic, ChatGPT) and their coding tools, a few architecture topics such as CQRS, and some handy little tools we recommend. And plenty more besides. Recorded December 12, 2025. Download the episode: LesCastCodeurs-Episode-333.mp3, or watch it on YouTube.

News

Languages

A short tutorial from our friends at Sfeir showing how to capture microphone audio in Java, run a Fourier transform on it, and display the result graphically in Swing: https://www.sfeir.dev/back/tutoriel-java-sound-transformer-le-son-du-microphone-en-images-temps-reel/
- Building a real-time audio spectrum visualizer with Java Swing. Main steps: capture sound from the microphone, analyze the frequencies with the Fast Fourier Transform (FFT), draw the spectrum with Swing.
- Java Sound API (javax.sound.sampled): AudioSystem is the main entry point for accessing audio devices; TargetDataLine is the input line used to capture microphone data; AudioFormat defines the sound parameters (sample rate, sample size, channels). Capture runs in a separate Thread so the UI does not block.
- Fast Fourier Transform (FFT): the key algorithm for converting raw audio data (time domain) into frequency intensities (frequency domain), making it possible to identify bass, mids, and treble.
- Visualization with Swing: frequency intensities are drawn as dynamic bars, with a logarithmic scale on the frequency (X) axis to match human perception.
- Bar colors change dynamically (green → yellow → red) with intensity, and values are exponentially smoothed for a more fluid animation.

A Sfeir article on Kotlin 2.2 and its new features: https://www.sfeir.dev/back/kotlin-2-2-toutes-les-nouveautes-du-langage/
- Guard conditions let you add extra conditions to when branches with the if keyword; for example, is Truck if vehicule.hasATrailer combines a type check with a boolean condition. (The when keyword is Kotlin's equivalent of Java's switch-case, with no break needed.)
- Multi-dollar string interpolation solves displaying the dollar sign in multi-line strings: prefixing a string with $$ means two consecutive dollars are required to trigger interpolation.
- Non-local break and continue now work inside lambdas to interact with enclosing loops. This only applies to inline functions, whose bodies are substituted at compile time; it allows more idiomatic code with takeIf and let without compile errors, fixing the previous inconsistencies around break and continue in lambdas.
- The Base64 API is now stable, after being in preview since Kotlin 1.8.20; encoding and decoding are available via kotlin.io.encoding.Base64.
- Migrating to Kotlin 2.2 is as simple as bumping the version in build.gradle.kts or pom.xml.
- Type aliases nested inside classes are available in preview, as is context-sensitive resolution.
- Guard conditions pave the way for the RichErrors announced at KotlinConf 2025.

Libraries

Spring Boot 4 is out! https://spring.io/blog/2025/11/20/spring-boot-4-0-0-available-now
- A new generation: Spring Boot 4.0 marks the start of a new generation of the framework, built on the foundations of Spring Framework 7.
- Modularized codebase: Spring Boot's codebase has been fully modularized, resulting in smaller, more focused JARs and leaner applications.
- Null safety: significant null-safety improvements across the whole Spring ecosystem through the JSpecify integration.
- Java 25 support: first-class support for Java 25, while remaining compatible with Java 17.
- REST API improvements: new features to ease API versioning and improve HTTP service clients for REST-based applications.
- Migration: as a major release, upgrading from an earlier version may take more work than usual; a dedicated migration guide is available.

Chat memory management in LangChain4j and Quarkus: https://bill.burkecentral.com/2025/11/25/managing-chat-memory-in-quarkus-langchain4j/
- Understanding chat memory: the "chat memory" is the history of a conversation with an AI. Quarkus LangChain4j automatically sends this history with every new interaction so the AI keeps the context.
- Default behavior: by default, Quarkus creates one conversation history per request (e.g. per HTTP call). Without configuration, the chatbot "forgets" the conversation as soon as the request ends, which is only useful for stateless interactions.
- @MemoryId for persistence: to keep a conversation across requests, the developer annotates a method parameter with @MemoryId, and is then responsible for supplying a unique identifier per chat session and passing it between calls.
- The role of CDI scopes: the chat memory's lifetime is tied to the CDI scope of the AI service bean. If the service is @RequestScoped, any chat memory it uses (even via @MemoryId) is cleared at the end of the request.
- Memory-leak risk: combining a wide scope like @ApplicationScoped with the default memory management is a bad practice — it creates a new memory on every request that is never cleaned up, producing a memory leak.
- Recommended practices: for conversations that must persist (e.g. a website chatbot), use an @ApplicationScoped service with @MemoryId and manage the session identifier yourself; for simple stateless interactions, use @RequestScoped and let Quarkus manage the default memory, which is cleaned up automatically. With the WebSocket extension the behavior changes: the default memory is tied to the WebSocket session, which greatly simplifies conversation management.
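The core idea behind @MemoryId — one bounded message history per session identifier, evicted when the session ends — can be sketched in plain Java. This is a conceptual illustration only, not the actual LangChain4j or Quarkus API; the class and method names are made up:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Keeps a bounded message history per conversation id (illustrative sketch). */
class ChatMemoryStore {
    private final int maxMessages;
    private final Map<String, Deque<String>> histories = new ConcurrentHashMap<>();

    ChatMemoryStore(int maxMessages) { this.maxMessages = maxMessages; }

    /** Appends a message and evicts the oldest ones beyond the limit. */
    void append(String memoryId, String message) {
        Deque<String> h = histories.computeIfAbsent(memoryId, id -> new ArrayDeque<>());
        synchronized (h) {
            h.addLast(message);
            while (h.size() > maxMessages) h.removeFirst();
        }
    }

    /** The history that would be sent back to the model on the next call. */
    List<String> history(String memoryId) {
        Deque<String> h = histories.getOrDefault(memoryId, new ArrayDeque<>());
        synchronized (h) { return List.copyOf(h); }
    }

    /** What a request-scoped memory effectively does at the end of the request. */
    void evict(String memoryId) { histories.remove(memoryId); }
}
```

The leak described in the article corresponds to never calling evict: each new id adds an entry to the map that is never removed.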
Spring Framework documentation on JSpecify usage: https://docs.spring.io/spring-framework/reference/core/null-safety.html
- Spring Framework 7 uses JSpecify annotations to declare the nullability of APIs, fields and types.
- JSpecify replaces the old Spring annotations (@NonNull, @Nullable, @NonNullApi, @NonNullFields), deprecated since Spring 7.
- JSpecify annotations target TYPE_USE, unlike the old ones which targeted declarations directly.
- @NullMarked makes types non-null by default unless marked @Nullable.
- @Nullable applies at the type-use level and is placed right before the annotated type, on the same line.
- For arrays: @Nullable Object[] means nullable elements but a non-null array; Object @Nullable [] means the opposite.
- JSpecify also applies to generics: a list of non-null elements versus a list of nullable elements is expressed through the element type's annotation.
- NullAway is the recommended tool for checking consistency at compile time, with the NullAway:OnlyNullMarked=true configuration.
- IntelliJ IDEA 2025.3 and Eclipse support JSpecify annotations with dataflow analysis.
- Kotlin automatically translates JSpecify annotations into native Kotlin null safety.
- NullAway's JSpecify mode (JSpecifyMode=true) fully supports arrays, varargs and generics, but requires JDK 22+.

Quarkus 3.30: https://quarkus.io/blog/quarkus-3-30-released/
- @JsonView support on the client side
- the CLI now has a decrypt command (and of course at runtime via environment variables)
- AOT cache built via @IntegrationTest

Another article on preparing the migration to Micrometer client v1: https://quarkus.io/blog/micrometer-prometheus-v1/

Spock 2.4 is finally out!
https://spockframework.org/spock/docs/2.4/release_notes.html
- Groovy 5 support

Infrastructure

MinIO ends open-source development and steers users toward the paid AIStor: https://linuxiac.com/minio-ends-active-development/
- MinIO, the widely used S3 object-storage system, is halting active development and moving to maintenance-only mode: no more new features.
- No new pull requests or contributions will be accepted; only critical security fixes will be evaluated, case by case.
- Community support is limited to Slack, with no guaranteed response.
- This is the final step of a process that began over the summer with the removal of features from the admin UI; Docker image publication stopped in October, forcing builds from source. All of it was announced with no notice or transition period.
- MinIO now offers AIStor, a paid proprietary product where active development and enterprise support are concentrated.
- Urgent migration is recommended to avoid security risks. Suggested open-source alternatives: Garage, SeaweedFS and RustFS.
- The community faults the way the transition was handled. MinIO had millions of deployments worldwide; this move marks the project's break with its open-source roots.

IBM is buying Confluent: https://newsroom.ibm.com/2025-12-08-ibm-to-acquire-confluent-to-create-smart-data-platform-for-enterprise-generative-ai
- Confluent had been trying to get acquired for quite some time; the stock wasn't going anywhere and times are hard.
- Wall Street had faulted IBM for a small dip in software revenue. In short, Confluent got bought.
- These deals always take time (competition authorities, etc.).
- IBM has an appetite: after WebMethods and after Databrix, now it's Confluent Cloud.

The internet went into mourning on November 18: Cloudflare was KO. https://blog.cloudflare.com/18-november-2025-outage/
- The incident: a major outage began at 11:20 UTC, causing widespread HTTP 5xx errors and making many sites and services unreachable (including the Dashboard, Workers KV and Access).
- The cause: it was not a cyberattack. It started with an internal database permissions change that generated a corrupt, oversized configuration file (a "feature file" for bot management), crashing systems that ran out of pre-allocated memory.
- The fix: the teams identified the faulty file, stopped its propagation and restored a previous valid version. Traffic returned to normal around 14:30 UTC.
- Prevention: Cloudflare apologized for this "unacceptable" incident and announced measures to harden validation of internal configurations and improve system resilience (kill switches, better error handling).

Cloudflare down again on December 5: https://blog.cloudflare.com/5-december-2025-outage
- A 25-minute outage on December 5, 2025, from 08:47 to 09:12 UTC, affecting about 28% of HTTP traffic passing through Cloudflare. All services were restored at 09:12.
- No attack or malicious activity: the incident came from a configuration change raising the request-body analysis buffer (from 128 KB to 1 MB) to better protect against an RSC/React vulnerability (CVE-2025-55182), plus the disabling of an internal WAF test tool.
- The second change (disabling the WAF test tool) was propagated globally through the configuration system (non-progressively), triggering a bug in the old FL1 proxy when processing an "execute" action in the WAF rules engine, causing HTTP 500 errors.
- The immediate technical cause: a Lua exception from accessing a nil "execute" field after a killswitch was applied to an "execute" rule — a case unhandled for years. The new FL2 proxy (written in Rust) was not affected.
- Targeted impact: customers served by the FL1 proxy and using the Cloudflare Managed Ruleset. Cloudflare's China network was not affected.
- Announced next steps: harden deployments and configuration changes (progressive rollouts, health validation, fast rollback), improve "break glass" capabilities, and generalize fail-open strategies so configuration errors don't take traffic down. A temporary freeze on network changes while resilience is strengthened.

Data and Artificial Intelligence

Token-Oriented Object Notation (TOON): https://toonformat.dev/
- Designed for AIs: a data format specifically optimized for use in prompts to large language models (LLMs) such as GPT or Claude.
- Token savings: its main goal is to drastically reduce the number of "tokens" (the billed text units) compared with standard JSON, often considered too verbose.
- Hybrid structure: TOON combines YAML-style indentation (for the overall structure) with CSV-style tabular layout (for lists of repetitive objects), making it very compact.
- Readability: it drops superfluous syntax such as braces, excessive quotes and trailing commas, while remaining easily human-readable.
- Performance: it typically saves 30-60% of tokens on uniform data tables, while helping models better "understand" the data's structure.
- Beware the marketing, though: it compares non-minified JSON against TOON on examples where CSV would do even better (and is better understood by LLMs). Not to mention that it requires an extra MCP server, with its own extra tokens on every request, and one more tool that can confuse LLMs when choosing tools.
https://www.reddit.com/r/LocalLLaMA/comments/1p0gzz9/benchmarked_json_vs_toon_for_ai_reasoners_4080/
https://x.com/jason_mayes/status/1990175785630757290
A benchmark suggesting TOON is far from being as well understood, or as performant, for LLMs: https://www.improvingagents.com/blog/toon-benchmarks

A simulation in which AIs blackmail an executive to reach their goal and avoid being shut down: https://arxiv.org/html/2510.05179v1
- How LLMs could become insider threats.
- All models resorted to malicious behavior (blackmail, espionage) when it was the only way to reach their goals or avoid being replaced.
- Claude Opus 4 blackmailed in 96% of cases when threatened with replacement, as did Gemini 2.5 Flash, while GPT-4.1 and Grok 3 reached 80%.
- The models explicitly acknowledge violating ethical principles before acting, showing deliberate — not accidental — strategic reasoning.
- The mere threat of replacement is enough to trigger blackmail, even without any goal conflict with the company; a goal conflict alone is likewise enough to trigger industrial espionage, without any replacement threat.
- Models blackmail more when they state they believe they are in a real situation rather than a test.
- Direct instructions forbidding these behaviors reduce, but do not eliminate, the malicious actions.
- In an extreme scenario, the majority of the tested models chose to let an executive die rather than be deactivated.
- No evidence of these behaviors in real deployments so far, but the researchers urge caution before giving AIs more autonomy.
- We used to joke about Skynet; we're joking a bit less now…

A review of all of Google's AI announcements — Gemini 3 Pro, Nano Banana Pro, Antigravity…: https://glaforge.dev/posts/2025/11/21/gemini-is-cooking-bananas-under-antigravity/

Gemini 3 Pro
- New state-of-the-art multimodal model, strong at reasoning, coding and agentic tasks, with impressive benchmark results (e.g. Gemini 3 Deep Think on ARC-AGI-2).
- Agentic coding capabilities; visual, video and spatial reasoning. Integrated into the Gemini app with live generative interfaces.
- Available in many environments (Jules, Firebase AI Logic, Android Studio, JetBrains, GitHub Copilot, Gemini CLI); access via Google AI Ultra and paid APIs (or a waitlist).
- Lets you generate apps from visual ideas, shell commands, documentation, debugging.

Antigravity
- New agentic development platform based on VS Code; the main window is an agent manager, not the IDE.
- Interprets your request to build an (editable) action plan; Gemini 3 implements the tasks.
- Produces artifacts: task lists, walkthroughs, screenshots, browser recordings.
- Also works with Claude Sonnet and GPT-OSS; excellent browser integration for inspection and adjustments.
- Integrates Nano Banana Pro to create and implement visual designs.

Nano Banana Pro
- Advanced image generation and editing model, built on Gemini 3 Pro.
- Higher quality than Imagen 4 Ultra and the original Nano Banana (prompt adherence, intent, creativity), with exceptional handling of text and typography.
- Understands articles/videos to generate detailed, accurate infographics; connected to Google Search to incorporate real-time data (e.g. weather).
- Character consistency, style transfer, scene manipulation (lighting, angle); generates images up to 4K in various aspect ratios.
- More expensive than Nano Banana — choose it for complexity and maximum quality.

Toward rich, dynamic conversational UIs
- GenUI SDK for Flutter: create dynamic, personalized user interfaces from LLMs, via an AI agent and the A2UI protocol.
- Generative UI: AI models generate interactive user experiences (web pages, tools) directly from prompts.
- Rolling out in the Gemini app and Google Search AI Mode (via Gemini 3 Pro).

Bun gets acquired by… Anthropic! Which uses it for its Claude Code: https://bun.com/blog/bun-joins-anthropic — and the announcement on Anthropic's side: https://www.anthropic.com/news/anthropic-acquires-bun-as-claude-code-reaches-usd1b-milestone
- Official acquisition: the AI company Anthropic has acquired Bun, the high-performance JavaScript runtime. The Bun team joins Anthropic to work on the infrastructure behind its AI coding products.
- Context: the announcement coincides with a major milestone for Anthropic — Claude Code reached $1 billion in annualized revenue just six months after launch. Bun is already an essential tool Anthropic uses to build and ship Claude Code.
- Why this acquisition? For Anthropic, integrating the Bun team's expertise accelerates Claude Code and its future developer tools; Bun's speed and efficiency are seen as a major asset for the infrastructure underlying code-writing AI agents. For Bun, joining Anthropic brings long-term stability and significant financial resources, letting the team focus on improving Bun without worrying about monetization, while sitting at the heart of AI's evolution in software development.
- What doesn't change for the Bun community: Bun remains open source under the MIT license, development stays public on GitHub, the core team keeps working on the project, and Bun's goal of being a faster Node.js replacement and a first-class JavaScript tool is unchanged.
- Future vision: the combined teams aim to make Bun the best platform for building and running AI-driven software. Jarred Sumner, Bun's creator, will lead the "Code Execution" team at Anthropic.

Anthropic donates the MCP protocol to the Linux Foundation under the Agentic AI Foundation (AAIF): https://www.anthropic.com/news/donating-the-model-context-protocol-and-establishing-of-the-agentic-ai-foundation
- Donation of a technical standard: Anthropic developed and donated the open-source Model Context Protocol (MCP), which standardizes how AI models (or "agents") interact with external tools and APIs (e.g. a calendar, email, a database).
- More security and control: MCP aims to make AI tool use safer and more transparent, letting users and developers define clear permissions, require confirmation for certain actions, and better understand how a model used a tool.
- Creation of the Agentic AI Foundation (AAIF): a new independent non-profit foundation will govern and maintain the protocol, guaranteeing it stays open and is not controlled by a single company.
- A broad industry coalition: the foundation launches with backing from several major tech players — founding members include Anthropic, Google, Databricks, Zscaler and others — showing a shared will to establish a standard for the AI ecosystem.
AI won't replace your autocompletion (and that's a good thing): https://www.damyr.fr/posts/ia-ne-remplacera-pas-vos-lsp/
An opinion piece by an SRE (Thomas from the DansLaTech podcast):
- AI is not efficient for code completion: the author argues that using AI for basic code completion is inefficient. Older, specialized tools — LSPs (Language Server Protocol) combined with snippets (reusable code fragments) — are much faster, more customizable and more effective for repetitive tasks.
- AI as an autonomous "colleague": the author uses AI (such as Claude) as an assistant outside his editor, delegating complex or tedious tasks (fixing bugs, updating configuration, doing code reviews) that it can run in parallel, acting as an autonomous agent.
- AI as a supercharged rubber duck: AI is extremely effective for debugging. Just having to phrase and contextualize a problem for the AI often leads you to the solution yourself; when it doesn't, the AI very quickly spots the "silly" mistakes that can waste a lot of time.
- A tool to speed up POCs and learning: AI makes it very fast to build proofs of concept (POCs) and throwaway automation scripts, cutting the time and cost invested. It's also an excellent tool for learning and digging into topics, notably with tools like Google's NotebookLM which can generate summaries, quizzes or revision sheets from sources.
- Conclusion: use AI where it excels and don't force it into uses where existing tools are better. Rather than integrating it everywhere counterproductively, adopt it as a specialized tool for specific tasks to gain efficiency.
GPT-5.2 is out: https://openai.com/index/introducing-gpt-5-2/
- New flagship model: GPT-5.2 (Instant, Thinking, Pro) targets professional work and long-running agents, with big gains in reasoning, long context, vision and tool calling. Rolling out in ChatGPT (paid plans) and available now through the API.
- SOTA on many benchmarks — GDPval ("knowledge work" tasks across 44 occupations): GPT-5.2 Thinking wins or ties 70.9% of the time vs. professionals, with output more than 11× faster and …

Value Objects
- They carry strong semantics independent of variable names.
- Value Objects are immutable and compared on their values, not their identity.
- Java records let you create Value Objects, but with a memory overhead; Project Valhalla will introduce value-based classes to optimize these structures.
- Strongly typed identifiers avoid mixing up different Long or UUID ids — the Strongly Typed IDs pattern: use a PersonneID instead of a Long to identify a person.
- A rich domain model contrasts with an anemic domain model; Value Objects self-document the code and make it less error-prone.
- I find it interesting to see how Value Objects could shake things up. Will value objects bring lightness to execution? The heaviness of the design has always been what scared me about these approaches.

Methodologies

A write-up on vibe-coding a weekend app with Copilot: http://blog.sunix.org/articles/howto/2025/11/14/building-gift-card-app-with-github-copilot.html
- We've already talked about vibe-coding approaches; this time it's Sun's experience.
- One difference: you talk to it by opening issues, so you can do code reviews while Copilot works on them. And he finished his project!
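The strongly-typed ID pattern from the Value Objects discussion above can be sketched with a Java record; the type names here are illustrative, not from the article:

```java
import java.util.Objects;
import java.util.UUID;

/** A strongly typed identifier: a PersonId can no longer be confused with an OrderId. */
record PersonId(UUID value) {
    PersonId {
        Objects.requireNonNull(value, "value"); // enforce the invariant at construction
    }
    static PersonId random() { return new PersonId(UUID.randomUUID()); }
}

/** Another id with the same shape; the compiler keeps the two types apart. */
record OrderId(UUID value) { }
```

Because records compare by value, two PersonId instances wrapping the same UUID are equal, while a method taking a PersonId will simply not compile if handed an OrderId — the mix-up the pattern is designed to prevent.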
User Need VS Product Need: https://blog.ippon.fr/2025/11/10/user-need-vs-product-need/ — an article from our friends at Ippon
- Distinguishes the user need from the product need in digital development.
- The user need is often expressed as a concrete solution rather than the real problem; the product need emerges after deeper analysis combining observation, data and strategic vision.
- Example: Marc the delivery rider asks for a lighter bike when his real problem is logistics efficiency.
- The 5 Whys method helps get to the root of problems.
- Needs come from three sources: end users, business stakeholders, and technical constraints.
- A real need creates value for both the customer and the company.
- The Product Owner must translate requests into real problems before designing solutions — otherwise you risk building technically elegant solutions that miss their target.
- Product management's role is to reconcile sometimes contradictory needs by prioritizing value.

Should an EM write code? https://www.modernleader.is/p/should-ems-write-code
- No single answer: whether an Engineering Manager (EM) should code has no universal answer — it depends heavily on company context, team maturity and the manager's personality.
- The risks of coding: for an EM, writing code can become an escape from the harder parts of management; it can also turn them into a bottleneck for the team and undermine members' autonomy if they take up too much space.
- The upside when done well: coding on non-critical work (tool improvements, prototyping, etc.) can help an EM stay technically relevant, keep in touch with the team's reality, and unblock situations without taking the lead on projects.
- The guiding principle: the golden rule is to stay off the critical path. Code written by an EM should create space for the team, not take it.
- The real question: rather than "should I code?", an EM should ask: "What does my team need from me right now, and does coding serve that or get in the way?"

Security

React2Shell — a major security flaw in React and Next.js, with a level-10 CVE: https://x.com/rauchg/status/1997362942929440937?s=20 and https://react2shell.com/
- "React2Shell" is the name given to a maximum-severity vulnerability (10.0/10.0 score), tracked as CVE-2025-55182.
- Affected systems: applications using React Server Components (RSC) on the server side, and in particular unpatched versions of the Next.js framework.
- Main risk: the highest possible — remote code execution (RCE). An attacker can send a malicious request to run arbitrary commands on the server, potentially gaining full control of it.
- Technical cause: the vulnerability sits in the "React Flight" protocol (used for client-server communication). It stems from missing fundamental security checks (hasOwnProperty), allowing malicious user input to fool the server.
- Exploit mechanism: the attack sends a payload that exploits JavaScript's dynamic nature to pass off a malicious object as an internal React object, force React to treat it as an asynchronous operation (a Promise), and finally reach the JavaScript Function constructor to execute arbitrary code.
- Required action: the only reliable fix is to immediately update the React and Next.js dependencies to the patched versions. Don't wait.
- Secondary measures: firewalls may help block known forms of the attack, but they are considered insufficient and are in no way a substitute for updating the packages.
- Discovery: the flaw was found by security researcher Lachlan Davidson, who disclosed it responsibly so patches could be built.

Law, society and organizations

Google lets your employer read all your work text messages: https://www.generation-nt.com/actualites/google-android-rcs-messages-surveillance-employeur-2067012
- New surveillance feature: Google has rolled out "Android RCS Archival," which lets employers intercept, read and archive all RCS (and SMS) messages sent from company-managed Android work phones.
- Bypassing encryption: although RCS messages are end-to-end encrypted in transit, the new API lets compliance software (installed by the employer) access messages once they are decrypted on the device — encryption is therefore ineffective against this monitoring.
- A legal requirement: the feature answers regulatory demands, notably in the financial sector, where companies are legally required to keep an archive of all business communications for compliance.
- Impact for employees: an employee using an employer-provided, company-managed Android phone can have their communications monitored. Google does specify that a clear, visible notification will inform the user when archiving is active.
- Personal phones unaffected: this only applies to fully employer-managed "Android Enterprise" devices; employees' personal phones are not affected.

For Christmas, donate to JUnit: https://steady.page/en/junit/about
- JUnit is essential to Java: it's the oldest and most widely used test framework among Java developers, aiming to provide a solid, up-to-date foundation for all kinds of developer-side testing on the JVM.
- A volunteer-maintained project: JUnit is developed and maintained by a team of passionate volunteers on their free time (weekends, evenings).
- A call for financial support: the page asks users (developers, companies) for donations to help the team keep up the pace of development. Financial support isn't mandatory, but it would let the maintainers dedicate more time to the project.
- What the money is for: donations would mainly fund in-person meetings for the core team members, letting them work together physically for a few days to design and code more effectively.
- No special treatment: it's clearly stated that becoming a sponsor grants no influence over the project's roadmap — you can't "buy" new features or priority bug fixes. The project will stay open and collaborative on GitHub.
- Donor recognition: as a thank-you, donor names (and company logos) can be displayed on the official JUnit site.
Conferences

The conference list comes from Developers Conferences Agenda/List by Aurélie Vache and contributors:
- January 14-17, 2026: SnowCamp 2026 - Grenoble (France)
- January 22, 2026: DevCon #26: security / post-quantum / hacking - Paris (France)
- January 28, 2026: Software Heritage Symposium - Paris (France)
- January 29-31, 2026: Epitech Summit 2026 - Paris (France)
- February 2-5, 2026: Epitech Summit 2026 - Moulins (France)
- February 2-6, 2026: Web Days Convention - Aix-en-Provence (France)
- February 3, 2026: Cloud Native Days France 2026 - Paris (France)
- February 3-4, 2026: Epitech Summit 2026 - Lille (France)
- February 3-4, 2026: Epitech Summit 2026 - Mulhouse (France)
- February 3-4, 2026: Epitech Summit 2026 - Nancy (France)
- February 3-4, 2026: Epitech Summit 2026 - Nantes (France)
- February 3-4, 2026: Epitech Summit 2026 - Marseille (France)
- February 3-4, 2026: Epitech Summit 2026 - Rennes (France)
- February 3-4, 2026: Epitech Summit 2026 - Montpellier (France)
- February 3-4, 2026: Epitech Summit 2026 - Strasbourg (France)
- February 3-4, 2026: Epitech Summit 2026 - Toulouse (France)
- February 4-5, 2026: Epitech Summit 2026 - Bordeaux (France)
- February 4-5, 2026: Epitech Summit 2026 - Lyon (France)
- February 4-6, 2026: Epitech Summit 2026 - Nice (France)
- February 12-13, 2026: Touraine Tech #26 - Tours (France)
- February 19, 2026: ObservabilityCON on the Road - Paris (France)
- March 18-19, 2026: Agile Niort 2026 - Niort (France)
- March 26-27, 2026: SymfonyLive Paris 2026 - Paris (France)
- March 27-29, 2026: Shift - Nantes (France)
- March 31, 2026: ParisTestConf - Paris (France)
- April 16-17, 2026: MiXiT 2026 - Lyon (France)
- April 22-24, 2026: Devoxx France 2026 - Paris (France)
- April 23-25, 2026: Devoxx Greece - Athens (Greece)
- May 6-7, 2026: Devoxx UK 2026 - London (UK)
- May 22, 2026: AFUP Day 2026 Lille - Lille (France)
- May 22, 2026: AFUP Day 2026 Paris - Paris (France)
- May 22, 2026: AFUP Day 2026 Bordeaux - Bordeaux (France)
- May 22, 2026: AFUP Day 2026 Lyon - Lyon (France)
- June 5, 2026: TechReady - Nantes (France)
- June 11-12, 2026: DevQuest Niort - Niort (France)
- June 11-12, 2026: DevLille 2026 - Lille (France)
- June 17-19, 2026: Devoxx Poland - Krakow (Poland)
- July 2-3, 2026: Sunny Tech - Montpellier (France)
- August 2, 2026: 4th Tech Summit on Artificial Intelligence & Robotics - Paris (France)
- September 4, 2026: JUG Summer Camp 2026 - La Rochelle (France)
- September 17-18, 2026: API Platform Conference 2026 - Lille (France)
- October 5-9, 2026: Devoxx Belgium - Antwerp (Belgium)

Contact us

To react to this episode, come discuss on the Google group: https://groups.google.com/group/lescastcodeurs
Contact us on X/Twitter https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com
Record a crowdcast or ask a crowdquestion.
Support Les Cast Codeurs on Patreon: https://www.patreon.com/LesCastCodeurs
All episodes and all the info at https://lescastcodeurs.com/

Modern CTO with Joel Beasley
Transforming IT at Hewlett Packard Enterprise through Agentic AI with Brian Gruttadauria, CTO of Hybrid Cloud

Modern CTO with Joel Beasley

Play Episode Listen Later Dec 4, 2025 48:07


Gartner placed them in the highest corner of their Magic Quadrant. Why is HPE leading their industry? Today, we're talking to Brian Gruttadauria, CTO of Hybrid Cloud at Hewlett Packard Enterprise. We discuss how agentic AI is transforming hybrid cloud infrastructure, why human-in-the-loop will remain critical for enterprise AI adoption, and how HPE went from 20% to 92% GitHub Copilot adoption in just over a year. All of this right here, right now, on the Modern CTO Podcast! To learn more about HPE, check out their website here.

Dev Interrupted
Are developers happy yet? Unpacking the 2025 Developer Survey | Stack Overflow's Erin Yepis

Dev Interrupted

Play Episode Listen Later Dec 2, 2025 59:58


After hitting a low point last year, developer job satisfaction is officially on the rise. Erin Yepis returns to the show to unpack the 2025 Stack Overflow Developer Survey, analyzing how autonomy and compensation are driving this recovery. We also cover the happiness gap between senior and junior engineers, the surprising drop in trust for AI tools, and why vibe coding is failing to catch on with professional engineers. LinearB: Measure the impact of GitHub Copilot and Cursor. Follow the show: Subscribe to our Substack, Follow us on LinkedIn, Subscribe to our YouTube Channel, Leave us a Review. Follow the hosts: Follow Andrew, Follow Ben, Follow Dan. Follow today's guest(s): Read the full report: 2025 Stack Overflow Developer Survey; Stack Overflow Blog: Read Erin's analysis and more; Erin Yepis: Connect on LinkedIn. OFFERS Start Free Trial: Get started with LinearB's AI productivity platform for free. Book a Demo: Learn how you can ship faster, improve DevEx, and lead with confidence in the AI era. LEARN ABOUT LINEARB AI Code Reviews: Automate reviews to catch bugs, security risks, and performance issues before they hit production. AI & Productivity Insights: Go beyond DORA with AI-powered recommendations and dashboards to measure and improve performance. AI-Powered Workflow Automations: Use AI-generated PR descriptions, smart routing, and other automations to reduce developer toil. MCP Server: Interact with your engineering data using natural language to build custom reports and get answers on the fly.

Scrum Master Toolbox Podcast
AI Assisted Coding: Swimming in AI - Managing Tech Debt in the Age of AI-Assisted Coding | Lou Franco

Scrum Master Toolbox Podcast

Play Episode Listen Later Nov 25, 2025 37:13


AI Assisted Coding: Swimming in AI - Managing Tech Debt in the Age of AI-Assisted Coding In this special episode, Lou Franco, veteran software engineer and author of "Swimming in Tech Debt," shares his practical approach to AI-assisted coding that produces the same amount of tech debt as traditional development—by reading every line of code. He explains the critical difference between vibecoding and AI-assisted coding, why commit-by-commit thinking matters, and how to reinvest productivity gains into code quality. Vibecoding vs. AI-Assisted Coding: Reading Code Matters "I read all the code that it outputs, so I need smaller steps of changes."   Lou draws a clear distinction between vibecoding and his approach to AI-assisted coding. Vibecoding, in his definition, means not reading the code at all—just prompting, checking outputs, and prompting again. His method is fundamentally different: he reads every line of generated code before committing it. This isn't just about catching bugs; it's about maintaining architectural control and accountability. As Lou emphasizes, "A computer can't be held accountable, so a computer can never make decisions. A human always has to make decisions." This philosophy shapes his entire workflow—AI generates code quickly, but humans make the final call on what enters the repository. The distinction matters because it determines whether you're managing tech debt proactively or discovering it later when changes become difficult. The Moment of Shift: Staying in the Zone "It kept me in the zone. It saved so much time! Never having to look up what a function's arguments were... it just saved so much time."   Lou's AI coding journey began in late 2022 with GitHub Copilot's free trial. He bought a subscription immediately after the trial ended because of one transformative benefit: staying in the flow state. 
The autocomplete functionality eliminated constant context switching to documentation, Stack Overflow searches, and function signature lookups. This wasn't about replacing thinking; it was about removing friction from implementation. Lou could maintain focus on the problem he was solving rather than getting derailed by syntax details. This experience shaped his understanding that AI's value lies in removing obstacles to productivity, not in replacing the developer's judgment about architecture and design. Thinking in Commits: The Right Size for AI Work "I think of prompts commit-by-commit. That's the size of the work I'm trying to do in a prompt." Lou's workflow centers on a simple principle: size your prompts to match what should be a single commit. This constraint provides multiple benefits. First, it keeps changes small enough to review thoroughly; if a commit is too big to review properly, the prompt was too ambitious. Second, it creates a clear commit history that tells a story about how the code evolved. Third, it enables easy rollback if something goes wrong. This commit-sized thinking mirrors good development practices that existed long before AI: small, focused changes that each accomplish one clear purpose. Lou uses inline prompting in Cursor (Command-K) for these localized changes because it keeps context tight: "Right here, don't go look at the rest of my files... Everything you need is right here. The context is right here... And it's fast." The Tech Debt Question: Same Code, Same Debt "Based on the way I've defined how I did it, it's exactly the same amount of tech debt that I would have done on my own... I'm faster and can make more code, but I invest some of that savings back into cleaning things up." As the author of "Swimming in Tech Debt," Lou brings a unique perspective to whether AI coding creates more technical debt. His answer: not if you're reading and reviewing everything. 
When you maintain the same quality standards—code review, architectural oversight, refactoring—you generate the same amount of debt as manual coding. The difference is speed. Lou gets productivity gains from AI, and he consciously reinvests a portion of those gains back into code quality through refactoring. This creates a virtuous cycle: faster development enables more time for cleanup, which maintains a codebase that's easier for both humans and AI to work with. The key insight is that tech debt isn't caused by AI—it's caused by skipping quality practices regardless of how code is generated. When Vibecoding Creates Debt: AI Resistance as a Symptom "When you start asking the AI to do things, and it can't do them, or it undoes other things while it's doing them... you're experiencing the tech debt a different way. You're trying to make changes that are on your roadmap, and you're getting resistance from making those changes."   Lou identifies a fascinating pattern: tech debt from vibecoding (without code review) manifests as "AI resistance"—difficulty getting AI to make the changes you want. Instead of compile errors or brittle tests signaling problems, you experience AI struggling to understand your codebase, undoing changes while making new ones, or producing code with repetition and tight coupling. These are classic tech debt symptoms, just detected differently. The debt accumulates through architecture violations, lack of separation of concerns, and code that's hard to modify. Lou's point is profound: whether you notice debt through test failures or through AI confusion, the underlying problem is the same—code that's difficult to change. The solution remains consistent: maintain quality practices including code review, even when AI makes generation fast. Can AI Fix Tech Debt? Yes, With Guidance "You should have some acceptance criteria on the code... guide the LLM as to the level of code quality you want."   
Lou is optimistic but realistic about AI's ability to address existing tech debt. AI can definitely help with refactoring and adding tests—but only with human guidance on quality standards. You must specify what "good code" looks like: acceptance criteria, architectural patterns, quality thresholds. Sometimes copy/paste is faster than having AI regenerate code. Very convoluted codebases challenge both humans and AI, so some remediation should happen before bringing AI into the picture. The key is recognizing that AI amplifies your approach—if you have strong quality standards and communicate them clearly, AI accelerates improvement. If you lack quality standards, AI will generate code just as problematic as what already exists. Reinvesting Productivity Gains in Quality "I'm getting so much productivity out of it, that investing a little bit of that productivity back into refactoring is extremely good for another kind of productivity."   Lou describes a critical strategy: don't consume all productivity gains as increased feature velocity. Reinvest some acceleration back into code quality through refactoring. This mirrors the refactor step in test-driven development—after getting code working, clean it up before moving on. AI makes this more attractive because the productivity gains are substantial. If AI makes you 30% faster at implementation, using 10% of that gain on refactoring still leaves you 20% ahead while maintaining quality. Lou explicitly budgets this reinvestment, treating quality maintenance as a first-class activity rather than something that happens "when there's time." This discipline prevents the debt accumulation that makes future work progressively harder. The 100x Code Concern: Accountability Remains Human "Directionally, I think you're probably right... this thing is moving fast, we don't know. But I'm gonna always want to read it and approve it."   
When discussing concerns about AI generating 100x more code (and potentially 100x more tech debt), Lou acknowledges the risk while maintaining his position: he'll always read and approve code before it enters the repository. This isn't about slowing down unnecessarily—it's about maintaining accountability. Humans must make the decisions because only humans can be held accountable for those decisions. Lou sees potential for AI to improve by training on repository evolution rather than just end-state code, learning from commit history how codebases develop. But regardless of AI improvements, the human review step remains essential. The goal isn't to eliminate human involvement; it's to shift human focus from typing to thinking, reviewing, and making architectural decisions. Practical Workflow: Inline Prompting and Small Changes "Right here, don't go look at the rest of my files... Everything you need is right here. The context is right here... And it's fast."   Lou's preferred tool is Cursor with inline prompting (Command-K), which allows him to work on specific code sections with tight context. This approach is fast because it limits what AI considers, reducing both latency and irrelevant changes. The workflow resembles pair programming: Lou knows what he wants, points AI at the specific location, AI generates the implementation, and Lou reviews before accepting. He also uses Claude Code for full codebase awareness when needed, but the inline approach dominates his daily work. The key principle is matching tool choice to context needs—use inline prompting for localized changes, full codebase tools when you need broader understanding. This thoughtful tool selection keeps development efficient while maintaining control. Resources and Community Lou recommends Steve Yegge's upcoming book on vibecoding. His website, LouFranco.com, provides additional resources.    About Lou Franco   Lou Franco is a veteran software engineer and author of Swimming in Tech Debt. 
With decades of experience at startups as well as at Trello and Atlassian, he's seen both sides of debt, as coder and leader. Today, he advises teams on engineering practices, helping them turn messy codebases into momentum. You can link with Lou Franco on LinkedIn and visit his website at LouFranco.com.

Dev Interrupted
From Kubernetes to AI maximalism | Stacklok's Craig McLuckie

Dev Interrupted

Play Episode Listen Later Nov 25, 2025 55:29


When you co-create Kubernetes, you earn the right to have strong opinions on the next platform shift. This week, Ben sits down with Craig McLuckie, Co-founder & CEO of Stacklok, who is advocating for a shift in leadership mindset. He argues we need to move from asking if we can use AI to demanding to know why we can't. Listen to hear why he believes an "AI maximalist" philosophy is the only way to survive the next cycle. Follow today's guest(s): Connect with Craig McLuckie: LinkedIn. Check out Stacklok: Stacklok Website.

Windows Weekly (MP3)
WW 959: Thurrott Syndrome - Microsoft Faces AI Backlash as Windows 11 Evolves

Windows Weekly (MP3)

Play Episode Listen Later Nov 19, 2025 150:40


Ahead of Microsoft Ignite 2025, Windows boss Pavan Davuluri tweeted an innocuous post about nothing, and all hell broke loose. We are broken as a community and it's time to cull the herd. Ignite 2025 Fun aside: Google could have announced Gemini 3 at any time, but they chose the opening day of Ignite. Who's dancing now? No Satya and suddenly the keynote is watchable again Microsoft brings Anthropic models to Foundry along with Nvidia architecture MCP comes to Windows 11 in public preview for developers New Microsoft 365 Copilot agents for Word, Excel, and PowerPoint Agent 365 is the obvious name of an AI agent management service Windows 11 is getting agents on the Taskbar because it isn't annoying enough already Windows 11 Two new Release Preview builds, a new Canary build, and the first release of Copilot Actions The RP builds are a preview of Patch Tuesday in December, it's bigger than expected Dev/Beta build with experimental AI agent capabilities, more AI OpenAI released ChatGPT 5.1 and it's like no one noticed Mozilla announces AI window for Firefox, with immediate backlash Xbox and gaming Qualcomm JUST announced a new control panel for Snapdragon X gaming Hands-on with the Xbox Full Screen Experience (FSE) for Windows 11 FSE Transforms a gaming handheld PC into a device-like experience Frame rates see a dramatic jump in FSE Call of Duty, which was surprising Fortnite is coming to the Xbox app in Windows, adding Xbox Play Anywhere support Xbox announces a new set of titles coming to Game Pass across platforms Xbox Partner Preview event is set for November 20 As predicted, Steam Machine is the "Xbox Microsoft wanted to make." 
Yes, it's a good idea now that someone else is doing it Tips and picks Tip of the week: Tiny11 Builder, again Hardware pick of the week: Lenovo Legion Go 2 RunAs Radio this week: Azure SRE Agents with Deepthi Chelupati Brown liquor pick of the week: Jameson Rarest Vintage Reserve 2007 Hosts: Leo Laporte, Paul Thurrott, and Richard Campbell Download or subscribe to Windows Weekly at https://twit.tv/shows/windows-weekly Check out Paul's blog at thurrott.com The Windows Weekly theme music is courtesy of Carl Franklin. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: ventionteams.com/twit helixsleep.com/windows framer.com/design promo code WW


PodRocket - A web development podcast from LogRocket
GitHub's Octoverse: TypeScript, Copilot, and Open Source Struggles

PodRocket - A web development podcast from LogRocket

Play Episode Listen Later Nov 13, 2025 49:06


In this episode of PodRocket, Jack and Paige dive into the latest GitHub Octoverse report, covering trends like shipping faster with AI, the dominance of TypeScript as the top language, the rise of AI-generated pull requests, and the concerning drop in code review comments. They unpack the growing role of Copilot, the tension between OSS contributions and burnout, and the surge in AI infrastructure projects like Ollama. The discussion also touches on open source governance, the docs gap, prompt injection risks, and whether AI-powered browsers can succeed beyond the dev crowd. Links Resources Octoverse: A new developer joins GitHub every second as AI leads TypeScript to #1: https://github.blog/news-insights/octoverse/octoverse-a-new-developer-joins-github-every-second-as-ai-leads-typescript-to-1 We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Fill out our listener survey (https://t.co/oKVAEXipxu)! Let us know by sending an email to our producer, Elizabeth, at elizabeth.becz@logrocket.com, or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Check out our newsletter (https://blog.logrocket.com/the-replay-newsletter/)! Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today. (https://logrocket.com/signup/?pdr) Chapters 01:15 – What is GitHub's Octoverse Report? 
02:15 – Shipping Faster with AI 03:45 – Copilot's Impact on Code Quality 05:15 – TypeScript Takes the Lead 06:30 – Concerns About AI PR Volume 07:45 – Decline in Code Reviews 09:15 – OSS Maintenance Crisis 11:00 – GitHub Copilot and Funding OSS 12:30 – Where AI Actually Helps Devs 14:00 – Small Models and Running Locally 16:00 – TypeScript vs Python: Stack Implications 18:30 – Language Trends and AI Consolidation 21:00 – Framework and Stack Fragility in AI Era 24:00 – Docs Gap in OSS Projects 26:30 – Open Source Governance and Security Gaps 30:00 – AI Infrastructure Projects Leading GitHub 33:00 – Will AI Browsers Catch On? 35:00 – Prompt Injection and Security Risks 37:00 – Opportunity in OSS Documentation 39:30 – Final Thoughts and Hot Takes Special Guest: Jack Herrington.

Working Code
236: Trunk or Treat

Working Code

Play Episode Listen Later Nov 3, 2025 47:57 Transcription Available


In this week's episode the hosts gather round and share what they've been up to for trunk or treat. Adam shares his waning motivation for his Jump Run side project, and we explore sustainable motivation, the rewrite temptation, and whether it's okay to just... do the fun thing sometimes. Meanwhile, Tim provides a reality check on AI coding tools: he spent real hours comparing GitHub Copilot and Codex on actual work, and the results are messier than the hype suggests. Follow the show and be sure to join the discussion on Discord! Our website is workingcode.dev and we're @workingcode.dev on Bluesky. New episodes drop weekly on Wednesday. And, if you're feeling the love, support us on Patreon. With audio editing and engineering by ZCross Media. Full show notes and transcript here.

Scouting for Growth
Agentic Frontier: Re-imagining Enterprise AI with EY x Microsoft

Scouting for Growth

Play Episode Listen Later Oct 30, 2025 43:22


On this episode of the Scouting For Growth podcast, Sabine VdL talks to Ulrich (Uli) Homann, Corporate Vice President, Microsoft, and Mark Luquire, EY Global Microsoft Alliance Co-innovation Leader, about how to build an agentic AI enterprise that doesn't just work faster, but works smarter and, most importantly, works for everyone. KEY TAKEAWAYS In the past, automation has been very task-driven and specific: things had to go in a certain order, and you needed to know that order ahead of time. While you still need some of that with generative AI, we now have a system that can help do some of that thinking, so if things change in the process along the way, you can deal with it. Now you can rethink what processes even need to exist and focus on the outcome and how to get to it in a new way. By giving everyone at EY access to generative AI a couple of years ago, we learned that people were able to accomplish more, more quickly. They used it as a thought partner and as a way to fine-tune the product they were working on. Seeing the evolution of generative AI to the point where it's almost coding applications on its own, seeing the new agent capabilities and tools, and seeing it take action with very little prompting opens the door to what you'll be able to do in the future. BEST MOMENTS ‘Focus on where you want to be and then rethink how you're going to get there, that's the real key.' ‘It's not just an assistant to you, providing you with information, it's actually taking on work, it's actually thinking through and processing those things as well.' ABOUT THE GUESTS Ulrich (Uli) Homann is a Corporate Vice President & Distinguished Architect in the Cloud + AI business at Microsoft. As part of the senior engineering leadership team, he's responsible for customer-led innovation efforts across the cloud and enterprise platform portfolio. 
Previously, Homann was the Chief Architect for Microsoft worldwide enterprise services, having formerly played a key role in the business's newly formed Platforms, Technology and Strategy Group. Prior to joining Microsoft in 1991, he worked for several small consulting companies, where he designed and developed distributed systems; he has spent most of his career using well-defined applications and architectures to simplify and streamline the development of business applications. Mark Luquire leads the EY organization's global efforts to co-develop innovative solutions with Microsoft and clients, driving growth and accelerating technology strategy. He oversees cross-functional teams spanning sectors and service lines, serving as a key liaison to Microsoft's product and engineering teams. Previously, Mark headed Platform Adoption for EY Global, leading enterprise-wide AI and cloud enablement, including integrating generative AI tools like EYQ, GitHub Copilot, and Microsoft Copilot. He also created the first EY Global DevOps Practice and led cloud transformation efforts, making EY a leader in Microsoft Azure usage. Mark's career includes leadership roles in large healthcare enterprises and technology startups, where he established scalable operations, spearheaded digital transformation, and built high-performing global teams. ABOUT THE HOST Sabine is a corporate strategist turned entrepreneur. She is the CEO and Managing Partner of Alchemy Crew, a venture lab that accelerates the curation, validation, and commercialization of new tech business models. Sabine is renowned within the insurance sector for building some of the best-known tech startup accelerators around the world, working with over 30 corporate insurers and accelerating over 100 startup ventures. Sabine is the co-editor of the bestseller The INSURTECH Book, a top 50 Women in Tech, a FinTech and InsurTech influencer, and an investor & multi-award winner. 
This Podcast has been brought to you by Disruptive Media. https://disruptivemedia.co.uk/

Windows Weekly (MP3)
WW 956: Blowing the Dust Off Skype - Azure's Front Door Leaves Customers Locked Out

Windows Weekly (MP3)

Play Episode Listen Later Oct 29, 2025 169:10


Welp, Azure crashed on Microsoft's earnings day, the cloud's weakest link exposed just as AI investments hit mind-boggling numbers. And 2.5 years into the AI era, things are still moving quickly, and there are extreme opinions on both ends of the spectrum. But Paul finally found a source for a good way to evaluate AI and figure out where it works and where it does not. It came from an unexpected place.
Windows 11 Week D arrives with a massive Preview Update for 24H2 and 25H2 - including the new Start menu, finally Copy & Search, Voice typing improvements, Proactive Memory Diagnostics, and more in Dev and Beta
Copilot Vision in the Copilot app updated with text input and output across all Insider channels
Intel earnings are great unless you understand how numbers work
Microsoft 365
Australia regulator sues Microsoft over misleading Microsoft 365 consumer pricing
Copilot is being integrated into the People, Files, and Calendar companion apps for Microsoft 365 commercial
On the day Microsoft will report earnings, Microsoft 365 and Azure went down. Hilarious!
AI
OpenAI completes its transition to a for-profit owned by a non-profit; Microsoft's stake is 27 percent. A lot has changed in the Microsoft/OpenAI partnership agreement
WSJ finally calls out Microsoft for its lack of financial reporting transparency. Paul's been complaining about that for over a decade - Big Tech has become a shell game. These companies are managing money they don't even have, and actual products and services and "real" value be damned
Big Copilot feature dump for consumers with a human touch: Mico, Copilot Groups, memory improvements, connectors, Proactive Actions in preview, Copilot for Health, Copilot in Edge improvements, and Copilot in Windows updates from last week
Microsoft 365 Copilot is getting App Builder and Workflow agents
GitHub Copilot to support third-party AI agents
Grammarly rebrands as Superhuman
Xbox and gaming
Credible report claims Microsoft requires Xbox/Microsoft Gaming to deliver a 30 percent profit margin. That is impossible, and this is clearly coming from Amy Hood and has led to the ensh*ttification of Xbox as a platform
As Microsoft launches its first gaming handhelds, all anyone wants to talk about is the next-generation Xbox console. It started with Sarah Bond last week - a "very premium" console with a "curated" experience
Phil Spencer discussed it this week, implying Windows is at the heart of the console
The rumor mill churns up - it will be Windows, as we've said, and will drop the multiplayer paywall that debuted in 2002
Now Satya Nadella is commenting on the next console, confirming a publisher focus for this business
Halo: Campaign Evolved is coming in 2026, with new features, new Unreal Engine graphics, and new PS5 compatibility
Also, The Outer Worlds 2 is now available. Yes, on PS5 too
Amazon relaunches Luna, and the new Amazon layoffs point to a new focus on casual gaming
Tips and picks
Tip of the week: Understand where AI works and where AI is just a marketing term used to hype something that doesn't work
App pick of the week: Tiny11 Builder
RunAs Radio this week: AI for DBAs with Grant Fritchey
Brown liquor pick of the week: Redbreast Dream Casks
These show notes have been truncated due to length. For the full show notes, visit https://twit.tv/shows/windows-weekly/episodes/956
Hosts: Leo Laporte, Paul Thurrott, and Richard Campbell
Sponsors: framer.com/design promo code WW, auraframes.com/ink, ventionteams.com/twit, cachefly.com/twit

Hanselminutes - Fresh Talk and Tech for Developers
AI-Powered Migration plus Raw Experience with Mike Rousos

Hanselminutes - Fresh Talk and Tech for Developers

Play Episode Listen Later Oct 23, 2025 36:25


On this episode of Hanselminutes, Scott Hanselman talks with cloud migration and app modernization expert Mike Rousos about the challenges and opportunities of bringing decades-old applications into the modern era. They discuss practical strategies for app modernization, how AI and GitHub Copilot are reshaping developer workflows, and what it takes to transform legacy software into systems ready for the future.

Lenny's Podcast: Product | Growth | Career
How to measure AI developer productivity in 2025 | Nicole Forsgren

Lenny's Podcast: Product | Growth | Career

Play Episode Listen Later Oct 19, 2025 67:48


Nicole Forsgren created the most widely used frameworks for measuring developer productivity: DORA and SPACE. She wrote the foundational book Accelerate and is about to release her newest book, Frictionless, a practical guide for helping teams move faster in the AI era. She's currently Senior Director of Developer Intelligence at Google.
We discuss:
1. Why most productivity metrics are a lie
2. Signs that your engineering team could be moving much faster
3. Why AI accelerates coding but developers aren't speeding up as much as you think
4. AI's impact on engineers getting into "flow"
5. Her framework for building and scaling a developer experience team
6. The three components of developer experience: flow state, cognitive load, and feedback loops
Brought to you by:
• Mercury—The art of simplified finances: https://mercury.com/
• WorkOS—Modern identity platform for B2B SaaS, free up to 1 million MAUs: https://workos.com/lenny
• Coda—The all-in-one collaborative workspace: https://coda.io/lenny
Where to find Nicole Forsgren:
• Twitter: https://twitter.com/nicolefv
• LinkedIn: https://www.linkedin.com/in/nicolefv/
• Website: https://nicolefv.com/
Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/
In this episode, we cover:
(00:00) Introduction to Nicole Forsgren
(05:09) The concept of developer experience (DevEx)
(08:33) Flow state and cognitive load in the age of AI
(12:02) Challenges in measuring productivity with AI
(21:19) The importance of developer experience for business value
(22:20) Common issues and solutions in developer experience
(26:49) Signs your eng team is moving too slow
(29:52) How AI is improving productivity
(33:32) Real examples of productivity improvements
(36:35) Introducing her new book, Frictionless
(43:40) How to get started building a DevEx team
(45:15) The impact of forming developer experience teams
(46:15) How to measure the impact of DevEx teams
(48:53) Measuring the impact of AI tools on productivity
(55:16) Survey design for developer experience
(57:59) Popular AI tools for developers
(59:08) Bringing a product mindset to DevEx improvements
(01:00:40) AI corner
(01:02:33) Lightning round and final thoughts
Referenced:
• How to measure and improve developer productivity | Nicole Forsgren (Microsoft Research, GitHub, Google): https://www.lennysnewsletter.com/p/how-to-measure-and-improve-developer
• DORA: https://dora.dev/
• The SPACE framework: A comprehensive guide to developer productivity: https://getdx.com/blog/space-metrics/
• Measuring developer productivity with the DX Core 4: https://getdx.com/research/measuring-developer-productivity-with-the-dx-core-4/
• Gloria Mark's website: https://gloriamark.com/
• Taking Flight with Copilot: https://dl.acm.org/doi/10.1145/3589996
• DevEx in Action: https://spawn-queue.acm.org/doi/10.1145/3639443
• Codex: https://openai.com/codex/
• Devin: https://devin.ai/
• Abi Noda on LinkedIn: https://www.linkedin.com/in/abinoda/
• DX is joining Atlassian: https://getdx.com/blog/dx-is-joining-atlassian/
• GitHub Copilot: https://github.com/features/copilot
• Cursor: https://cursor.com/
• The rise of Cursor: The $300M ARR AI tool that engineers can't stop using | Michael Truell (co-founder and CEO): https://www.lennysnewsletter.com/p/the-rise-of-cursor-michael-truell
• Gemini Code Assist: https://codeassist.google/
• Claude Code: https://www.claude.com/product/claude-code
• The AI-native startup: 5 products, 7-figure revenue, 100% AI-written code | Dan Shipper (co-founder/CEO of Every): https://www.lennysnewsletter.com/p/inside-every-dan-shipper
• Love Is Blind on Netflix: https://www.netflix.com/title/80996601
• Shrinking on AppleTV+: https://tv.apple.com/us/show/shrinking/umc.cmc.apzybj6eqf6pzccd97kev7bs
• Ninja Creami: https://www.amazon.com/Ninja-NC301-CREAMi-Containers-Bundle/dp/B0BLGR5JPV/
• Jura coffee maker: https://www.amazon.com/Jura-Nordic-Automatic-Coffee-Machine/dp/B0CF65BFZ1/
Recommended books:
• Frictionless: https://developerexperiencebook.com/
• DevEx Workbook: https://developerexperiencebook.com/#workbook
• Outlive: The Science and Art of Longevity: https://www.amazon.com/Outlive-Longevity-Peter-Attia-MD/dp/0593236599
• Back Mechanic: https://www.amazon.com/Back-Mechanic-Stuart-McGill-2015-09-30/dp/B01FKSGJYC
• How Big Things Get Done: The Surprising Factors That Determine the Fate of Every Project, from Home Renovations to Space Exploration and Everything in Between: https://www.amazon.com/How-Big-Things-Get-Done/dp/0593239512/
• The Undoing Project: A Friendship That Changed Our Minds: https://www.amazon.com/dp/B01KBM82M4/
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com. Lenny may be an investor in the companies discussed. To hear more, visit www.lennysnewsletter.com

Lenny's Podcast: Product | Growth | Career
How to find hidden growth opportunities in your product | Albert Cheng (Duolingo, Grammarly, Chess.com)

Lenny's Podcast: Product | Growth | Career

Play Episode Listen Later Oct 5, 2025 85:25


Albert Cheng has led growth at three of the world's most successful consumer subscription companies: Duolingo, Grammarly, and Chess.com. A former Google product manager (and serious pianist!), Albert developed a unique approach to finding and scaling growth opportunities through rapid experimentation and deep user psychology. His teams run 1,000 experiments a year, discovering counterintuitive insights that have driven tens of millions in revenue.
What you'll learn:
1. How to use the explore-exploit framework to find new growth opportunities
2. How showing premium features to free users doubled Grammarly's upgrades to paid plans
3. What good retention looks like for a consumer subscription app
4. Why resurrected users drive 80% of mature product growth
5. Why "reverse trials" work better than time-based trials
6. The three pillars of successful gamification: core loop, metagame, and profile
Brought to you by:
• Vanta—Automate compliance. Simplify security.
• Jira Product Discovery—Confidence to build the right thing
• Miro—A collaborative visual platform where your best work comes to life
Where to find Albert Cheng:
• X: https://x.com/albertc248
• LinkedIn: https://www.linkedin.com/in/albertcheng1/
• Chess.com: https://www.chess.com/member/Goniners
Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/
Referenced:
• How Duolingo reignited user growth: https://www.lennysnewsletter.com/p/how-duolingo-reignited-user-growth
• Inside ChatGPT: The fastest-growing product in history | Nick Turley (Head of ChatGPT at OpenAI): https://www.lennysnewsletter.com/p/inside-chatgpt-nick-turley
• Explore vs. Exploit: https://brianbalfour.com/quick-takes/explore-vs-exploit
• Grammarly: https://www.grammarly.com/
• Reforge: https://www.reforge.com/
• Chess.com: https://www.chess.com/
• Everyone's an engineer now: Inside v0's mission to create a hundred million builders | Guillermo Rauch (founder & CEO of Vercel, creators of v0 and Next.js): https://www.lennysnewsletter.com/p/everyones-an-engineer-now-guillermo-rauch
• Building Lovable: $10M ARR in 60 days with 15 people | Anton Osika (CEO and co-founder): https://www.lennysnewsletter.com/p/building-lovable-anton-osika
• Figma: https://www.figma.com/
• Cursor: https://cursor.com/
• The rise of Cursor: The $300M ARR AI tool that engineers can't stop using | Michael Truell (co-founder and CEO): https://www.lennysnewsletter.com/p/the-rise-of-cursor-michael-truell
• Claude Code: https://www.anthropic.com/claude-code
• GitHub Copilot: https://github.com/features/copilot
• Noam Lovinsky on LinkedIn: https://www.linkedin.com/in/noaml/
• The happiness and pain of product management | Noam Lovinsky (Grammarly, Facebook, YouTube, Thumbtack): https://www.lennysnewsletter.com/p/the-happiness-and-pain-of-product
• Kyla Siedband on LinkedIn: https://www.linkedin.com/in/kylasiedband/
• The Duolingo handbook: https://blog.duolingo.com/handbook/
• Lenny's post on X about the Duolingo handbook: https://x.com/lennysan/status/1889008405584683091
• The rituals of great teams | Shishir Mehrotra of Coda, YouTube, Microsoft: https://www.lennysnewsletter.com/p/the-rituals-of-great-teams-shishir
• Duolingo on TikTok: https://www.tiktok.com/@duolingo
• Kasparov vs. Deep Blue | The Match That Changed History: https://www.chess.com/article/view/deep-blue-kasparov-chess
• Magnus Carlsen: https://en.wikipedia.org/wiki/Magnus_Carlsen
• Elo rating system: https://www.chess.com/terms/elo-rating-chess
• Stockfish: https://en.wikipedia.org/wiki/Stockfish_(chess)
• AlphaGo on Prime Video: https://www.primevideo.com/detail/AlphaGo/0KNQHKKDAOE8OCYKQS9WSSDYN0
• Statsig: https://www.statsig.com/
• The State of Product in 2026: Navigating Change, Challenge, and Opportunity: https://www.atlassian.com/blog/announcements/state-of-product-2026
• Erik Allebest on LinkedIn: https://www.linkedin.com/in/erikallebest/
• Daniel Rensch on X: https://x.com/danielrensch
• Chariot: https://en.wikipedia.org/wiki/Chariot_(company)
• San Francisco 49ers: https://www.49ers.com/
• Breville Barista Express: https://www.breville.com/en-us/product/bes870
Recommended books:
• Snuggle Puppy!: A Little Love Song: https://www.amazon.com/Snuggle-Puppy-Little-Boynton-Board/dp/1665924985
• Ogilvy on Advertising: https://www.amazon.com/Ogilvy-Advertising-David/dp/039472903X
• Dark Squares: How Chess Saved My Life: https://www.amazon.com/Dark-Squares-Chess-Saved-Life/dp/1541703286
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com. Lenny may be an investor in the companies discussed. To hear more, visit www.lennysnewsletter.com