Australia's hot, dry climate and unique landscapes make it one of the most bushfire-prone countries in the world. As climate change drives higher temperatures and drier conditions, we can expect more frequent and more intense bushfires in the future. But once the fire is out, how do you return home safely, find support, and look after yourself?
Have you ever given up trying to code on the go? I'll confess that coding on my Android tablet wasn't working for me, and the reason was simple: laziness and the lack of a coherent environment. In today's episode, I explain how I solved this problem at the root by installing Code Server on a remote server. Throughout this audio, we explore the challenges of maintaining multiple development environments and why fragmentation kills your creativity. I walk through my technical setup step by step: from building a custom Docker image to integrating modern tools written in Rust (such as Bat and LSD) that improve the terminal experience.

What you'll learn in this episode:
- Why a development server beats local installs on tablets.
- How to configure Docker Compose to deploy Code Server with real persistence.
- Advanced security: using Traefik, Pocket ID, and geoblocking to protect your code.
- Configuration tricks for VS Code in the browser: key mapping, avoiding the Escape-key conflict, and using the JetBrains Mono font.
- Maximum productivity with Vim modes integrated into the web workflow.
- How to turn Code Server into a PWA to eliminate browser distractions on Android.

It's not just about technology; it's about removing the friction that keeps us from making progress on our projects.
If you want to know how to turn any device with a browser into your main workstation, don't miss this episode.

Episode timeline:
00:00:00 The failure of coding on a tablet (and why)
00:01:43 The definitive solution: Code Server
00:02:12 The problem of fragmented environments
00:03:53 My custom Docker image for Code Server
00:05:04 Essential Rust tools (Bat, LSD, SD)
00:06:23 Configuring Rust and development tools
00:07:05 Persistence and Docker Compose
00:08:06 Security: Traefik, Pocket ID, and geoblocking
00:10:03 Optimizing VS Code for the browser
00:11:13 Syncing and persisting extensions
00:12:43 Aesthetics and typography (Ayu Dark and JetBrains Mono)
00:13:59 The power of Vim inside Code Server
00:15:51 How to use Code Server as a PWA on Android
00:17:04 A physical keyboard: the mandatory accessory
00:18:50 Conclusions and the future of remote development

Remember that you can find all the notes, the repository, and the links mentioned at atareao.es. If you enjoy the content, a rating on Spotify or Apple Podcasts helps a great deal in spreading the word about Linux and open source.
More information and links in the episode notes.
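A Code Server deployment of the kind described in this episode could look roughly like the following minimal sketch. The image tag, port mapping, and volume paths are assumptions for illustration only; the episode's actual stack additionally fronts this with Traefik, Pocket ID, and geoblocking, which are omitted here.

```yaml
# Hypothetical docker-compose.yml: Code Server with persistent config
# and workspace volumes (names, paths, and port are illustrative).
services:
  code-server:
    image: codercom/code-server:latest
    ports:
      - "8443:8080"            # Code Server listens on 8080 inside the container
    environment:
      - PASSWORD=change-me     # use a proper secret in real deployments
    volumes:
      - ./config:/home/coder/.config    # settings, keybindings, extension state
      - ./project:/home/coder/project   # your actual source code
    restart: unless-stopped
```

The two bind mounts are what give the "real persistence" mentioned above: both editor state and code survive container rebuilds.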
How does the Hmong Chinese New Year celebration differ from that of Hmong communities elsewhere in the world? Why is the ncej ntxheb ncej txhoo pillar raised? Ntxawm Muas, a Hmong Chinese woman living in Paj Tawg Lag, explains the meaning of the pillar for you.
The Linux Kernel 6.19 is out, the Rust experiment is over, and it's time to talk about 7.0. Vim 9.2 is out, with a bit of a weird new feature in its changelog, and IPFire is an intriguing, community-driven security domain blocklist. PearOS has a new release for those seeking an Apple-inspired Linux experience, and Linux Mint is adjusting its release schedule to better manage developer and tester schedules. River is a new project trying to do Wayland support with a modular desktop stack, and Mesa 26.0 is out with impressive performance gains. For tips, we have cull for finding and deleting big files, a systemd program for detecting if the OS is running virtualized, preload for caching applications in RAM, and new_script for a script-writing tool that *doesn't* feature an LLM. You can find the show notes at https://bit.ly/3ZCNcEc and happy Linux'ing!

Host: Jonathan Bennett
Co-Hosts: Rob Campbell, Ken McDonald, and Jeff Massie

Download or subscribe to Untitled Linux Show at https://twit.tv/shows/untitled-linux-show

Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit

Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.
This is a recap of the top 10 posts on Hacker News on February 14, 2026. This podcast was generated by wondercraft.ai.

(00:30): uBlock filter list to hide all YouTube Shorts
Original post: https://news.ycombinator.com/item?id=47016443&utm_source=wondercraft_ai
(01:57): An AI agent published a hit piece on me – more things have happened
Original post: https://news.ycombinator.com/item?id=47009949&utm_source=wondercraft_ai
(03:24): Ars Technica makes up quotes from Matplotlib maintainer; pulls story
Original post: https://news.ycombinator.com/item?id=47013059&utm_source=wondercraft_ai
(04:51): Ooh.directory: a place to find good blogs that interest you
Original post: https://news.ycombinator.com/item?id=47014449&utm_source=wondercraft_ai
(06:18): News publishers limit Internet Archive access due to AI scraping concerns
Original post: https://news.ycombinator.com/item?id=47017138&utm_source=wondercraft_ai
(07:45): My smart sleep mask broadcasts users' brainwaves to an open MQTT broker
Original post: https://news.ycombinator.com/item?id=47015294&utm_source=wondercraft_ai
(09:12): Vim 9.2
Original post: https://news.ycombinator.com/item?id=47015330&utm_source=wondercraft_ai
(10:39): Zig – io_uring and Grand Central Dispatch std.Io implementations landed
Original post: https://news.ycombinator.com/item?id=47012717&utm_source=wondercraft_ai
(12:06): Homeland Security Wants Social Media Sites to Expose Anti-ICE Accounts
Original post: https://news.ycombinator.com/item?id=47009582&utm_source=wondercraft_ai
(13:33): Platforms bend over backward to help DHS censor ICE critics, advocates say
Original post: https://news.ycombinator.com/item?id=47015406&utm_source=wondercraft_ai

This is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio-quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai
What will Japan's Prime Minister Sanae Takaichi do to help her country, now that her decisive election victory has put her at the head of Japan's new government? Why did she win? And what do young people and the international community think of her becoming leader?
While the Australian government and many other countries continue to monitor the Nipah virus, some countries have already begun screening for it at their airports as a preventive measure. But can it really be contained, after India confirmed that two people have been infected and at least 190 contacts have been isolated for observation? Experts say it can take two weeks or more for symptoms to appear, so stopping the virus from spreading may not be possible, and at present there is no vaccine and no treatment for the disease.
The Quifa field could add up to 16 million barrels of reserves, while the Cubiro block would free up approximately 8 million barrels. In addition, the VIM-1 and El Difícil blocks in the Magdalena basin would increase gas and condensate production.

See omnystudio.com/listener for privacy information.
How do we heal, grow, and change as apprentices of Jesus? Many of us have subscribed to the traditional Christian approach of “trying harder” to “believe and do what's right,” only to find ourselves stuck and discouraged. Thankfully, the vision Jesus casts for transformation shows us a different path forward.

Join us for this episode of Soul Talks as Bill and Kristi share how Dallas Willard's mantra, “Don't try — train,” revolutionized their approach to spiritual formation. You'll burn with a desire to become more loving and healthy and get equipped with a practical tool to help you grow in Christlikeness one area at a time.

If you want to go deeper into the insights we gained from Dallas Willard, we invite you to join us on a retreat or train to become a spiritual director with Soul Shepherding. You can learn more by following the links below.

Resources for this Episode:
Attend a Soul Shepherding Retreat
Earn a Certificate in Spiritual Direction
Your Best Life in Jesus' Easy Yoke: Rhythms of Grace to De-Stress and Live Empowered
Donate to Support Soul Shepherding and Soul Talks
Shortwave Kitsch
SWK REWINDS S1, E2
Extra Intelligence - “Pretty Fly for a Spy Guy”
Written by Brandon L. Joyner

Synopsis: In the midst of a world dominated by the ominous red tide of the Third Reich, Castor Lovis emerges as an unexpected force within Nazi intelligence. Castor is from a distant realm, unbeknownst to his superiors, and his allegiance lies with a secret American agenda to dismantle the oppressive regime. His undercover mission unfolds against the backdrop of a noir-inspired landscape. The question lingers: Will these covert maneuvers alter the course of history or seal Castor's fate?

Cast:
Chad Estel - Castor Lovis
Michelle Junga-Murphy - Jonas Graf
Maddie Casto - Edith Bauer
Michael Catangay - Toby Molina
Brandon L. Joyner - Gunther Vogel

Dr. Clutterbuck's “Vim and Vigor Cigarettes” Commercial
Michael Catangay - Pitchman
Miguel Rabsatt - Man on the Street

Jingle - “Vim and Vigor Cigarettes”
Sung by Jeannie Joyner, John Joyner, Kristen N. Granet and the Cast of SWK
Music by Pedro M. Toro
Lyrics by Brandon L. Joyner

Song - “I'll Always Love You”
Sung by Maddie Casto
Music by Pedro M. Toro
Lyrics by Brandon L. Joyner

Song - "Until Next Time"
Sung by David Joyner, Jeannie Joyner, John Joyner, Kristen N. Granet and the Cast of SWK
Music by Pedro M. Toro
Lyrics by Brandon L. Joyner

Narrated by David Joyner
Extra Intelligence Theme by Pedro M. Toro
Music Direction and Accompaniment by Pedro M. Toro
Sound Effects performed LIVE by Brooke Rash

Production Team:
Producers: Maddie Casto-Koebler, Kristen N. Granet, Brandon L. Joyner, Brooke Rash
Sound Engineer: Matt Ciclon
Sound Mixing: Matt Ciclon
Originally Recorded by Amanda Henley
Business Manager: Kristen N. Granet
Original episode art by Maddie Casto-Koebler
Logo: Linkon

Special Thanks to: Lesa Spillers, Julia Sorenson, Tara O'Shields, Rose Newman, Josh Anderson, Kerry Bowers, Jeff & Teri Ziccardi, The Joyner Family, Cheryl Granet, and other marvelous yet anonymous donors!

Connect with Us: Facebook, Instagram @swkradioshow

Our show thrives with the support of our Patreon community! If you're able, we invite you to be part of the journey and help sustain the laughter for episodes to come.
Support Our SWK Pod: patreon.com/SWKRadioShow

AND... Hat on a Hat Creative, the mastermind team behind SWK, is now registered as a non-profit under the 501(c) classification for the arts, reporting under EIN 99-3747555, SC ID: P91719. If you are interested in helping our dream continue to be a reality, we would love your support. Learn more at https://www.shortwavekitsch.com/hatonahatcreative !
Why are the residents of Minneapolis and Minnesota state officials opposing the Trump administration's deployment of Immigration and Customs Enforcement (ICE) agents into their city and their communities? Learn more about what the Trump administration's immigration policy has meant across the United States.
Back to five hosts for this episode, les cast codeurs start the year with a big show full of news and in-depth articles. AI of course and its impact on practices, Mockito turning a page, some CSS (yes, really), the (non-)mapping of REST APIs to MCP, and a pile of tools for you.

Recorded January 9, 2026. Download the episode: LesCastCodeurs-Episode-335.mp3, or watch the video on YouTube.

News

Languages

Will 2026 be the year of Java in the terminal? (word is it just might be…) https://xam.dk/blog/lets-make-2026-the-year-of-java-in-the-terminal/
- 2026: the year of Java in the terminal, catching up with Python, Rust, Go, and Node.js.
- Java is underrated for CLI applications and TUIs (terminal user interfaces) despite its capabilities.
- The old excuses (slow startup, heavy tooling, verbosity, complex distribution) are obsolete thanks to recent advances:
- GraalVM Native Image for millisecond startup.
- JBang for easily running Java scripts (single files, with dependencies) and JARs.
- JReleaser for automating multi-platform distribution (Homebrew, SDKMAN, Docker, native images).
- Project Loom for easy concurrency with virtual threads.
- PicoCLI for argument handling.
- The potential goes beyond scripts: building complete, good-looking TUIs (dashboards, file managers, AI assistants).

Ruby 4.0.0 released https://www.ruby-lang.org/en/news/2025/12/25/ruby-4-0-0-released/
- Ruby Box (experimental): a new feature for isolating definitions (classes, modules, monkey patches) in separate boxes to avoid global conflicts.
- ZJIT: a new next-generation JIT compiler written in Rust, aiming to eventually surpass YJIT (currently experimental).
- Ractor improvements: Ractor::Port for better communication between Ractors, and optimized internal structures to reduce global lock contention.
- Syntax changes: logical operators (||, &&, and, or) at the start of a line now continue the previous line, making the "fluent" style easier.
- Core classes: Set and Pathname become built-in (core) classes instead of living in the standard library.
- Better diagnostics: ArgumentError now shows code snippets for both the caller AND the method definition.
- Performance: Class#new optimized, faster instance-variable access, and significant garbage collector (GC) improvements.
- Cleanup: removal of deprecated behaviors (such as spawning processes via IO.open with |) and an update to Unicode 17.0.
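A taste of the "Java in the terminal" argument above: since Java 11 (JEP 330), a single .java file can be run directly with `java Greet.java`, or via JBang, with no build tool at all. The file name and behavior below are illustrative, not taken from the article; a real CLI would likely use PicoCLI for argument parsing, as the post suggests.

```java
// Greet.java - a minimal single-file CLI, runnable directly with
// `java Greet.java <name>` (Java 11+), no build step needed.
// Args are parsed by hand here to stay dependency-free; PicoCLI would
// handle options, help text, and validation in a real tool.
public class Greet {
    static String greeting(String[] args) {
        String name = args.length > 0 ? args[0] : "world";
        return "Hello, " + name + "!";
    }

    public static void main(String[] args) {
        System.out.println(greeting(args));
    }
}
```

Combined with GraalVM Native Image, the same source compiles to a binary with millisecond startup, which is the crux of the article's pitch.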
Libraries

An introduction to building a multi-tenant app with Quarkus and nip.io (http://nip.io) https://www.the-main-thread.com/p/quarkus-multi-tenant-api-nipio-tutorial
- Building a multi-tenant REST API in Quarkus with isolation by subdomain.
- Uses nip.io for automatic DNS resolution without any local configuration.
- The tenant is extracted from the HTTP Host header via a JAX-RS filter.
- Tenant context managed with CDI in request scope for data isolation.
- An application service holds tenant-specific data in a concurrent Map.
- An HTML/JS web interface to view and add data per tenant.
- CORS configuration is needed for local development.
- The pattern acme.127-0-0-1.nip.io resolves automatically to localhost.
- Complete code available on GitHub with curl examples and browser tests.
- An ideal starting point for SaaS prototyping and multi-tenant testing.

Hibernate 7.2 with a few interesting improvements https://docs.hibernate.org/orm/7.2/whats-new/%7Bhtml-meta-canonical-link%7D
- Read-only replica (experimental): creates two session factories and swaps at the JDBC level if the driver supports it, or via a custom mechanism otherwise. A read-only child StatelessSession is opened, sharing the transactional context.
- The Hibernate vector module adds binary, float16, and sparse vectors.
- The SchemaManager can resynchronize sequences against table data.
- Regular expressions in HQL, alongside like.

A new version of Hibernate with Panache for Quarkus https://quarkus.io/blog/hibernate-panache-next/
- A new experimental extension that unifies Hibernate ORM with Panache and Hibernate Reactive with Panache.
- Entities can now work in blocking or reactive mode without changing their base type.
- Support for stateless sessions (StatelessSession) in addition to traditional managed entities.
- Jakarta Data integration for type-safe, compile-time-checked queries.
- Operations are defined in nested repositories rather than static methods.
- Multiple repositories can be defined for different operation modes on the same entity.
- The different modes (blocking/reactive, managed/stateless) are reached via supertype methods.
- @Find and @HQL annotations supported for generating type-safe queries.
- Repository access via injection or via the generated metamodel.
- The extension is available in the main branch; feedback welcome on Zulip or GitHub.

Spring Shell 4.0.0 GA released - https://spring.io/blog/2025/12/30/spring-shell-4-0-0-ga-released
- Final release of Spring Shell 4.0.0, available on Maven Central.
- Compatible with the latest Spring Framework and Spring Boot versions.
- Revised command model to simplify building interactive CLI applications.
- jSpecify integration to improve safety against NullPointerExceptions.
- A more modular architecture allowing better customization and extension.
- Documentation and examples fully updated to ease onboarding.
- A migration guide to v4 is available on the project wiki.
- Bug fixes improving stability and reliability.
- Lets you build standalone Java applications runnable with java -jar or as GraalVM native binaries.
- An opinionated approach to CLI development that stays flexible for specific needs.

A new release of the library that implements gatherers beyond those in the JDK https://github.com/tginsberg/gatherers4j/releases/tag/v0.13.0
- gatherers4j v0.13.0.
- New gatherers: uniquelyOccurringBy(), moving/runningMedian(), moving/runningMax/Min().
- Change: "moving" gatherers now include partial values by default (use excludePartialValues() to turn this off).

LangChain4j 1.10.0 https://github.com/langchain4j/langchain4j/releases/tag/1.10.0
- Introduces a model catalog for Anthropic, Gemini, OpenAI, and Mistral.
- Adds observability and monitoring capabilities for agents.
- Structured outputs, advanced tools, and PDF analysis via URL for Anthropic.
- Transcription services supported for OpenAI.
- Chat configuration parameters can now be passed as method arguments.
- A new moderation guardrail for incoming messages.
- Reasoning content supported for models.
- Hybrid search introduced.
- MCP client improvements.

Mockito's lead steps down after 10 years https://github.com/mockito/mockito/issues/3777
- Tim van der Lippe, Mockito's main maintainer, has announced he will step down in March 2026, capping a decade of contributions to the project.
- One main reason is burnout from recent JVM changes (JVM 22+) around agents, which impose heavy technical constraints with no simple alternative offered by the JDK maintainers.
- He points to the lack of support for, and the pressure placed on, open source volunteers during these major technology transitions.
- The growing complexity of supporting Kotlin, which uses the JVM in its own particular way, makes the Mockito codebase harder to maintain and less pleasant to evolve, in his view.
- He has lost the joy of it and now prefers to spend his free time on other projects, such as Servo, a web engine written in Rust.
- A transition period is planned until March to hand maintenance over to new contributors.

Infrastructure

Kubernetes' first benefit is not scaling - https://mcorbin.fr/posts/2025-12-29-kubernetes-scale/
- Before Kubernetes, running applications in production required many complex tools (Ansible, Puppet, Chef) and a lot of manual configuration.
- Load balancing used HAProxy and Keepalived in active/passive mode, with manual configuration updates on every instance change.
- Service discovery and rollouts were orchestrated by hand, instance by instance, with no automated reconciliation.
- Each stack (Java, Python, Ruby) had its own deployment method, with no standardization (rpm, deb, tar.gz, jar).
- Resource management was manual, often with one application per machine, wasting capacity and complicating maintenance.
- Kubernetes standardizes everything into a few YAML resources (Deployment, Service, Ingress, ConfigMap, Secret) in a simple declarative format.
- All the critical features are built in: service discovery, load balancing, scaling, storage, firewalling, logging, fault tolerance.
- The hundreds of shell scripts and Ansible playbooks maintained before added up to more complexity than Kubernetes itself.
- Kubernetes becomes worthwhile as soon as you start rebuilding these features by hand, which happens very quickly.
- The technology is flexible and can run modern applications as well as legacy monoliths with specific constraints.

Mole
https://github.com/tw93/Mole
- An all-in-one command-line (CLI) tool to clean and optimize macOS.
- Combines the features of popular software such as CleanMyMac, AppCleaner, DaisyDisk, and iStat Menus.
- Deep-scans and removes caches, log files, and browser leftovers.
- A smart uninstaller that cleanly removes applications and their hidden files (Launch Agents, preferences).
- An interactive disk-space analyzer to visualize file usage and manage large documents.
- A real-time dashboard (mo status) to monitor CPU, GPU, memory, and network.
- A developer-specific purge for deleting build artifacts (node_modules, target, etc.).
- Can be integrated with Raycast or Alfred to launch commands quickly.
- Simple installation via Homebrew or a curl script.

Secure Docker images for every developer https://www.docker.com/blog/docker-hardened-images-for-every-developer/
- Docker is making its Hardened Images (DHI) free and open source (Apache 2.0 license) for all developers.
- These images are designed to be minimal, production-ready, and secure from the start, to counter the explosion of software supply chain attacks.
- They build on familiar bases such as Alpine and Debian, ensuring high compatibility and easy migration.
- Each image includes a complete, verifiable SBOM (Software Bill of Materials) and SLSA level 3 provenance for full transparency.
- Using these images can drastically reduce the number of vulnerabilities (CVEs) and image size (up to 95% smaller).
- Docker is extending this secure approach to Helm charts and MCP servers (Mongo, Grafana, GitHub, etc.).
- Commercial offerings (DHI Enterprise) remain available for specific needs: critical fixes within 7 days, FIPS/FedRAMP support, or extended lifecycle support (ELS).
- An experimental Docker AI assistant can analyze existing containers and recommend adopting the corresponding hardened versions.
- The initiative is backed by major partners such as Google, MongoDB, Snyk, and the CNCF.

Web

Masonry is coming to the CSS specification and is starting to be implemented by browsers https://webkit.org/blog/17660/introducing-css-grid-lanes/
- Lets HTML elements flow into columns one after another: first across the first row, and once the first row is full, each next element goes into whichever column lets it sit highest, and so on.
- After middleware plumbing, front-end masonry :laughing:

Data and Artificial Intelligence

Don't map REST APIs 1:1 to MCP https://nordicapis.com/why-mcp-shouldnt-wrap-an-api-one-to-one/
- The problem: wrapping an API as-is in the MCP (Model Context Protocol) is an anti-pattern.
- MCP's purpose: designed for AI agents, it should be an intent interface, not an API mirror. Agents understand tasks, not complex API mechanics (authentication, pagination, orchestration).
- Consequences of one-to-one mapping: confused agents, errors, hallucinations; difficulty handling complex orchestrations (several calls for a single action); exposure of the API's weaknesses (heavy schemas, obsolete endpoints); extra maintenance whenever the API changes.
- Better approach: build MCP tools like SDKs for agents, encapsulating the logic needed to complete a specific task.
- Recommended practices: design around user intents/actions (e.g. "create a project", "summarize a document"); group calls into single workflows or actions; use natural language for definitions and names; limit the exposed API surface for security and clarity; apply strict input/output schemas to guide the agent and reduce ambiguity.

Agents in production with AWS - https://blog.ippon.fr/2025/12/22/des-agents-en-production-avec-aws/
- AWS re:Invent 2025 put generative AI and AI agents massively front and center.
- An AI agent combines an LLM, a calling loop, and invocable tools.
- The Strands Agents SDK eases prototyping with built-in ReAct loops and memory management.
- Managed MLflow tracks experiments and defines performance metrics.
- Nova Forge optimizes models by retraining on specific data to cut cost and latency.
- Bedrock Agent Core industrializes deployment with a serverless, auto-scaling runtime.
- Agent Core offers nine pillars, including observability, authentication, a code interpreter, and a managed browser.
- Anthropic's MCP protocol standardizes how tools are provided to agents.
- SageMaker AI and Bedrock centralize access to closed-source and open source models through a single API.
- AWS is betting on chatbots evolving into agentic systems optimized with more frugal models.

Debezium 3.4 brings several interesting improvements https://debezium.io/blog/2025/12/16/debezium-3-4-final-released/
- Fixed the Oracle low-watermark calculation issue that caused performance losses.
- Fixed heartbeat event emission in the Oracle connector with CTE queries.
- Improved logging for understanding active transactions in the Oracle connector.
- Memory guards to protect against very large database schemas.
- Support for transforming geometry coordinates for better handling of spatial data.
- A Quarkus Dev Services extension that automatically starts a database and Debezium in dev mode.
- OpenLineage integration to trace data lineage and follow data flows through pipelines.
- Compatibility tested with Kafka Connect 4.1 and Kafka brokers 4.1.

Infinispan 16.0.4 and .5 https://infinispan.org/blog/2025/12/17/infinispan-16-0-4
- Spring Boot 4 and Spring 7 supported.
- Metrics evolutions.
- Two serialization bugs fixed.

Building a research agent in Java with the Interactions API https://glaforge.dev/posts/2026/01/03/building-a-research-assistant-with-the-interactions-api-in-java/
- A Java AI research assistant (Gemini Interactions API), testing the SDK implemented by Guillaume.
- A four-phase workflow: planning (Gemini Flash + Google Search); research ("Deep Research" model, as a background task); synthesis (Gemini Pro, executive report); infographic (Nano Banana Pro, from the synthesis).
- Interactions API: server-side state management, background tasks, multimodal responses (images).
- Highlight: the API's state management (versus stateless LLM calls).
- Validation: the Java SDK proved effective for complex cases.

Stephan Janssen (the father of Devoxx) built an MCP (Model Context Protocol) server based on LSP (Language Server Protocol) so coding assistants can analyze code by actually understanding it rather than grepping https://github.com/stephanj/LSP4J-MCP
- The problem: AI assistants often navigate code with text search (grep-style), which lacks semantic context, generates noise (false positives), and burns tokens needlessly.
- The LSP4J-MCP solution: a standalone approach that wraps the Eclipse language server (JDTLS) behind the MCP protocol.
- Main advantage: deep semantic understanding of Java code (types, hierarchies, references) without opening a heavyweight IDE like IntelliJ.
- Comparing the approaches: AST: too lightweight (no cross-file understanding).
IntelliJ MCP : Puissant mais exige que l'IDE soit ouvert (gourmand en ressources). LSP4J-MCP : Le meilleur des deux mondes pour les workflows en terminal, à distance (SSH) ou CI/CD. Fonctionnalités clés : Expose 5 outils pour l'IA (find_symbols, find_references, find_definition, document_symbols, find_interfaces_with_method). Résultats : Une réduction de 100x des tokens utilisés pour la navigation et une précision accrue (distinction des surcharges, des scopes, etc.). Disponibilité : Le projet est open source et disponible sur GitHub pour intégration immédiate (ex: avec Claude Code, Gemini CLI, etc). A noter l'ajout dans claude code 2.0.74 d'un tool pour supporter LSP ( https://github.com/anthropics/claude-code/blob/main/CHANGELOG.md#2074 ) Awesome (GitHub) Copilot https://github.com/github/awesome-copilot Une collection communautaire d'instructions, de prompts et de configurations pour optimiser l'utilisation de GitHub Copilot. Propose des "Agents" spécialisés qui s'intègrent aux serveurs MCP pour améliorer les flux de travail spécifiques. Inclut des prompts ciblés pour la génération de code, la documentation et la résolution de problèmes complexes. Fournit des instructions détaillées sur les standards de codage et les meilleures pratiques applicables à divers frameworks. Propose des "Skills" (compétences) sous forme de dossiers contenant des ressources pour des tâches techniques spécialisées. (les skills sont dispo dans copilot depuis un mois : https://github.blog/changelog/2025-12-18-github-copilot-now-supports-agent-skills/ ) Permet une installation facile via un serveur MCP dédié, compatible avec VS Code et Visual Studio. Encourage la contribution communautaire pour enrichir les bibliothèques de prompts et d'agents. Aide à augmenter la productivité en offrant des solutions pré-configurées pour de nombreux langages et domaines. Garanti par une licence MIT et maintenu activement par des contributeurs du monde entier. 
AI and productivity: a 2025 year in review (Laura Tacho - DX) https://newsletter.getdx.com/p/ai-and-productivity-year-in-review?aid=recNfypKAanQrKszT
In 2025, AI-assisted engineering became the norm: about 90% of developers use AI tools monthly, and more than 40% daily.
Researchers (Microsoft, Google, GitHub) stress that lines of code (LOC) remain a poor measure of impact, since AI generates a lot of code without necessarily delivering more business value.
While AI improves individual efficiency, it may hurt collaboration in the long run, as developers spend more time "talking" to the AI than to their colleagues.
The developer's identity is shifting from "code producer" to a "director" role: delegating, validating, and exercising strategic judgment.
AI could accelerate junior developers' growth by forcing them to manage projects and delegate earlier, acting as an "accelerator" rather than making them obsolete.
The emphasis is on creativity rather than mere automation, to reimagine how we work and get more impactful results.
Success in 2026 will depend on companies' ability to target the real bottlenecks (technical debt, documentation, compliance) rather than simply trying out every new AI model.
The newsletter warns that press headlines often oversimplify AI research, sometimes hiding the crucial nuances of the underlying studies.
In a post on Twitter, a developer describes his advanced use of Claude Code for development: sub-agents, slash commands, how to optimize context, and more.
https://x.com/AureaLibe/status/2008958120878330329?s=20

Tooling
IntelliJ IDEA, thread dumps and Project Loom (virtual threads) - https://blog.jetbrains.com/idea/2025/12/thread-dumps-and-project-loom-virtual-threads/
Java virtual threads improve hardware utilization for parallel I/O operations with few code changes.
A server can now handle millions of threads instead of a few hundred.
Existing tools struggle to display and analyze millions of simultaneous threads.
Asynchronous debugging is complex because the scheduler and the worker run in different threads.
Thread dumps remain essential for diagnosing deadlocks, blocked UIs, and thread leaks.
Netflix found a virtual-thread deadlock by analyzing a heap dump; the bug was fixed in Java 25. But that was quite the high-wire act.
IntelliJ IDEA supports virtual threads natively from day one, including display of acquired locks.
IntelliJ IDEA can open thread dumps produced by other tools such as jcmd.
The support also extends to Kotlin coroutines, in addition to virtual threads.

Some news about IntelliJ IDEA 2025.3 https://blog.jetbrains.com/idea/2025/12/intellij-idea-2025-3/
A unified distribution bundling more free features.
Improved command completion in the IDE.
New features in the Spring debugger.
The Islands theme becomes the default.
Full support for Spring Boot 4 and Spring Framework 7.
Java 25 compatibility.
Support for Spring Data JDBC and Vitest 4.
Native support for Junie and Claude Agent for AI.
A transparent AI quota, with a Bring Your Own Key option on the way.
Stability, performance, and user-experience fixes.

Lots of small online tools for developers https://blgardner.github.io/prism.tools/
Password, CSS gradient, and QR code generation; Base64 and JWT encoding/decoding; JSON formatting; and more.
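Several of these one-off browser tools have command-line equivalents; a minimal sketch of the Base64 round trip using coreutils:

```shell
# Base64 encode/decode from the shell, mirroring the online tool.
printf 'hello' | base64        # prints aGVsbG8=
printf 'aGVsbG8=' | base64 -d  # prints hello
```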
resumectl - your résumé as code https://juhnny5.github.io/resumectl/
A command-line tool (CLI) written in Go that generates a résumé from a YAML file.
Exports to several formats: PDF, HTML, or rendered directly in the terminal.
Ships 5 built-in themes (Modern, Classic, Minimal, Elegant, Tech), customizable with specific colors.
An init feature (resumectl init) can automatically import data from LinkedIn and GitHub (most-starred projects).
Supports photos, with black-and-white filter and shape (round/square) options.
Includes a server mode (resumectl serve) to preview changes live in a local browser.
Runs as a single binary with no complex external dependencies for the templates.

mactop - a "top"-style monitor for Apple Silicon https://github.com/metaspartan/mactop
A command-line monitoring tool (TUI) designed specifically for Apple Silicon chips (M1, M2, M3, M4, M5).
Tracks real-time usage of the CPU (E-cores and P-cores), GPU, and ANE (Neural Engine).
Shows the power draw (wattage) of the system, CPU, GPU, and DRAM.
Reports SoC temperatures, GPU frequencies, and overall thermal state.
Monitors RAM and swap usage, plus network and disk (I/O) activity.
Offers 10 different layouts and several customizable color themes.
Requires no sudo, since it relies on Apple's native APIs (SMC, IOReport, IOKit).
Includes a detailed process list (htop-like) with the ability to kill processes straight from the interface.
Offers a headless mode for exporting metrics as JSON, and an optional Prometheus server.
Written in Go, with CGO and Objective-C components.
Goodbye direnv, hello mise https://codeka.io/2025/12/19/adieu-direnv-bonjour-mise/
The author replaces his usual tools (direnv, asdf, task, just) with a single versatile tool written in Rust: mise.
mise plays three main roles: package manager (languages and tools), environment-variable manager, and task runner.
Unlike direnv, it can manage aliases and uses a structured configuration file (mise.toml) rather than shell scripting.
Configuration is hierarchical, so settings can be overridden per directory, with a "trust" system for security.
One highlighted killer feature is secrets management: mise integrates with age to encrypt secrets (via SSH keys) directly in the configuration file.
The tool supports a vast list of languages and tools through an internal registry and plugins (compatible with the asdf ecosystem).
It streamlines the development workflow by bringing tool installation and task automation together in a single file.
The author concludes on the tool's power, flexibility, and excellent performance after a few hours of testing.

Claude Code v2.1.0 https://github.com/anthropics/claude-code/blob/main/CHANGELOG.md#210
Hot reloading of skills: changes to skills in ~/.claude/skills now apply instantly, without restarting the session.
Sub-agents and forks: support for running skills and slash commands in a forked sub-agent context via context: fork.
Language settings: a new language setting configures the default response language (e.g. language: "french").
Terminal improvements: Shift+Enter now works natively in several terminals (iTerm2, WezTerm, Ghostty, Kitty) without manual configuration.
Security and bug fixes: fixed an issue where sensitive data (API keys, OAuth tokens) could show up in debug logs.
New slash commands: /teleport and /remote-env added for claude.ai subscribers to manage remote sessions.
Plan mode: the /plan shortcut enables plan mode straight from the prompt, and the permission prompt on entering this mode has been removed.
Vim and navigation: many Vim motions added (text objects, f/F/t/T motion repeats, indentation, etc.).
Performance: faster startup and terminal rendering for Unicode/emoji characters.
Gitignore handling: the respectGitignore setting in settings.json controls the behavior of the @-mention file picker.

Methodology
200 production deployments a day, even on Fridays: lessons learned https://mcorbin.fr/posts/2025-03-21-deploy-200/
Frequent deployment, including on Fridays, is a sign of technical maturity and raises overall productivity.
Technical excellence is an indispensable strategic asset for shipping quality products quickly.
A pragmatic service-oriented architecture (SOA) enables independent deployments and reduces cognitive load.
Service isolation is crucial: a developer must be able to test their service locally without depending on the whole infrastructure.
Automation with Kubernetes and a GitOps approach using ArgoCD enables continuous, safe deployments.
Feature flags and a solid permission system decouple technical deployment from feature activation for users.
Developer autonomy is reinforced by self-service tooling (an in-house CLI) for managing infrastructure and diagnosing incidents without bottlenecks.
An observability culture built in from the design stage makes it possible to detect and react quickly to anomalies in production.
Accepting failure as inevitable leads to more resilient systems that can recover automatically.

"Vibe Coding" vs "Prompt Engineering": AI and the future of software development https://www.romenrg.com/blog/2025/12/25/vibe-coding-vs-prompt-engineering-ai-and-the-future-of-software-development/
In 2025, AI went from experiment to essential infrastructure for software development.
AI does not replace engineers; it acts as an amplifier of their skills, their judgment, and the quality of their thinking.
A distinction between "vibe coding" (fast, intuitive, ideal for prototypes) and "prompt engineering" (deliberate, constrained, necessary for maintainable systems).
The crucial importance of context ("context engineering"): AI becomes truly powerful when connected to real systems (GitHub, Jira, etc.) via protocols such as MCP.
Using specialized agents (RFC writing, code review, architecture) rather than generic models yields better results.
The rise of the "Technical Product Manager" engineer who, thanks to AI, can single-handedly do the work of a small team, provided they master the technical fundamentals.
The major risk: AI lets you move very fast in the wrong direction when human judgment and experience are lacking.
The overall bar is rising: solid technical foundations matter more than ever to avoid rapidly accumulating technical debt.

A code review party of one (Kent Beck)!
https://tidyfirst.substack.com/p/party-of-one-for-code-review?r=64ov3&utm_campaign=post&utm_medium=web&triedRedirect=true
Traditional code review, inherited from IBM's formal inspections, is running out of steam: it has become too slow and asynchronous for the pace of modern development.
With the arrival of AI ("the genie"), code is being produced faster than humans can review it, creating a major bottleneck.
Code review must evolve toward two new priority goals: a "sanity check" that the AI actually did what it was asked, and keeping the structural drift of the codebase under control.
Maintaining a healthy structure is crucial not only for future human developers, but also so the AI can keep understanding and modifying the code effectively without losing context.
Kent Beck is experimenting with automated tools (such as CodeRabbit) to get summaries and architecture diagrams, to keep a global awareness of rapid changes.
Even if automated tools help, pair programming remains irreplaceable for the richness of the exchange and the beneficial social pressure it puts on one's thinking.
Solo code review is not an end in itself, but a necessary adaptation when working alone with augmented code-generation tools.

Law, society and organization
Lego launches Lego Smart Play, with Bricks, Smart Tags and Smart Figurines for new interactive builds with Lego https://www.lego.com/fr-fr/smart-play
LEGO SMART Play: technology that responds to children's play. Three key elements:
SMART Brick: a 2x4 LEGO "brain" brick. Accelerometer, reactive lights, color detector, sound synthesizer. Reacts to movement (holding, turning, tapping).
SMART Tags: small smart pieces. They tell the SMART Brick its role (e.g. helicopter, car) and which sounds to produce.
They trigger sounds, mini-games, and secret missions.
SMART Minifigures: activated near a SMART Brick. They reveal unique personalities (sounds, moods, reactions) through the SMART Brick. They encourage imagination.
How it works: the SMART Brick detects SMART Tags and SMART Minifigures and reacts to movement with dynamic lights and sounds.
Compatibility: assembles with classic LEGO bricks.
Goal: create interactive, unique, and unlimited play experiences.

Conferences
The list of conferences comes from Developers Conferences Agenda/List by Aurélie Vache and contributors:
14-17 January 2026: SnowCamp 2026 - Grenoble (France)
22 January 2026: DevCon #26: security / post-quantum / hacking - Paris (France)
28 January 2026: Software Heritage Symposium - Paris (France)
29-31 January 2026: Epitech Summit 2026 - Paris - Paris (France)
2-5 February 2026: Epitech Summit 2026 - Moulins - Moulins (France)
3 February 2026: Cloud Native Days France 2026 - Paris (France)
3-4 February 2026: Epitech Summit 2026 - Lille - Lille (France)
3-4 February 2026: Epitech Summit 2026 - Mulhouse - Mulhouse (France)
3-4 February 2026: Epitech Summit 2026 - Nancy - Nancy (France)
3-4 February 2026: Epitech Summit 2026 - Nantes - Nantes (France)
3-4 February 2026: Epitech Summit 2026 - Marseille - Marseille (France)
3-4 February 2026: Epitech Summit 2026 - Rennes - Rennes (France)
3-4 February 2026: Epitech Summit 2026 - Montpellier - Montpellier (France)
3-4 February 2026: Epitech Summit 2026 - Strasbourg - Strasbourg (France)
3-4 February 2026: Epitech Summit 2026 - Toulouse - Toulouse (France)
4-5 February 2026: Epitech Summit 2026 - Bordeaux - Bordeaux (France)
4-5 February 2026: Epitech Summit 2026 - Lyon - Lyon (France)
4-6 February 2026: Epitech Summit 2026 - Nice - Nice (France)
5 February 2026: Web Days Convention - Aix-en-Provence (France)
12 February 2026: Strasbourg Craft #1 - Strasbourg (France)
12-13 February 2026: Touraine Tech #26 - Tours (France)
19 February 2026: ObservabilityCON on the Road - Paris (France)
6 March 2026: WordCamp Nice 2026 - Nice (France)
18-19 March 2026: Agile Niort 2026 - Niort (France)
20 March 2026: Atlantique Day 2026 - Nantes (France)
26 March 2026: Data Days Lille - Lille (France)
26-27 March 2026: SymfonyLive Paris 2026 - Paris (France)
26-27 March 2026: REACT PARIS - Paris (France)
27-29 March 2026: Shift - Nantes (France)
31 March 2026: ParisTestConf - Paris (France)
1 April 2026: AWS Summit Paris - Paris (France)
2 April 2026: Pragma Cannes 2026 - Cannes (France)
9-10 April 2026: AndroidMakers by droidcon - Paris (France)
16-17 April 2026: MiXiT 2026 - Lyon (France)
22-24 April 2026: Devoxx France 2026 - Paris (France)
23-25 April 2026: Devoxx Greece - Athens (Greece)
24-25 April 2026: Faiseuses du Web 5 - Dinan (France)
6-7 May 2026: Devoxx UK 2026 - London (UK)
22 May 2026: AFUP Day 2026 Lille - Lille (France)
22 May 2026: AFUP Day 2026 Paris - Paris (France)
22 May 2026: AFUP Day 2026 Bordeaux - Bordeaux (France)
22 May 2026: AFUP Day 2026 Lyon - Lyon (France)
29 May 2026: NG Baguette Conf 2026 - Paris (France)
5 June 2026: TechReady - Nantes (France)
5 June 2026: Fork it! - Rouen - Rouen (France)
6 June 2026: Polycloud - Montpellier (France)
11-12 June 2026: DevQuest Niort - Niort (France)
11-12 June 2026: DevLille 2026 - Lille (France)
12 June 2026: Tech F'Est 2026 - Nancy (France)
17-19 June 2026: Devoxx Poland - Krakow (Poland)
17-20 June 2026: VivaTech - Paris (France)
2 July 2026: Azur Tech Summer 2026 - Valbonne (France)
2-3 July 2026: Sunny Tech - Montpellier (France)
3 July 2026: Agile Lyon 2026 - Lyon (France)
2 August 2026: 4th Tech Summit on Artificial Intelligence & Robotics - Paris (France)
4 September 2026: JUG Summer Camp 2026 - La Rochelle (France)
17-18 September 2026: API Platform Conference 2026 - Lille (France)
24 September 2026: PlatformCon Live Day Paris 2026 - Paris (France)
1 October 2026: WAX 2026 - Marseille (France)
1-2 October 2026: Volcamp - Clermont-Ferrand (France)
5-9 October 2026: Devoxx Belgium - Antwerp (Belgium)

Contact us
To react to this episode, come discuss on the Google group https://groups.google.com/group/lescastcodeurs
Contact us via X/Twitter https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com
Record a crowdcast or ask a crowdquestion
Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs
All episodes and all the info at https://lescastcodeurs.com/
stopGOstop » sound collage – field recording – sound art – john wanzel
Vim, Vigor and Vitality (or Arrangements Are in Hand or is it Self-Hypnosis) drifts between assertion and reassurance. A slow pulse moves underneath the piece, joined by the low, sustained presence of a cello. Voices surface in fragments, pause, and …
There is a hypothesis that abusing LLMs in general, and vibe coding in particular, makes programmers dumber. On the other hand, this hot take sounds a lot like Vim users clucking at IDE users. So where does the truth lie?
Thanks to everyone who listens to us. We look forward to your comments.
Music from the episode:
- https://artists.landr.com/056870627229
- https://t.me/angry_programmer_screams
Full playlist of the course "Kubernetes for DotNet developers": https://www.youtube.com/playlist?list=PLbxr_aGL4q3SrrmOzzdBBsdeQ0YVR3Fc7
Free open course "Rust for DotNet developers": https://www.youtube.com/playlist?list=PLbxr_aGL4q3S2iE00WFPNTzKAARURZW1Z
Shownotes:
00:00:00 Intro
00:06:15 What makes people dumber?
00:09:00 If the LLM didn't work for you, the problem is you
00:15:35 Is it bad to generate tests with an LLM?
00:20:00 Terminal vibe coding
00:29:00 Finding APIs through an LLM
00:34:30 The human designs, the LLM codes
00:42:40 The motivation catastrophe
00:46:15 The gypsy hypnosis effect
00:51:20 Do we get dumber from searching through an LLM?
01:00:00 The LLM gets us hooked
Links:
- https://www.youtube.com/watch?v=COovfRQ9hRM : Our future
- https://www.linkedin.com/posts/nityan_we-all-know-vibe-coding-has-technical-debt-activity-7339687364216193025-nY2E : A study on AI making us dumber
- https://codeua.com/ai-coding-tools-can-reduce-productivity-study-results/ : AI Coding Tools Can Reduce Productivity: Study Results
Video: https://youtube.com/live/HU7m31-NZmM
Listen to all episodes: https://dotnetmore.mave.digital
YouTube: https://www.youtube.com/playlist?list=PLbxr_aGL4q3R6kfpa7Q8biS11T56cNMf5
Twitch: https://www.twitch.tv/dotnetmore
Discuss:
- Telegram: https://t.me/dotnetmore_chat
Follow the news:
- Twitter: https://twitter.com/dotnetmore
- Telegram channel: https://t.me/dotnetmore
Copyright: https://creativecommons.org/licenses/by-sa/4.0/
This week on Río de la Vida we present a historic program: a great national debate with representatives of four of the most influential intensives in Spain. Taking part:
This show has been flagged as Clean by the host.

Setting up Linux Mint with Custom LVM on LUKS

Overview
The current Linux Mint installer doesn't support custom partitions when setting up a new machine with LUKS encryption using LVM. I prefer having a separate partition for my home directory and a backup partition for Timeshift, so that reinstalling or fixing issues won't overwrite my home directory. I found several approaches to achieve this. One method involves setting up partitions first and then using the installer to select them, but this requires extensive post-installation configuration to get boot working with the encrypted drive. I discovered this blog which explains how to repartition your drive after installation. Combined with my guide on setting up hibernation, I created this documentation to help remember how to install a fresh copy of Linux Mint with LVM and LUKS.

Tested on: Linux Mint 22 Cinnamon

Partition Layout
For this guide, I'm working with a 1TB drive that will be split into the following logical volumes:
Root - 100GB (system files and applications)
Swap - 32GB (for hibernation support)
Home - 700GB (user files and documents)
Backup - 100GB (Timeshift snapshots)
Unallocated - ~68GB (reserved for future expansion)
This setup ensures that system snapshots and user data remain separate, making system recovery much easier.

Installation Guide

Step 1: Initial Linux Mint Installation
Start the Linux Mint installation process as normal:
Boot from your Linux Mint installation media.
Follow the installation wizard (language, keyboard layout, etc.).
When you reach the Installation type screen:
Select "Erase disk and install Linux Mint"
Click "Advanced features"
Enable both options:
✓ Use LVM with the new Linux Mint installation
✓ Encrypt the new Linux Mint installation for security
Click Continue.
Enter a strong encryption password when prompted.
Complete the rest of the installation (timezone, user account, etc.).
When installation finishes, do NOT click "Restart Now" - we'll repartition first.
Important: Do NOT reboot after installation completes. We need to repartition before the first boot.

Step 2: Access Root Terminal
After installation finishes, open a terminal and switch to root:

sudo -i

This gives you the administrative privileges needed for disk operations.

Step 3: Check Current Disk Layout
View your current partition structure:

lsblk -f

This displays your filesystem layout. You should see your encrypted volume group (typically vgmint) with a large root partition consuming most of the space.

Step 4: Resize Root Partition
Shrink the root partition from its default size (nearly the full disk) to 100GB:

lvresize -L 100G --resizefs vgmint/root

What this does:
-L 100G sets the logical volume size to exactly 100GB
--resizefs automatically resizes the filesystem to match
This frees up ~900GB for our other partitions

Step 5: Resize Swap Partition
The default swap is usually small (a few GB). We need to increase it to 32GB for hibernation:

lvresize --verbose -L +32G /dev/mapper/vgmint-swap_1

What this does:
-L +32G adds 32GB to the current swap size
--verbose shows detailed progress information
This ensures enough swap space for RAM contents during hibernation

Note: For hibernation to work, swap should be at least equal to your RAM size. Adjust accordingly.
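Since the note above ties swap size to RAM, it can help to check installed memory before settling on the +32G figure; a small sketch (it reads /proc/meminfo, so Linux only):

```shell
# Print installed RAM in GiB; size swap to at least this much for hibernation.
# /proc/meminfo reports MemTotal in kB, hence the division by 1024*1024.
awk '/^MemTotal/ {printf "RAM: %.1f GiB\n", $2 / 1048576}' /proc/meminfo
```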
Step 6: Create Home Partition
Create a new logical volume for your home directory:

lvcreate -L 700G vgmint -n home

What this does:
-L 700G creates a 700GB logical volume
vgmint is the volume group name
-n home names the new volume "home"

Step 7: Create Backup Partition
Create a logical volume for Timeshift backups:

lvcreate -L 100G vgmint -n backup

What this does:
Creates a dedicated 100GB space for system snapshots
Keeps backups separate from user data
Prevents backups from filling up your home partition

Step 8: Format New Partitions
Format both new partitions with the ext4 filesystem:

mkfs.ext4 /dev/vgmint/backup
mkfs.ext4 /dev/vgmint/home

What this does:
Creates ext4 filesystems on both logical volumes
ext4 is the standard Linux filesystem with good performance and reliability

Step 9: Mount Partitions
Create mount points and mount your partitions:

mkdir /mnt/{root,home}
mount /dev/vgmint/root /mnt/root/
mount /dev/vgmint/home /mnt/home/

What this does:
Creates temporary directories to access the filesystems
Mounts root and home so we can configure them

Step 10: Move Home Directory Contents
Move the existing home directory contents from the root partition to the new home partition:

mv /mnt/root/home/* /mnt/home/

What this does:
Transfers all user files and directories from the old location to the new home partition
Preserves your user account settings and any files created during installation
Without this step, your home directory would be empty on first boot

Step 11: Update fstab
Add the home partition to the system's fstab file so it mounts automatically at boot:

echo "/dev/mapper/vgmint-home /home ext4 defaults 0 2" >> /mnt/root/etc/fstab

What this does:
Appends a mount entry to /etc/fstab
Ensures the /home partition mounts automatically at startup
The 0 2 values enable filesystem checks during boot

Step 12: Clean Up and Prepare for Reboot
Unmount the partitions and deactivate the volume group:

umount /mnt/root
umount /mnt/home
swapoff -a
lvchange -an vgmint

What this does:
Safely unmounts all mounted filesystems
Turns off swap
Deactivates the volume group to prevent conflicts
Ensures everything is properly closed before reboot

Step 13: Reboot
Now you can safely reboot into your new system:

reboot

Enter your LUKS encryption password at boot, then log in normally.

Verification
After rebooting, verify your partition setup:

lsblk -f
df -h

You should see:
Root (/) mounted with ~100GB
Home (/home) mounted with ~700GB
Swap available with 32GB
Backup partition ready for Timeshift configuration

Setting Up Timeshift
To complete your backup solution:
Install Timeshift (if not already installed): sudo apt install timeshift
Launch Timeshift and select RSYNC mode
Choose the backup partition as your snapshot location
Configure your backup schedule (daily, weekly, monthly)
Create your first snapshot

Additional Resources
Original blog post on LVM rearrangement
Setting up hibernation on Linux Mint

Conclusion
This setup gives you the best of both worlds: the security of full-disk encryption with LUKS, and the flexibility of custom LVM partitions. Your home directory and system backups are now isolated, making system recovery and upgrades much safer and more manageable.

Automating Your Linux Mint Setup After a Fresh Install

Setting up a fresh Linux Mint installation can be time-consuming, especially when you want to replicate your perfect development environment. This guide will show you how to automate the entire process using Ansible and configuration backups, so you can go from a fresh install to a fully configured system in minutes.

Why Automate Your Setup?
Whether you're setting up a new machine, recovering from a system failure, or just want to maintain consistency across multiple computers, automation offers several key benefits:
Time Savings: What normally takes hours can be done in minutes
Consistency: Identical setup across all your machines
Documentation: Your setup becomes self-documenting
Recovery: Quick recovery from system failures
Reproducibility: Never forget to install that one crucial tool again

Discovering Your Installed Applications
Before creating your automation setup, you need to identify which applications you've manually installed since the initial OS installation. This helps you build a complete picture of your custom environment.

Finding APT and .deb Packages
To see all manually installed packages (excluding those that came with the OS): comm -23
Nathan Sobo has spent nearly two decades pursuing one goal: building an IDE that combines the power of full-featured tools like JetBrains with the responsiveness of lightweight editors like Vim. After hitting the performance ceiling with the web-based Atom, he founded Zed and rebuilt from scratch in Rust with GPU-accelerated rendering. Now with 170,000 active developers, Zed is positioned at the intersection of human and AI collaboration. Nathan discusses the Agent Client Protocol that makes Zed "Switzerland" for different AI coding agents, and his vision for fine-grained edit tracking that enables permanent, contextual conversations anchored directly to code: a collaborative layer that asynchronous git-based workflows can't provide. Nathan argues that, despite terminal-based AI coding tools, visual interfaces for code aren't going anywhere, and that source code is a language designed for humans to read, not just for machines to execute. Hosted by Sonya Huang and Pat Grady, Sequoia Capital.
Why do you have to give up other things in order to chase your dream?
Why was Bangladesh's former prime minister sentenced to death, and can they be brought to face the sentence handed down by the Dhaka court?
DSO Overflow S5EP5
Saving $20,000 a year by self-hosting a map server, with Vimal Paliwal.
In this episode, Vimal Paliwal talks about how he led a migration project that saved his organisation $20,000 annually. He talks about how he overcame the challenges arising from compute and storage demands. Vimal discusses how he ensured cost-efficiency and security by implementing a fully serverless architecture using AWS CloudFront, Lambda authorisers, and WAF, integrating robust domain whitelisting and access control. We finish the conversation by reflecting on lessons learned from this project.
Vimal is part of the AWS Community Builders program, where he actively contributes to knowledge-sharing efforts across the cloud ecosystem by writing on real-world implementations and best practices. In addition, Vimal has spent several years as an AWS Authorized Instructor, during which he trained over 1,000 professionals.
Resources mentioned in this podcast:
Vimal's LinkedIn profile
Vimal's blog post about this project
Vimal's GitHub repo
DSO Overflow is a DevSecOps London Gathering production. Find the audio version on all good podcast sources like Spotify, Apple Podcast and Buzzsprout.
Your Hosts
Steve Giguere linkedin.com/in/stevegiguere
Glenn Wilson linkedin.com/in/glennwilson
Jessica Cregg linkedin.com/in/jessicacregg
Human rights experts are calling on the Australian federal government to create an Australian Human Rights Act, after more than 50 years without one, amid opposition and misinformation about such a law. Why do they want it enacted?
"Kom totaub tias tej neeg no (volunteers) yog cov pab kaws yus tej keeb kwm dhau los, pab yog cov pab tsim lub neej pem suab, thiaj xav kom hwm, qhia thiab muaj feem koom. Vim cov kev ras los ua ib tug pej xeem Australia txhais tsis tau tias yuav ua rau tsis paub tias yus yog leej twg los sis yuav ua rau yus plam yus tej cim thawj. Tsuas yog cov kev pab kom coj tau ntau cov kab lis kev cai thiab ntau tsev neeg los koom peb lub neej uas peb ris txiaj xwb," raws li Teresa Lane uas yog tus haus zos Logan City Council Division 2 hais.
This show has been flagged as Clean by the host.

Hello, this is your host, Archer72, for another episode of Hacker Public Radio. In this episode, I continue to fall for the AI trap. Here I was, minding my own business, when I was bothered by the Beeper app showing only a generic icon. Now, I'm not saying that Duck.ai is not useful, but be very careful what you ask for. It was probably a combination of the early morning and not reading completely through the AI suggestions, but I ended up losing all icons on the Gnome desktop except for a few, like Firefox. I won't include the problematic command so I don't trip up the listener, but it involved updating a desktop database. This in turn left a dash or blank where the icons should be. If that wasn't bad enough, it was suggested that I reset my Gnome settings, and nothing was as it seemed before. Things that I had taken for granted were not there. You forget what custom settings are there until mistakes like this are made.

So the short answer is that the icons directory on my Debian system should be located in .local/share/icons. Instead it was in a sub-directory, .local/share/icons/icons. Correcting the directory location solved everything, but I was still left to reset my custom Gnome keybindings.

• Swap Escape and Caps Lock key
I use this because I am a Vim user, and it feels more natural when I need to hit Escape to change modes. In Gnome, the setting is under Gnome Tweaks > Keyboard > Additional Layout Options > Swap Esc and Caps Lock Key.
As of this show's release, the current stable version of Debian is Trixie. Gnome Tweaks can be installed with sudo apt install gnome-tweaks on any Debian-based system.

• Compose key
• Compose key shortcuts
The Compose key is found at Settings > Keyboard > Compose Key. I selected the Menu key, because it is rarely used and can still be accessed from the track pad.
• Shortcut to open MPV with a clipboard URL from Youtube. This can be found in Settings > Keyboard > View and Customize Shortcuts > Custom Shortcuts: Shift+Ctrl+P. Code placed in /usr/local/bin/:
#!/bin/bash
## mpv-url
url=$(xsel -o -b)
echo "$url"
mpv "$url"
Now I can get back to what I started in the first place: creating a .desktop file for Beeper. I created a beeper-desktop.desktop file in ~/.local/share/applications with the following contents.
[Desktop Entry]
Name=Beeper Desktop
Exec=/home/mark/AppImages/Beeper-4.1.169.AppImage
Icon=/home/mark/.local/share/icons/beeper.png
Type=Application
Categories=Network;InstantMessaging;
Terminal=false
StartupWMClass=Beeper
The last part of the config file can be found by running xprop | grep WM_CLASS and clicking the application's window. Provide feedback on this episode.
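As a worked example of the .desktop steps above, here is a hedged sketch that writes the same entry with a heredoc and checks the StartupWMClass line. It writes to a scratch directory rather than ~/.local/share/applications, and the Exec/Icon paths are the episode's examples, so substitute your own.

```shell
#!/bin/bash
# Sketch: generate the Beeper .desktop entry from a script instead of hand-editing.
# Writes to a scratch dir for the demo; use "$HOME/.local/share/applications" for real.
APP_DIR="$(mktemp -d)"
cat > "$APP_DIR/beeper-desktop.desktop" <<'EOF'
[Desktop Entry]
Name=Beeper Desktop
Exec=/home/mark/AppImages/Beeper-4.1.169.AppImage
Icon=/home/mark/.local/share/icons/beeper.png
Type=Application
Categories=Network;InstantMessaging;
Terminal=false
StartupWMClass=Beeper
EOF

# The StartupWMClass value must match what `xprop | grep WM_CLASS` reports.
grep '^StartupWMClass=' "$APP_DIR/beeper-desktop.desktop"   # prints StartupWMClass=Beeper
```

The quoted 'EOF' delimiter keeps the heredoc literal, so nothing in the entry is subject to shell expansion.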
This show has been flagged as Clean by the host. Hello, this is Archer72 for Hacker Public Radio. In this episode, it seems that AI is a trap. This over-arching generalization is my opinion and may not reflect the opinions of HPR. The back story is that I was listening to the 26-hour Hacker Public Radio New Year's show, and the discussion came up in the Tech and Coffee Telegram channel. My resolution was to stop using ChatGPT as an AI chat bot, with the implication being not to use AI at all, but instead to use Duckduckgo and Brave Search. Probably less than a week or two later, I was trying to figure something out, and figured I'd take the easy way and use Claude.ai, which is actually pretty good if you have short and concise questions. I've found that if you have a long drawn-out question, it is better to do a Google or Duck search and document your results. I document in Vim, but you can use whatever works best for you. This way you can clearly show what works and what doesn't, and refer back to what you found later, instead of relying on an online service. And sometimes, depending on the AI bot you use, exporting is not very straightforward, with the exception of Duck.ai, which has a button for a quick share of a text file. Then you share it to yourself somewhere else, like in Proton Mail. Well… Over the past weekend, I was just making a quick upload button for my own server. The previous weekend, I got HTTPS working. This was just from following the guide in the Let's Encrypt documentation and the EFF Certbot instructions for Apache2 websites. At least that time, instead of using the AI bot, I just followed clear documentation. See, the thing about going right to the Debian Wiki or the Arch Wiki is that users and developers have already documented plenty. I figured out that part of the hacker method is not to take the 'easy' way, but to document what you are trying to learn.
So this past weekend, I was trying to learn something about that upload form, and I probably took longer going back and forth with the AI bot than if I had taken the time to search the documentation. And even if it did take longer with the documentation, I would have learned something else and created a Markdown document of my own. There is a tool I use once in a while, part of the Duckduckgo search, called Search Assist. This can be good, because I have a horrible memory. If there is something small that I can't remember how to do, I let Duck.ai take care of it. But recently, I have turned off the option where it sometimes shows Search Assist, so that it only appears on demand. That way I won't be tempted to go down a rabbit hole in order to find what I am looking for, and can instead base what I am looking for on standard tools. So yes, AI is a trap, but it is also useful for certain things. If you are careful how you use it, it's not always a bad thing. This has been Archer72 for Hacker Public Radio. Feel free to comment on this or any other show. Ken says it is the mana by which we pay our hosts. Also, feel free to record a response show to this or other shows. Provide feedback on this episode.
Kirill Mokevnin, co-founder of the programming school Hexlet and host of the podcast «Организованное программирование» ("Organized Programming"), is the guest of Andrey Smirnov from Weekend Talk. Conference avito.tech.conf | leads&managers – https://clc.to/p0dRAA Andrey Smirnov's Telegram channel – https://t.me/itsmirnov 00:00 Intro 00:31 What might my audience know you for? 00:52 Ad break 02:01 Why didn't you leave the engineering mindset behind, even after becoming an entrepreneur? 13:33 What prompted the move from employment to your own business? 17:46 Why did Hexlet appear before the EdTech market existed, and why didn't it become a business right away? 29:27 Why was Hexlet.Club created, and what kind of community are you building? 42:09 How are AI, the EdTech crisis, and your personal brand affecting Hexlet now? 50:15 Why is Vim more than just an editor? 55:47 Which programming languages are important to know for building an engineering mindset? 1:04:01 Who would you have become if there were no IT industry? 1:05:03 Why is it worth moving to Miami? 1:07:27 What is the main problem in modern IT today? Related links: 1) Kirill's Telegram channel – https://t.me/orgprog 2) The «Организованное программирование» podcast – https://youtube.com/@mokevnin 3) Hexlet school website – https://hexlet.io
Access to safe drinking water is essential, and Australia's often harsh environment means that our drinking water supplies are especially precious. With differences in the availability and quality of drinking water across the country, how do we know if it's safe to drink? In this episode we get water experts to answer this question and more.
Why does Neng Thao want to document and preserve Hmong music, traditional Hmong musical instruments, and traditional Hmong firearms?
Today we have a bit of a fun show: we have the developer of a relatively new project called Lilly on the show, a Vim-like text editor that spawned out of the performance issues that come with heavy use of Vim plugins.
==========Support The Channel==========
► Patreon: https://www.patreon.com/brodierobertson
► Paypal: https://www.paypal.me/BrodieRobertsonVideo
► Amazon USA: https://amzn.to/3d5gykF
► Other Methods: https://cointr.ee/brodierobertson
==========Guest Links==========
Repo: https://github.com/tauraamui/lilly
Website: https://tauraamui.website/
==========Support The Show==========
► Patreon: https://www.patreon.com/brodierobertson
► Paypal: https://www.paypal.me/BrodieRobertsonVideo
► Amazon USA: https://amzn.to/3d5gykF
► Other Methods: https://cointr.ee/brodierobertson
We spent the week learning keybindings, installing dependencies, and cramming for bonus points. Today, we tally up the score and see how we did in the TUI Challenge.
Sponsored By:
Tailscale: Tailscale is programmable networking software that is private and secure by default - get it free on up to 100 devices!
1Password Extended Access Management: 1Password Extended Access Management is a device trust solution for companies with Okta, and they ensure that if a device isn't trusted and secure, it can't log into your cloud apps.
Unraid: A powerful, easy operating system for servers and storage. Maximize your hardware with unmatched flexibility.
Support LINUX Unplugged
Links:
Why Australia's First Nations people are calling for truth-telling about First Nations history, and urging governments to commit to addressing the many issues they face so that they do not lose out.
Our terminal apps are loaded, the goals are set, but we're already hitting a few snags. The TUI Challenge begins...
Sponsored By:
Tailscale: Tailscale is programmable networking software that is private and secure by default - get it free on up to 100 devices!
1Password Extended Access Management: 1Password Extended Access Management is a device trust solution for companies with Okta, and they ensure that if a device isn't trusted and secure, it can't log into your cloud apps.
Unraid: A powerful, easy operating system for servers and storage. Maximize your hardware with unmatched flexibility.
Support LINUX Unplugged
Links:
The crew descend upon the Iron Garden in a last-ditch effort to find what fate awaits Vim and Regina. As answers become questions and questions become paradoxes, the only thing left to ask is: are the NHPs feeling okay? Matcha exhausts a dialogue tree. Moxie gets digging. Roadkill shows the big iron on their hip. Silver dreams of cannibalism.
Thank you to our Patreon supporters: Austin Sietsema, Eleanor Lady, Daffodil, RiverSupportsArtists, Gnome, Creaux, James Knevitt, Rocky Loy, Tau, JC Darcy, Robert Ruthven, AlexCrow, Jordan Myers, Keenan Geyer, DragonGirlJosie, Shonn Briscoe, Andor, & Diana Plante!!
Bring Your Own Mech is a biweekly Lancer RPG actual play podcast of four Lancers thrown together by circumstance, destiny... and credits.
Featuring:
Reed (@ReedPlays) as the Game Master
Amelia (@amelia_g_music) as Matcha
Aki (@akinomii_art) as Moxie
Dusty (@Dustehill) as Roadkill
Aubrey (@MadQueenCosplay) as Silver
Find us on Bluesky @bringyourownmech.bsky.social, and remember: batteries are not included.
Lancer is created by Tom Parkinson Morgan (author of Kill Six Billion Demons) and Miguel Lopez of Massif Press. Bring Your Own Mech is not an official Lancer product; it is a third party work, and is not affiliated with Massif Press. Bring Your Own Mech is published via the Lancer Third Party License. Lancer is copyright Massif Press. Support the official release at https://massif-press.itch.io
Support us on Patreon! https://www.patreon.com/bringyourownmech
Get the official season 1 album, Bring Your Own Mixtape vol. 1! https://ownmech.bandcamp.com/album/bring-your-own-mixtape-vol-1
DRC CUSTOM OUTFITTERS Download: https://ownmech.itch.io/drc-custom-outfitters-a-lancer-supplement
Pilot NET Discord Server: https://discord.gg/p3p8FUm9b4
In this episode of Gradient Dissent, Lukas Biewald talks with Martin Shkreli, the infamous "pharma bro" turned founder, about his path from hedge fund manager and pharma CEO to convicted felon and now software entrepreneur. Shkreli shares his side of the drug pricing controversy, reflects on his prison experience, and explains how he rebuilt his life and business after being "canceled." They dive deep into AI and drug discovery, where Shkreli delivers a strong critique of mainstream approaches. He also talks about his latest venture in finance software, building Godel Terminal, "a Vim for traders", and why he thinks the AI hype cycle is just beginning. It's a wide-ranging and candid conversation with one of the most controversial figures in tech and biotech.
Follow Martin Shkreli on Twitter
Godel Terminal: https://godelterminal.com/
Follow Weights & Biases on Twitter
https://www.linkedin.com/company/wandb
Join the Weights & Biases Discord Server: https://discord.gg/CkZKRNnaf3
Arcadia June is coming, don't let it pass you by! We chat about photo libraries, Andrew is a bike guy again, and Martin shares a fun story about his son using the Mac! Ring the bell, a new One Prime Plus Dot Com member has entered the room! Stuff-ups and Shout-outs! 00:00:00 Imagine what Andrew said in those thirty seconds...
Hey, Somna. This is the continuation. You know, that story about Vim and Vindel that began as a romantic whim in Gamla stan and then grew into a whole epic with fencing, chestnuts, chestnut kisses, and a baron who smells of wine punch. So this is part two. And yes, maybe I went a little... all in. We follow Vim and Vindel as they fumble their way across class boundaries, passionate glances, and nighttime rescue missions. Bellman shows up, as some kind of spirit of romance, and poems fly like lovestruck butterflies. There's a sabre fight in the church, dripping blood, royal intervention, and maybe, just maybe, a happy ending. I don't know. It got big. But sometimes you just have to follow the wave behind your eyes. Sleep well! More about Henrik, click here: https://linktr.ee/Henrikstahl Listen ad-free, get bonus episodes, playlists, and more at: https://somnamedhenrik.supercast.com/ Hosted on Acast. See acast.com/privacy for more information.
In this episode of Gradient Dissent, host Lukas Biewald talks with Sualeh Asif, the CPO and co-founder of Cursor, one of the fastest-growing and most loved AI-powered coding platforms. Sualeh shares the story behind Cursor's creation, the technical and design decisions that set it apart, and how AI models are changing the way we build software. They dive deep into infrastructure challenges, the importance of speed and user experience, and how emerging trends in agents and reasoning models are reshaping the developer workflow. Sualeh also discusses scaling AI inference to support hundreds of millions of requests per day, building trust through product quality, and his vision for how programming will evolve in the next few years.
⏳ Timestamps:
00:00 How Cursor got started and why it took off
04:50 Switching from Vim to VS Code and the rise of Copilot
08:10 Why Cursor won among competitors: product philosophy and execution
10:30 How user data and feedback loops drive Cursor's improvements
12:20 Iterating on AI agents: what made Cursor hold back and wait
13:30 Competitive coding background: advantage or challenge?
16:30 Making coding fun again: latency, flow, and model choices
19:10 Building Cursor's infrastructure: from GPUs to indexing billions of files
26:00 How Cursor prioritizes compute allocation for indexing
30:00 Running massive ML infrastructure: surprises and scaling lessons
34:50 Why Cursor chose DeepSeek models early
36:00 Where AI agents are heading next
40:07 Debugging and evaluating complex AI agents
42:00 How coding workflows will change over the next 2–3 years
46:20 Dream future projects: AI for reading codebases and papers
Varun Mohan is the co-founder and CEO of Windsurf (formerly Codeium), an AI-powered development environment (IDE) that has been used by over 1 million developers in just four months and has quickly emerged as a leader in transforming how developers build software. Prior to finding success with Windsurf, the company pivoted twice: first from GPU virtualization infrastructure to an IDE plugin, and then to their own standalone IDE.
In this conversation, you'll learn:
1. Why Windsurf walked away from a profitable GPU infrastructure business and bet the company on helping engineers code
2. The surprising UI discovery that tripled adoption rates overnight
3. The secret behind Windsurf's B2B enterprise plan, and why they invested early in an 80-person sales team despite conventional startup wisdom
4. How non-technical staff at Windsurf built their own custom tools instead of purchasing SaaS products, saving them over $500k in software costs
5. Why Varun believes 90% of code will be AI-generated, but engineering jobs will actually increase
6. How training on millions of incomplete code samples gives Windsurf an edge, and creates a moat long-term
7. Why agency is the most undervalued and important skill in the AI era
Brought to you by:
• Brex—The banking solution for startups
• Productboard—Make products that matter
• Coda—The all-in-one collaborative workspace
Where to find Varun Mohan:
• X: https://x.com/_mohansolo
• LinkedIn: https://www.linkedin.com/in/varunkmohan/
Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/
In this episode, we cover:
(00:00) Varun's background
(03:57) Building and scaling Windsurf
(12:58) Windsurf: The new purpose-built IDE to harness magic
(17:11) The future of engineering and AI
(21:30) Skills worth investing in
(23:07) Hiring philosophy and company culture
(35:22) Sales strategy and market position
(39:37) JetBrains vs.
VS Code: extensibility and enterprise adoption
(41:20) Live demo: building an Airbnb for dogs with Windsurf
(42:46) Tips for using Windsurf effectively
(46:38) AI's role in code modification and review
(48:56) Empowering non-developers to build custom software
(54:03) Training Windsurf
(01:00:43) Windsurf's unique team structure and product strategy
(01:06:40) The importance of continuous innovation
(01:08:57) Final thoughts and advice for aspiring developers
Referenced:
• Windsurf: https://windsurf.com/
• VS Code: https://code.visualstudio.com/
• JetBrains: https://www.jetbrains.com/
• Eclipse: https://eclipseide.org/
• Visual Studio: https://visualstudio.microsoft.com/
• Vim: https://www.vim.org/
• Emacs: https://www.gnu.org/software/emacs/
• Lessons from a two-time unicorn builder, 50-time startup advisor, and 20-time company board member | Uri Levine (co-founder of Waze): https://www.lennysnewsletter.com/p/lessons-from-uri-levine
• IntelliJ: https://www.jetbrains.com/idea/
• Julia: https://julialang.org/
• Parallel computing: https://en.wikipedia.org/wiki/Parallel_computing
• Douglas Chen on LinkedIn: https://www.linkedin.com/in/douglaspchen/
• Carlos Delatorre on LinkedIn: https://www.linkedin.com/in/cadelatorre/
• MongoDB: https://www.mongodb.com/
• Cursor: https://www.cursor.com/
• GitHub Copilot: https://github.com/features/copilot
• Llama: https://www.llama.com/
• Mistral: https://mistral.ai/
• Building Lovable: $10M ARR in 60 days with 15 people | Anton Osika (CEO and co-founder): https://www.lennysnewsletter.com/p/building-lovable-anton-osika
• Inside Bolt: From near-death to ~$40m ARR in 5 months—one of the fastest-growing products in history | Eric Simons (founder & CEO of StackBlitz): https://www.lennysnewsletter.com/p/inside-bolt-eric-simons
• Behind the product: Replit | Amjad Masad (co-founder and CEO): https://www.lennysnewsletter.com/p/behind-the-product-replit-amjad-masad
• React: https://react.dev/
• Sonnet: https://www.anthropic.com/claude/sonnet
• OpenAI:
https://openai.com/
• FedRamp: https://www.fedramp.gov/
• Dario Amodei on LinkedIn: https://www.linkedin.com/in/dario-amodei-3934934/
• Amdahl's law: https://en.wikipedia.org/wiki/Amdahl%27s_law
• How to win in the AI era: Ship a feature every week, embrace technical debt, ruthlessly cut scope, and create magic your competitors can't copy | Gaurav Misra (CEO and co-founder of Captions): https://www.lennysnewsletter.com/p/how-to-win-in-the-ai-era-gaurav-misra
Recommended book:
• Fall in Love with the Problem, Not the Solution: A Handbook for Entrepreneurs: https://www.amazon.com/Fall-Love-Problem-Solution-Entrepreneurs/dp/1637741987
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.
Lenny may be an investor in the companies discussed. Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe
We've done hot takes episodes in the past, but this is different: it's hot questions. Would we rather have bad managers who can code or good managers who can't? Too many comments or none? 80 columns or as long as you like? What editor do we use and why? Vim for Fun or PeerTube version...
The beach episode approaches! After jailbreaks and microscopic misadventures, the gang tries to wind down with some sun and sea. As Regina prepares to tell them her story, will our crew be able to find any lead on the Iron Garden? And more importantly, will they survive Vim's pranks before then? Matcha digs her own grave. Moxie enters all-range mode. Roadkill goes thrifting. Silver makes some introductions.
Thank you to our Patreon supporters: Austin Sietsema, Eleanor Lady, Daffodil, RiverSupportsArtists, Gnome, Creaux, James Knevitt, Rocky Loy, Tau, JC Darcy, Robert Ruthven, AlexCrow, Jordan Myers, Keenan Geyer, DragonGirlJosie, Shonn Briscoe, Andor, & Diana Plante!!
Bring Your Own Mech is a biweekly Lancer RPG actual play podcast of four Lancers thrown together by circumstance, destiny... and credits.
Featuring:
Reed (@ReedPlays) as the Game Master
Amelia (@amelia_g_music) as Matcha
Aki (@akinomii_art) as Moxie
Dusty (@Dustehill) as Roadkill
Aubrey (@MadQueenCosplay) as Scarlett
Find us on Bluesky @bringyourownmech.bsky.social, and remember: batteries are not included.
Lancer is created by Tom Parkinson Morgan (author of Kill Six Billion Demons) and Miguel Lopez of Massif Press. Bring Your Own Mech is not an official Lancer product; it is a third party work, and is not affiliated with Massif Press. Bring Your Own Mech is published via the Lancer Third Party License. Lancer is copyright Massif Press. Support the official release at https://massif-press.itch.io
Support us on Patreon! https://www.patreon.com/bringyourownmech
Get the official season 1 album, Bring Your Own Mixtape vol. 1! https://ownmech.bandcamp.com/album/bring-your-own-mixtape-vol-1
DRC CUSTOM OUTFITTERS Download: https://ownmech.itch.io/drc-custom-outfitters-a-lancer-supplement
Pilot NET Discord Server: https://discord.gg/p3p8FUm9b4
Well, friends of Pelada na Net, we have well and truly arrived for show 722! Today we have Príncipe Vidane, Show do Vitinho, and Maidana learning new acronyms with Dudu. In this show we comment on the crazy move that Dudu's defense slipped into the lawsuit against Leila Pereira, redefining VTNC as "Vim trabalhar no Cruzeiro" ("I came to work at Cruzeiro"); we talk about the scandal involving Bruno Henrique, who was indicted by the Federal Police for involvement in a betting scheme; we break down everything from the end of the Champions League quarterfinals, in which Arsenal dispatched Real Madrid and qualified for the semis alongside PSG, Barcelona, and Inter Milan; and much more! And don't forget to use the hashtags: #BRITAJR
Visit our site! https://peladananet.com.br
Follow our Bluesky! @peladananet.com.br
Follow our Twitter! @PeladaNET
Follow our Instagram! @PeladaNaNet
Join our TELEGRAM group! https://t.me/padegostosodemais
Buy our products at the Podcast Store! We have t-shirts, mugs, magnets, posters, buttons, and ecobags available!
Hosts:
Fernando Maidana – Twitter / Instagram / Bluesky
Victor “Show do Vitinho” Raphael – Twitter / Instagram / Bluesky
Vitor “Príncipe Vidane” Faglioni Rossi – Twitter / Instagram / Bluesky
Side projects:
Jovem Nerd
Mau Acompanhado – at Jovem Nerd
Mau Acompanhado feed on Spotify
Dentro da Minha Cabeça
Reinaldo Jaqueline
Versão Brasihueira channel on YouTube
Pauta Livre News
Victinho's channel on Youtube
Rede Chorume
Fábrica de Filmes
Legião dos Heróis
Noites com Maidana
Also listen to:
Frango Fino
Papo Delas
Radiofobia
The Dark One – Podtrash
Vortex – with Kat Barcelos
Support Peladinha:
Apoia.se
Patreon
Or via our pix key: podcast@peladananet.com.br
March 2025 supporters! Here is our thanks for the affection, dedication, and support to our dear: Adriana Cristina Alves Pinto Gioielli | Adriano Nazário | André Vinícius De Carvalho Costa | Fellipe Miranda | Fernando Costa Campos | Gabriel Machado De Freitas | Guilherme Rezende Soria | Gustavo Henrique Rossini | Heverton Coneglian De Freitas | Higor Nunes Resende | Higor Pêgas Rosa De Faria | Igor Leite Da Silva | Igor Zacarias Dos Santos | Ítalo Leandro Freire De Albuquerque | João Paulo Lobo Marins | Joao Pedro Barros Barbosa | Leonardo Delefrate | Luis Henrique Santos | Luiz Guilherme Borges Silva | Messias Feitosa Santana | Pedro Marcelo Rocha Gomes | Rafael Brandão Brasil | Rena Marcon | Renato Grigoli Pereira | Thais Cristine Cavalcanti | Vanessa Fontana | Welton Sousa Gouveia | André Stábile | Arthur Takeshi Gonçalves Murakawa | Brayan Ksenhuck | Bruno Burkart | Caio Mandolesi | Concílio Silva | Daniel Lucas Martins Lacerda | Davi Andrade | Fabio Simoes | Fabio Simoes | Filipi Froufe | Flavio Barbosa | George Alfradique | Gustavo Marques Leite | Heitor Dias | Igor Trusz | Jhonathan Romão | João Gabriel Paduan Tristante | Josué Solano De Barros | Leonardo Lachi Manetti | Listen2urs2 (Listen Tchu Iór Rârrtchi)) | Lucas Freitas | Luis Alberto De Seixas Buttes | Matheus De Sales Freitas | Pedro
Lauria | Rafael Gomes Da Silva | Robson De Sousa | Rodrigo Pimentel | Tiago Vital Urgal | Tio Patux | Vander Carlos Ribeiro Vilanova | Vinicius Renan Lauermann Moreira | Vinicius Verissimo Lopes | Thiago Lins | Hassan Jorge | Diego Santos | Felipe Avelar | Leonardo Motta | Felipe Pastor | Bruno Franzini | David Gilvan | Luiz Strina | Adryel Romeiro | Aline Aparecida Matias | Antonino Firmino Da Silva Neto | Antonio Augusto Mendes Rodrigues | Bruno Kellton | Bruno Marques Monteiro | Carlos Eduardo Ardigo | Daniel Pandeló Corrêa | Elisnei Menezes De Oliveira | Evilasio Costa Junior | Felipe Brasil | Felipe Duarte | Fernando Bilhiere | Fernando De Araujo Brandão Filho | Gabriel Frizzo | Gabriel Lecomte | Gabriel Lopes Dos Santos | Gian Luca Barbosa Mainini | Jailson Gomes | João Pedro Machareth | Jose Wellington De Moura Melo | Leandro Jose De Souza | Leonardo Giehl | Luan Germano | Luca Vianna | Marcelo São Martinho Cabral | Marco Antônio Maassen Da Silva | Marianna Feitosa | Matheus Andion De Souza Vitorino | Matheus Bezerra Lucas Bittencourt | Maxwell Dos Santos Nelle | Pedro Bonifácio | Pedro Henrique Tonetto Lopes | Pollyana Bruno | Rafael Manenti | Rafael Matis | Rainer Almeida | Raphael Piccoli | Raphael Pini Bubinick | Rodrigo Oliveira Porto | Stéfano Bellote | Thiago Nogueira Marcal | Thomas Rodrigues | Tiago Weiss | Vinicius Athanasopoulos | Vinícius Lima Silva | Vinícius Ramalho | Vitor Carnelosso Varella | Vitor Motta Vigerelli | Wendel Ferreira Santiago | Wladimir Araújo Neto | Marco Antônio Rodrigues Júnior (Markão) | Leonardo Pimentel | Bruno Macedo | Aquila Barros Nogueira | Danilo Da Silva Pereira | Henrique Zani | Pedro Henrique De Paula Lemos | Victor Rodrigues | Daniel Moreira | Lucas Penetra | Lucas, O Fofo | Albert José | Raphael De Souza | Thiago Goncales | Daniel Ferreira De Lima Vilha | Felipe Artemio | Joseane Freitas Santos | Tatiane Oliveira Ferreira | Bruno Vieira Silva | Itallo Rossi Lucas | Isabelle Zacara
Thank you for believing in us! Comment!
Send your letter by e-mail to podcast@peladananet.com.br and comment on the Instagram post with this episode's cover!
See omnystudio.com/listener for privacy information.
We are happy to announce that there will be a dedicated MCP track at the 2025 AI Engineer World's Fair, taking place Jun 3rd to 5th in San Francisco, where the MCP core team and major contributors and builders will be meeting. Join us and apply to speak or sponsor! When we first wrote Why MCP Won, we had no idea how quickly it was about to win. In the past 4 weeks, OpenAI and now Google have announced MCP support, effectively confirming our prediction that MCP was the presumptive winner of the agent standard wars. MCP has now overtaken OpenAPI, the incumbent option and most direct alternative, in GitHub stars (3 months ahead of our conservative trendline). We have explored the state of MCP at AIE (now the first ever >100k views workshop). And since then, we've added a 7th reason why MCP won: this team acts very quickly on feedback, with the 2025-03-26 spec update adding support for stateless/resumable/streamable HTTP transports, and comprehensive authz capabilities based on OAuth 2.1. This bodes very well for the future of the community and project. For protocol and history nerds, we also asked David and Justin to tell the origin story of MCP, which we leave to the reader to enjoy (you can also skim the transcripts, or the changelogs of a certain favored IDE).
It's incredible the impact that individual engineers solving their own problems can have on an entire industry. Full video episode. Like and subscribe on YouTube!
Show Links
* David
* Justin
* MCP
* Why MCP Won
Timestamps
* 00:00 Introduction and Guest Welcome
* 00:37 What is MCP?
* 02:00 The Origin Story of MCP
* 05:18 Development Challenges and Solutions
* 08:06 Technical Details and Inspirations
* 29:45 MCP vs Open API
* 32:48 Building MCP Servers
* 40:39 Exploring Model Independence in LLMs
* 41:36 Building Richer Systems with MCP
* 43:13 Understanding Agents in MCP
* 45:45 Nesting and Tool Confusion in MCP
* 49:11 Client Control and Tool Invocation
* 52:08 Authorization and Trust in MCP Servers
* 01:01:34 Future Roadmap and Stateless Servers
* 01:10:07 Open Source Governance and Community Involvement
* 01:18:12 Wishlist and Closing Remarks
Transcript
Alessio [00:00:02]: Hey, everyone. Welcome back to Latent Space. This is Alessio, partner and CTO at Decibel, and I'm joined by my co-host Swyx, founder of Small AI.
swyx [00:00:10]: Hey, morning. And today we have a remote recording, I guess, with David and Justin from Anthropic over in London. Welcome. You guys have created a storm of hype because of MCP, and I'm really glad to have you on. Thanks for making the time. What is MCP? Let's start with a crisp definition from the horse's mouth, and then we'll go into the origin story. But let's start off right off the bat. What is MCP?
Justin/David [00:00:43]: Yeah, sure. So Model Context Protocol, or MCP for short, is basically something we've designed to help AI applications extend themselves or integrate with an ecosystem of plugins, basically. The terminology is a bit different. We use this client-server terminology, and we can talk about why that is and where that came from. But at the end of the day, it really is that.
It's like extending and enhancing the functionality of an AI application.
swyx [00:01:05]: David, would you add anything?
Justin/David [00:01:07]: Yeah, I think that's actually a good description. I think there's a lot of different ways people are trying to explain it. But at the core, I think what Justin said, extending AI applications, is really what this is about. And I think the interesting bit here that I want to highlight is that it's AI applications, and not the models themselves, that this is focused on. That's a common misconception that we can talk about a bit later. But yeah. Another version that we've used and gotten to like is that MCP is kind of like the USB-C port of AI applications, in that it's meant to be this universal connector to a whole ecosystem of things.
swyx [00:01:44]: Yeah. Specifically, an interesting feature is, like you said, the client and server. And it's sort of two-way, right? Like in the same way that a USB-C is two-way, which could be super interesting. Yeah, let's go into a little bit of the origin story. There's many people who've tried to make standards. There's many people who've tried to build open source. Also, my sense is that Anthropic is going hard after developers in a way that other labs are not. And so I'm also curious if there was any external influence, or was it just you two guys in a room somewhere riffing?
Justin/David [00:02:18]: It is actually mostly us two guys in a room riffing. So this is not part of a big strategy. You know, if you roll back time a little bit and go to July 2024, I had just started. I started at Anthropic like three months earlier or two months earlier. And I was mostly working on internal developer tooling, which is what I'd been doing for years and years before.
And as part of that, I think there was an effort of, how do I empower more employees at Anthropic to integrate really deeply with the models we have? Because we've seen how good it is, and how amazing it will become in the future. And of course, you know, just dogfood your own model as much as you can. And as part of that, from my developer tooling background, I quickly got frustrated by the fact that, on one hand, I have Claude Desktop, which is this amazing tool with artifacts, which I really enjoyed. But it was very limited to exactly that feature set, and there was no way to extend it. And on the other hand, I work in IDEs, which can act on the file system and a bunch of other things, but then they don't have artifacts or anything like that. And so what I constantly did was just copy things back and forth between Claude Desktop and the IDE, and that quickly got me, honestly, just very frustrated. And part of that frustration was, how do I go and fix this? What do we need? And back to this developer focus that I have, I really thought about it like: well, I know how to build all these integrations, but what do I need so that these applications let me do this? And very quickly you see that this is clearly an M times N problem: you have multiple applications and multiple integrations you want to build, and what better way to fix that than a protocol? At the same time, I was actually working on an LSP-related thing internally that didn't go anywhere. But you put these things together in someone's brain and let them stew for a few weeks, and out of that comes the idea of, let's build a protocol. And so back to that little room: it was literally just me going into a room with Justin and saying, I think we should build something like this.
This is a good idea. And Justin, lucky for me, really took an interest in the idea and took it from there to build something together with me. That's really the inception story: it's us two, from then on, just going and building it over the course of a month and a half — building the protocol, building the first integrations. Justin did a lot of the heavy lifting on the first integrations in Claude Desktop; I did a lot of the first proof of concept of how this could look in an IDE. And we could talk about some of the tidbits you could find way before the official release, if you were looking at the right repositories at the right time. But there you go — that's the rough story.

Alessio [00:05:12]: What was the timeline? I know November 25th was the official announcement date. When did you start working on it?

Justin/David [00:05:19]: Justin, when did we start working on that? I think it was around July. Yeah — as soon as David pitched this initial idea, I got excited pretty quickly, and we started working on it almost immediately after that conversation. And then it was maybe a few months of building the really unrewarding bits, if we're being honest, because establishing something like this — a communication protocol with clients and servers and SDKs everywhere — means there's just a lot of groundwork you have to lay. So that was a pretty slow couple of months. But then, once you get some things talking over that wire, it really starts to get exciting and you can start building all sorts of crazy things. And I think this really came to a head.
And I don't remember exactly when it was — maybe approximately a month before release — but there was an internal hackathon where some folks really got excited about MCP and started building all sorts of crazy applications. I think the coolest one was an MCP server that could control a 3D printer. Suddenly people were feeling this power of Claude connecting to the outside world in a really tangible way, and that really added some juice to us and to the release.

Alessio [00:06:32]: Yeah. We'll go into the technical details, but I just want to wrap up here. You mentioned you could have seen some things coming if you were looking in the right places. We always want to know what the places to get alpha are — how to find MCP early.

Justin/David [00:06:44]: I'm a big Zed user. I like the Zed editor. The first MCP implementation in an IDE was in Zed. It was written by me, and it was there a month and a half before the official release, just because we needed to do it in the open — it's an open source project. It was named slightly differently, because we were not set on the name yet, but it was there.

swyx [00:07:05]: Anthropic also had some preview of a model with Zed, right? Some kind of fast-editing model. I confess I'm a Cursor and Windsurf user; I haven't tried Zed. What's your unsolicited two-second pitch for Zed?

Justin/David [00:07:28]: That's a good question. It really depends what you value in editors. I wouldn't even say I love Zed more than the others — I use them all in a complementary way. I do use Windsurf. I do use Zed.
But I think my main pitch for Zed is a low-latency, super smooth editing experience with a decent enough AI integration.

swyx [00:07:51]: And maybe that's all it is for a lot of people. A lot of people are obviously very tied to the VS Code paradigm and the extensions that come along with it. Okay, so I wanted to go back a little bit to some of the things you mentioned, Justin, about building MCP. Obviously we only see the end result, and it seems inspired by LSP — I think both of you have acknowledged that. So how much was there to build? And when you say build, is it a lot of code or a lot of design? Because I felt like it's a lot of design, right? You're picking JSON-RPC. How much did you base off of LSP, and what were the hard parts?

Justin/David [00:08:29]: Yeah, absolutely. We definitely did take heavy inspiration from LSP. David had much more prior experience with it than I did, from working on developer tools — I've mostly worked on products or infrastructural things, so LSP was new to me. But from a design-principles standpoint it really makes a ton of sense, because it solves this M times N problem that David referred to. In the world before LSP, you had all these different IDEs and editors, and then all these different languages that each wants to support, or that their users want them to support, and everyone's just building one-off integrations. So you use Vim and you might have really great support for — honestly, I don't know — C or something; then you switch over to JetBrains and you have the Java support; but you don't get to use the great JetBrains Java support in Vim, and you don't get to use the great C support in JetBrains, or something like that.
So LSP largely solved this problem by creating a common language they could all speak: some people can focus on really robust language server implementations, and the IDE developers can really focus on their side, and both benefit. That was our key takeaway for MCP — the same principle and the same problem, in the space of AI applications and extensions to AI applications. In terms of concrete particulars, we did take JSON-RPC, and we took this idea of bidirectionality, but I think we quickly took it down a different route after that. There is one other principle from LSP that we try to stick to today, which is this focus on how features manifest, more than the semantics of things, if that makes sense. David refers to it as being presentation-focused: offering different primitives not necessarily because their semantics are very different, but because you want them to show up in the application differently. That was a key insight about how LSP was developed, and it's also something we try to apply to MCP. But like I said, from there we spent a lot of time — really a lot of time, and we could go into this more separately — thinking about each of the primitives that we want to offer in MCP, and why they should be different, why we want to have all these different concepts. That was a significant amount of work; that was the design work, as you allude to. But then, already out of the gate, we had three different languages that we wanted to support at least to some degree: TypeScript, Python, and then, for the Zed integration, Rust. So there was some SDK-building work in those languages, and a mixture of clients and servers to build out, to try to create this internal ecosystem that we could start playing with.
And then, yeah, just trying to make everything robust — this whole concept we have for local MCP, where you launch subprocesses and so on — making that robust took some time as well. Maybe adding to that: I think the LSP inspiration goes even a little bit further. We actually took quite a close look at criticisms of LSP — things LSP didn't do right, things people would have loved to be different — and really took that to heart, to see what are some of the things we should do better. We took a lengthy look at their very unique approach to JSON-RPC, I may say, and then we decided that this is not what we do. So there are these differences, but it's clearly very, very inspired by LSP. Because when you're trying to build something like MCP, you want to pick the areas you want to innovate in, and you want to be boring about the other parts by pattern-matching LSP. Prior art lets you be boring in a lot of the core pieces that you want to be boring in. The choice of JSON-RPC is very non-controversial to us, because it just doesn't matter at all what the actual bytes on the wire are — it makes no difference to us. The innovation is in the primitives you choose and those kinds of things, and that's where we wanted to focus. So having some prior art is good there, basically.

swyx [00:12:26]: It does. I wanted to double-click — there are so many things we could go into; obviously I'm passionate about protocol design. I wanted to show you guys this. I think you already know, but you've referred to the M times N problem.
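For readers following along, the "boring" JSON-RPC 2.0 framing mentioned above is easy to sketch. This is a minimal stdlib-only illustration of the message shapes — the `tools/list` method name is from the MCP spec, but the example tool in the response is invented:

```python
import json

def make_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request of the kind MCP exchanges over a transport."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return msg

def make_response(req_id, result):
    """Build the matching JSON-RPC 2.0 response, keyed to the same id."""
    return {"jsonrpc": "2.0", "id": req_id, "result": result}

# A client asking a server what tools it offers:
req = make_request(1, "tools/list")
wire = json.dumps(req)  # serialized form on the wire

# The server answers with a result carrying its (here: made-up) tool list:
resp = make_response(1, {"tools": [{"name": "query_db",
                                    "description": "Run a read-only SQL query"}]})
```

Because both sides speak this same envelope, and requests can flow in either direction, the "bidirectionality" point above falls out of the framing for free.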
And I can just share my screen here. Anyone working in developer tools has faced this exact issue, where you see the God box, basically. The fundamental problem and solution of all infrastructure engineering is: you have M things going to N things, and then you put in the God box and they'll all be better, right? So here is one from Uber, one from GraphQL, one from Temporal, where I used to work, and this one is from React. And I was just curious: did you solve M times N problems at Facebook? It sounds like, David, you did that for a living, right?

Justin/David [00:13:16]: Yeah, to some degree, for sure. God, what a good example of this — I did a bunch of this kind of work on source control systems and those types of things. So there were a bunch of these types of problems, and you just shove them into something that everyone can read from and everyone can write to, and you build your God box somewhere, and it works. But yeah, you're absolutely right: in developer tooling, this is everywhere.

swyx [00:13:47]: And it shows up everywhere. What's interesting is that everyone who makes the God box then has the same set of problems: composability, auth, remote versus local. So there's this very common set of problems, and I kind of want to take a meta lesson on how to do the God box — but we can talk about the development stuff later. I wanted to double-click, again, on the presentation focus that Justin mentioned — how features manifest — and how you said some things are the same, but you want to reify some concepts so they show up differently. And I had that sense when I was looking at the MCP docs: why do these two things need to be different from each other?
I think a lot of people treat tool calling as the solution to everything, right? And sometimes you can actually view different kinds of tool calls as different things. Sometimes they're resources; sometimes they're actually taking actions; sometimes they're something else that I don't really know yet. But I just want to see: what are some things that you mentally group as adjacent concepts, and why was it important to you to emphasize them?

Justin/David [00:14:58]: Yeah, I can chat about this a bit. Fundamentally, every primitive that we thought through, we thought about from the perspective of the application developer first. If I'm building an application — whether it's an IDE, or Claude Desktop, or some agent interface, or whatever the case may be — what are the different things I would want to receive from an integration? And once you take that lens, it becomes quite clear that tool calling is necessary but very insufficient. There are many other things you would want to do besides just getting tools and plugging them into the model, and you want some way of differentiating what those things are. So the core primitives that we started MCP with — we've since added a couple more — are, first, tools, which we've already talked about: adding tools directly to the model, sometimes called function calling. Then resources, which are basically bits of data or context that you might want to add to the model context. Resources are the first primitive where we decided this could be application-controlled: maybe you want a model to automatically search through, find relevant resources, and bring them into context.
But maybe you also want that to be an explicit UI affordance in the application, where the user can pick through a dropdown or a paperclip menu or whatever, find specific things, and tag them in — and then that becomes part of their message to the LLM. Those are both use cases for resources. And then the third one is prompts, which are deliberately meant to be user-initiated or user-substituted text or messages. The analogy here would be, if you're in an editor, a slash command, or an at-mention autocompletion type thing: I have this kind of macro, effectively, that I want to drop in and use. We have expressed opinions through MCP about the different ways these things could manifest, but ultimately it's for application developers to decide — you get these different concepts expressed differently. And it's very useful as an application developer, because you can decide the appropriate experience for each. Actually, this can be a point of differentiation, too. We were also thinking from the application developer's perspective: application developers don't want to be commoditized; they don't want their application to end up the same as every other AI application. So what are the unique things they could do to create the best user experience, even while connecting to this big open ecosystem of integrations? And to add to that, there are two aspects I want to mention. The first is that, interestingly enough, while nowadays tool calling is obviously 95-plus percent of the integrations — and I wish there were more clients doing resources, doing prompts — the very first implementation, in Zed, is actually a prompts implementation. It doesn't deal with tools.
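The three primitives described above differ less in their semantics than in how they show up and who drives them — which is the "presentation-focused" point. A small illustrative sketch (all names and entries here are invented, not from a real server):

```python
# An illustrative server catalog showing the three original MCP primitives
# side by side. The key design point is who initiates each one, not what
# data it carries.
server_offers = {
    # Tools: invoked at the model's discretion (function calling).
    "tools": [{"name": "run_query",
               "description": "Execute a read-only SQL query"}],
    # Resources: data/context identified by URI, surfaced by the application
    # (e.g. a paperclip menu or @-mention) or picked explicitly by the user.
    "resources": [{"uri": "db://schema/users",
                   "name": "users table schema"}],
    # Prompts: user-initiated macros, e.g. shown as slash commands.
    "prompts": [{"name": "summarize-crash",
                 "description": "Pull a crash report into context"}],
}

# Same underlying idea (get stuff in front of the model), three different
# manifestations in the application's UI:
initiator = {"tools": "model", "resources": "application/user", "prompts": "user"}
```

An application that only reads the `tools` key gets plain function calling; one that also reads `resources` and `prompts` can build the richer affordances discussed next.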
And we found this actually quite useful, because what it allows you to do is, for example, build an MCP server that takes a backtrace. So it's not necessarily a tool — it literally just reads your crashes from Sentry, or any other platform that tracks your crashes, and lets you pull them into the context window beforehand. It's quite nice that way, because it's a user-driven interaction: the user decides when to pull this in, and you don't have to wait for the model to do it. It's a great way to craft the prompt, in a way. And similarly, I wish more MCP servers today would ship prompts as examples of how to even use their tools. At the same time, the resources bits are quite interesting as well, and I wish we would see more usage there, because it's very easy to envision — yet nobody has really implemented it — a system where an MCP server exposes a set of documents that you have, your database, whatever you might want, as a set of resources, and then a client application builds a full RAG index around them. This is definitely an application use case we had in mind, and it's why resources are exposed in such a way that they're not model-driven: you might want way more resource content than is realistically usable in a context window. So I wish — and I hope — applications will use these primitives way better in the next few months, because there are way richer experiences to be created that way. Yeah, completely agree with that.

Alessio [00:19:30]: I think that's a great point. Everybody just has a hammer and wants to do tool calling on everything. A lot of people do tool calling to do a database query; they don't use resources for it.
What are, I guess, the pros and cons — when should people use a tool versus a resource — especially for things that do have an API interface? For a database, you can make a tool that does a SQL query; when should you do that versus exposing a resource with the data?

Justin/David [00:20:00]: The way we separate these is that tools are always meant to be initiated by the model — it's at the model's discretion that it finds the right tool and applies it. So if that's the interaction you want as a server developer — okay, suddenly I've given the LLM the ability to run SQL queries, for example — that makes sense as a tool. But resources are more flexible, basically. And to be completely honest, the story here is practically a bit complicated today, because many clients don't support resources yet. But in an ideal world, where all these concepts are fully realized and there's full ecosystem support, you would use resources for things like the schemas of your database tables — as a way to either let the user say, okay, Claude, I want to talk to you about this database table, here it is, let's have this conversation; or maybe the particular AI application you're using — it could be something agentic, like Claude Code — is able to agentically look up resources and find the right schema for the database table you're talking about. Both of those interactions are possible. But any time you have this pattern — you want to list a bunch of entities, then read any of them — that makes sense to model as resources. Resources are also always uniquely identified by a URI.
So you can also think of them as sort of general-purpose transformers: if you want to support an interaction where a user just drops a URI in, and you automatically figure out how to interpret it, you could use MCP servers to do that interpretation. One interesting side note here, back to the Zed example of resources: Zed has a prompt library that people can interact with, and we exposed a set of default prompts that we wanted everyone to have as part of that prompt library — via resources — so that you boot up Zed and it just populates the prompt library from an MCP server, which was quite a cool interaction. Again, that was very specific: both sides needed to agree on the URI format and the underlying data format. But it was a nice, neat little application of resources. There's also, going back to that perspective of "as an application developer, what are the things I would want?" — we applied this thinking to which existing features of applications could conceivably be factored out into MCP servers, if you were to take that approach today. Basically, any IDE where you have an attachment menu — that naturally models as resources. It's just that those implementations already existed.

swyx [00:22:49]: Yeah. When you introduced it for Claude Desktop and I saw the at sign there, I thought: oh yeah, that's what Cursor has, but this is for everyone else. I think that's a really good design target, because it's something that already exists, and people can map onto it pretty neatly. I was actually featuring this chart from Mahesh's workshop, which presumably you guys agreed on.
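The "list a bunch of entities, then read any of them" pattern described above can be sketched with a toy in-memory store — the URIs and contents here are invented, and the two functions only mirror the spirit of the protocol's list/read pair, not a real SDK API:

```python
# A toy in-memory store keyed by URI, standing in for whatever a real MCP
# server would front (documents, database schemas, crash reports, ...).
store = {
    "db://schema/users":  "CREATE TABLE users (id INT, name TEXT);",
    "db://schema/orders": "CREATE TABLE orders (id INT, user_id INT);",
}

def list_resources():
    """Enumerate what's available — the 'list a bunch of entities' half."""
    return [{"uri": uri} for uri in sorted(store)]

def read_resource(uri):
    """Fetch one resource's contents by URI — the 'read any of them' half."""
    return {"contents": [{"uri": uri, "text": store[uri]}]}
```

Because the listing is cheap and the reads are on demand, a client can index far more resource content (say, for a RAG index) than would ever fit in one context window — which is exactly the point made above about resources not being model-driven.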
I think this is so useful that it should be on the front page of the docs. It probably should be. I think that's a good suggestion.

Justin/David [00:23:19]: Do you want to do a PR for this? I'd love it.

swyx [00:23:21]: Yeah, I'll do a PR. I've done a PR for Mahesh's workshop in general, just because, you know.

SPEAKER_03 [00:23:28]: I approve. Yeah.

swyx [00:23:30]: Thank you. As a developer relations person, I always insist on having a map for people: here are all the main things you have to understand, and we'll spend the next two hours going through them. So one image that covers all of this is pretty helpful. And I like your emphasis on prompts. It's interesting: in the early days of ChatGPT and Claude — oh, you can't really follow my screen, can you? — in the early days of ChatGPT and all that, a lot of people started things like a "GitHub for prompts," prompt-manager libraries, and those never really took off. I think something like this is helpful and important. I've also seen the .prompt file from Humanloop, I think, as another way to standardize how people share prompts. But yeah, I agree there should be more innovation here, and I think people probably want some dynamism, which you allow for. And I like that you have multi-step — that was the main thing that made me think: these guys really get it. You've maybe published some research saying that sometimes, to get the model working the right way, you have to do multi-step prompting, or jailbreaking, to get it to behave the way you want. So prompts are not just single messages — they're sometimes chains of conversations.
Yeah.

Alessio [00:25:05]: Another question I had when looking at some server implementations: the server builders decide what data eventually gets returned, especially for tool calls. Take the Google Maps one, for example — if you look through it, they decide which attributes get returned, and the user cannot override that if one is missing. That has always been my gripe with SDKs in general: people build API-wrapper SDKs, then they miss one parameter — maybe it's new — and then I cannot use it. How do you think about that? How much should the user be able to intervene versus just letting the server designer do all the work?

Justin/David [00:25:41]: I think we probably bear responsibility for the Google Maps one, because that's one of the reference servers we released. In general, for tool results in particular, we've made the deliberate decision, at least thus far, for tool results to be not structured JSON data matching a schema, but text, or images — basically messages that you would pass into the LLM directly. And I guess the corollary of that is that you really should just return the whole jumble of data and trust the LLM to sort through it and extract the information it cares about, because that's exactly what LLMs excel at. We really try to think about how to use LLMs to their full potential, and not over-specify and end up with something that doesn't scale as LLMs themselves get better and better. So really, what should be happening in this example server — and again, pull requests welcome, that would be great —
It's like if all these result types were literally just passed through from the API it's calling — then changes to the API would pass through automatically.

Alessio [00:27:19]: These are hard design decisions — where to draw the line.

Justin/David [00:27:22]: I'll maybe throw AI under the bus a little bit here and just say that Claude wrote a lot of these example servers. No surprise at all. But I do think there's an interesting point in this: people at the moment mostly still just apply their normal software engineering API approaches here, and I think we still need a little bit more relearning of how to build for LLMs and trust them — particularly as they are getting significantly better year over year. Two years ago, maybe that approach would have been very valid. But nowadays, just throwing data at the thing that is really good at dealing with data is a good approach to this problem. It's unlearning 20, 30, 40 years of software engineering practices, to some degree. If I can add to that real quickly: one framing for MCP is thinking in terms of how crazily fast AI is advancing — it's exciting, it's also scary — and us thinking that the biggest bottleneck to the next wave of capabilities for models might actually be their ability to interact with the outside world: to read data from outside data sources, or take stateful actions. Working at Anthropic, we absolutely care about doing that safely, with the right control and alignment measures in place. But as AI gets better, people will want that. That will be key to becoming productive with AI: being able to connect models up to all those things.
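The pass-through approach discussed above — return the raw payload and let the model sift it, rather than cherry-picking fields like a classic SDK wrapper — can be sketched like this. The payload fields are invented, and the result shape (text content blocks plus an `isError` flag) follows the MCP tool-result convention described earlier:

```python
import json

def tool_result_from_api(raw: dict) -> dict:
    """Wrap a raw API payload as an unstructured MCP-style tool result."""
    return {
        "content": [{"type": "text", "text": json.dumps(raw)}],
        "isError": False,
    }

# A made-up geocoding response. Note the new field: nothing server-side had
# to change for it to reach the model.
result = tool_result_from_api({
    "formatted_address": "123 Main St, Springfield",
    "geometry": {"location": {"lat": 39.8, "lng": -89.6}},
    "some_new_field": "survives automatically",
})
```

The trade-off is token cost and noise; the bet the speakers describe is that models are (and will keep getting) good enough at sifting that this beats a hand-maintained field whitelist.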
So MCP is also sort of a bet on the future — on where this is all going and how important that will be.

Alessio [00:29:05]: Yeah. I would say any API attribute named formatted_-something should be gone, and we should just get the raw data for all of them. Why are you formatting? The model is definitely smart enough to format an address. I think that should go to the end user.

swyx [00:29:23]: Yeah. I think Alessio is about to move on to server implementation, but we're still talking about MCP design, goals, and intentions. I think we've indirectly identified some problems MCP is really trying to address, but I wanted to give you the spot to directly take on MCP versus OpenAPI, because this is obviously a top question. I wanted to recap everything we just talked about and give people a nice little segment — a definitive answer on MCP versus OpenAPI.

Justin/David [00:29:56]: Yeah. Fundamentally, OpenAPI specifications are a great tool, and I've used them a lot in developing APIs and consumers of APIs. But we think they're just too granular for what you want to do with LLMs. They don't express the higher-level, AI-specific concepts — this whole mental model we've talked about, with the primitives of MCP and thinking from the perspective of the application developer. You don't get any of that when you encode this information into an OpenAPI specification. So we believe that models will benefit more from purpose-designed tools, resources, prompts, and the other primitives than from "here's our REST API, go wild." There's another aspect — I'm not an OpenAPI expert, so everything here might not be perfectly accurate.
But there's been — and we can talk about this a bit more later — a deliberate design decision to make the protocol somewhat stateful, because we really believe that AI applications and AI interactions will become inherently more stateful. The current need for statelessness is more a temporary point in time — to some degree it will always exist, but I think statefulness will become increasingly popular, particularly when you think about modalities beyond pure text-based interactions with models: video, audio, whatever other modalities are already out there. So having something a bit more stateful is just inherently useful in this interaction pattern. I also think OpenAPI and MCP are actually more complementary than people want to make them out to be. People look for these A-versus-B fights — have all the developers of these things go in a room and fist-fight it out — but that's rarely what's going on. They're very complementary, and each has its space where it's very, very strong. Just use the best tool for the job: if you want a rich interaction between an AI application and an integration, MCP is probably the right choice. And if you want an API spec somewhere that a model can easily read and interpret, and that works for you, then OpenAPI is the way to go. One more thing to add: we've already seen people in the community — this happened very early — build bridges between the two.
So if what you have is an OpenAPI specification and no one's building a custom MCP server for it, there are already translators that will take that and re-expose it as MCP. And you could do the other direction too. Awesome.

Alessio [00:32:43]: Yeah. I think there's the other side of MCP that people don't talk about as much, because it doesn't go viral, which is building the servers. Everybody does the tweets about "connect Claude Desktop to X MCP server — it's amazing." How would you suggest people start building servers? The spec is so broad — there are so many things you can do — that it's almost like: how do you draw the line between being very descriptive as a server developer versus, going back to our discussion before, just handing over the data and letting the model manipulate it later? Do you have any suggestions for people?

Justin/David [00:33:16]: I have a few suggestions. One of the best things about MCP — something we got right very early — is that it's just very, very easy to build something simple. It might not be amazing, but it's good enough, because models are very good, and you can get it going within half an hour, you know? So the best part is: pick the language you love the most, pick the SDK for it if there is one, and go build a tool for the thing that matters to you personally — the thing you want to see the model interact with. Build the server, throw the tool in, don't even worry too much about the description just yet — write a little description as you think about it — and then just give it to the model: throw it, over the stdio transport, into an application that you like, and watch it do things.
And I think that's part of the magic, the empowerment for developers: getting so quickly to something where the model does something you care about. That really gets you going, gets you into this flow of: okay, I see this thing can do cool things. Now I can expand on it, and really think about which tools I want, which resources and prompts I want. Okay, now that I have that, what do my evals look like for how I want this to go? How do I optimize my prompts against those evals? There's infinite depth you can go into. But just start as simple as possible: build a server in half an hour, in the language of your choice, and watch the model interact with the things that matter to you. That's where the fun is. A lot of what makes MCP great is that it adds a lot of fun to the development process; you get models doing things quickly. I'm also quite partial, again, to using AI to help me do the coding. Even during the initial development process, we realized it was quite easy to do what David suggested: pick the language you care about, pick the SDK, and then literally drop the whole SDK code into an LLM's context window and say, okay, now that you know MCP, build me a server that does this, this, and this. The results, I think, are astounding. It might not be perfect around every single corner, and you can refine it over time, but it's a great way to one-shot something that basically does what you want, and then iterate from there.
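The "server in half an hour over stdio" idea can be sketched without any SDK at all. This is a hand-rolled illustration of the shape of an MCP-style stdio transport, not the real protocol: the actual spec includes initialization, capabilities, and a richer message format, and the `shout` tool is a made-up example.

```python
# Sketch of an MCP-style stdio tool server: JSON-RPC-shaped messages in on
# stdin, one response per line on stdout. The real SDKs implement the full
# protocol; this only illustrates tool listing and calling.
import json
import sys

TOOLS = {
    "shout": {
        "description": "Upper-case the given text.",
        "fn": lambda args: args["text"].upper(),
    },
}

def handle(request: dict) -> dict:
    """Dispatch a single request dict to a response dict."""
    method = request["method"]
    if method == "tools/list":
        result = {"tools": [
            {"name": name, "description": tool["description"]}
            for name, tool in TOOLS.items()
        ]}
    elif method == "tools/call":
        params = request["params"]
        result = {"content": TOOLS[params["name"]]["fn"](params["arguments"])}
    else:
        result = {"error": f"unknown method: {method}"}
    return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}

if __name__ == "__main__":
    # One message per line in, one response per line out.
    for line in sys.stdin:
        print(json.dumps(handle(json.loads(line))), flush=True)
```

In practice you would use an official SDK rather than hand-rolling this; the point is how little machinery a tool server actually needs.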
And like David said, there has been a big emphasis from the beginning on making servers as easy and simple to build as possible, which certainly helps with LLMs doing it too. We often find that getting started is 100 or 200 lines of code. It's really quite easy. Yeah. And if you don't have an SDK, give the subset of the spec that you care about to the model, along with another language's SDK, and have it build you one. It usually works for that subset. Building a full SDK is a different story, but getting a model to tool call in Haskell, or whatever language you like, is probably pretty straightforward.

swyx [00:36:32]: Yeah. Sorry.

Alessio [00:36:34]: No, I was going to say, I co-hosted a hackathon at AGI House on personal agents, and one of the personal agents somebody built was an MCP server builder agent: you give it the URL of an API spec and it builds an MCP server for you. Do you see that as where we are today, most servers just being a layer on top of an existing API without too much opinion? And do you think that's how it's going to be going forward, AI-generated wrappers exposing APIs that already exist? Or are we going to see net new MCP experiences that you couldn't do before?

Justin/David [00:37:10]: I think both. There will always be value in: I have my data over here, and I want some connector to bring it into my application over there. That use case will certainly remain. This goes back to the point that a lot of things today default to tool use when some of the other primitives would be more appropriate over time. So it could still be that connector.
It could still be that sort of adapter layer, but one that adapts onto different primitives, which is one way to add more value. But I also think there's plenty of opportunity for MCP servers that do interesting things in and of themselves and aren't just adapters. Some of the earliest examples were the memory MCP server, which gives the LLM the ability to remember things across conversations, or, someone who's a close coworker built the... I shouldn't have said that, not a close coworker. Someone built the sequential thinking MCP server, which gives a model the ability to really think step by step and get better at its reasoning. It isn't integrating with anything external; it's just providing a way of thinking for a model.

Justin/David [00:38:27]: Either way, though, I think AI authorship of the servers is totally possible. I've had a lot of success just prompting: hey, I want to build an MCP server that does this thing. Even if that thing isn't adapting some other API but doing something completely original, it's usually able to figure that out too. To add to that, I do think a good part of what MCP servers will be is these API wrappers, to some degree, and that's going to be valid because it works and gets you very, very far. But we're just very early in exploring what you can do. As client support for certain primitives gets better (we can talk about sampling, my favorite topic and greatest frustration at the same time), you can very easily see way, way richer experiences, and we have built them internally as prototypes.
And you see some of that in the community already. There are things like a "summarize my favorite subreddits for the morning" MCP server that nobody has built yet, but it's very easy to envision, and the protocol can totally do it. These are slightly richer experiences. As people move away from "I'm just in this new world where I can hook the things that matter to me up to the LLM" toward wanting a real workflow, a richer experience exposed to the model, you will see these things pop up. But there's a bit of a chicken-and-egg problem at the moment between what clients support and what server authors want to do. Yeah.

Alessio [00:40:10]: That was kind of my next question, on composability. How do you see that? Do you have plans for it? What's the "import" of MCPs, so to speak, into another MCP? If I want to build the subreddit one, there's probably going to be the Reddit API MCP and the summarization MCP. How do I build a super-MCP?

Justin/David [00:40:33]: Yeah, this is an interesting topic, and there are two aspects to it. One aspect is: how can I build something agentic that requires an LLM call in some form or fashion, like for summarization, while staying model-independent? That's where part of this bidirectionality comes in, this richer experience where we have a facility for server authors to ask the client, who owns the LLM interaction (think of Cursor, which runs the loop with the LLM for you), for a completion.
The server can basically have the client summarize something for it and return the result. Which model does the summarization depends on which one you have selected in Cursor, not on anything the server author brings. The author doesn't bring an SDK or an API key; it's completely model-independent. That's one aspect. The second aspect to building richer systems with MCP is that you can easily envision an MCP server that serves something to Cursor or Windsurf or Claude Desktop, but at the same time is also an MCP client and can itself use MCP servers to create a rich experience. Now you have a recursive property, which we quite carefully tried to retain in the design principles; you see it all over the place, in authorization and other aspects of the spec. So you can think: okay, I have this little bundle that is both a server and a client, and I can chain these and build graphs, DAGs, out of MCP servers that richly interact with each other. An agentic MCP server can use the whole ecosystem of MCP servers available to it. That's a really cool thing you can do, people have experimented with it, and I think you'll hopefully see more of it, particularly when you think about auto-selecting and auto-installing; there's a bunch of things there that make for a really fun experience. Practically, there are some niceties we still need to add to the SDKs to make this really simple and easy to execute on: this kind of recursive MCP server that is also a client, or multiplexing together the behaviors of multiple MCP servers into one host, as we call it. These are things we definitely want to add.
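The sampling round-trip described here, a server asking the client's model for a completion while owning no model itself, can be sketched in plain Python. The class and method names below are illustrative stand-ins, not the SDK's actual API, and the "model" is a trivial function so the sketch runs anywhere.

```python
# Sketch of MCP "sampling": the server has no SDK, no API key, and no
# opinion about which model runs. It sends a completion request back to
# the client, which fulfils it with whatever model the user selected.
from typing import Callable

class Client:
    """The host application: owns the model loop."""
    def __init__(self, model: Callable[[str], str]):
        self.model = model  # whichever model the user has selected

    def create_message(self, prompt: str) -> str:
        # In the real protocol this is a sampling request travelling
        # from server back to client over the existing connection.
        return self.model(prompt)

class SummarizeServer:
    """A server-side tool that needs an LLM but stays model-independent."""
    def __init__(self, client: Client):
        self.client = client

    def summarize_tool(self, text: str) -> str:
        return self.client.create_message(f"Summarize: {text}")

# Stand-in "model" so the sketch runs without any provider.
client = Client(model=lambda prompt: prompt[:20] + "...")
server = SummarizeServer(client)
summary = server.summarize_tool("a very long document body")
```

Swapping the lambda for a real model changes nothing on the server side, which is the point: the server author never decides which model summarizes.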
We haven't been able to yet, but I think that would go some way toward showcasing these things that we know are already possible but not taken up that much yet. Okay.

swyx [00:43:08]: This is very exciting, and I'm sure a lot of people will get ideas and inspiration from it. Is an MCP server that is also a client an agent?

Justin/David [00:43:19]: What's an agent? There are a lot of definitions of agents.

swyx [00:43:22]: Because in some ways you're requesting something and it's going off and doing stuff that you don't necessarily know about. There's a layer of abstraction between you and the ultimate raw source of the data. You could dispute that. I just don't know if you have a hot take on agents.

Justin/David [00:43:35]: I do think you can build an agent that way. For me, you need to define the difference between an MCP server plus client that is just a proxy, versus an agent. I think there's a difference, and that difference might be in, for example, using a sampling loop to create a richer experience, to have a model call tools while inside that MCP server through those clients. Then you have an actual agent. I do think it's very simple to build agents that way. Yeah. I think there are maybe a few paths here. It definitely feels like there's some relationship between MCP and agents. One possible version is: maybe MCP is a great way to represent agents, and maybe there are some features or specific things missing that would make the ergonomics better, and we should make those part of MCP. That's one possibility. Another is: maybe MCP makes sense as a foundational communication layer for agents to compose with other agents, or something like that. Or there could be other possibilities entirely.
Maybe MCP should specialize and narrowly focus on the AI application side, and not as much on the agent side. I think it's a very live question, and there are trade-offs in every direction. Going back to the analogy of the God box: one thing we have to be very careful about in designing a protocol and curating or shepherding an ecosystem is trying to do too much. You don't want a protocol that tries to do absolutely everything under the sun, because then it'll be bad at everything too. So the key question, which is still unresolved, is: to what degree do agents naturally fit into this existing model and paradigm, and to what degree is it basically orthogonal?

swyx [00:45:17]: I think once you enable two-way communication, and once you enable client and server to be the same, with delegation of work to another MCP server, it's definitely more agentic than not. But I appreciate that you keep simplicity in mind and aren't trying to solve every problem under the sun. Cool, I'm happy to move on. I'm going to double-click on a couple of things I marked, because they coincide with things we wanted to ask you anyway. The first one is simple: how many MCP things can one implementation support? This is the wide-versus-deep question, and it's directly relevant to the nesting of MCPs we just talked about. In April 2024, when Claude was launching one of its first million-token context examples, they said you can support 250 tools. And in a lot of cases, you can't do that. To me, that's "wide" in the sense that you don't have tools that call tools; you just have the model and a flat hierarchy of tools. But then obviously you get tool confusion.
That's going to happen when tools are adjacent: you call the wrong tool, you get bad results, right? Do you have a recommendation for a maximum number of MCP servers enabled at any given time?

Justin/David [00:46:32]: To be honest, there isn't one answer to this, because to some extent it depends on the model you're using, and to some extent it depends on how well the tools are named and described for the model, to avoid confusion. The dream is certainly that you just furnish all this information to the LLM and it can make sense of everything. This goes back to the future we envision with MCP: all this information is brought to the model, and it decides what to do with it. But today the practicalities might mean that in your client application, the AI application, you do some filtering over the tool set. Maybe you run a faster, smaller LLM to filter down to what's most relevant, and only pass those tools to the bigger model. Or you could use an MCP server that is a proxy to other MCP servers and does some filtering at that level. I think hundreds, as you referenced, is still a fairly safe bet, at least for Claude; I can't speak to other models. And over time we should just expect this to get better, so we're wary of constraining anything and preventing that. Yeah, and obviously it highly depends on the overlap of the descriptions: if you have very separate servers that do very separate things, and the tools have clear, unique names and clear, well-written descriptions, your mileage will be higher than if you have a GitLab and a GitHub server in your context at the same time.
Then the overlap is quite significant, because they look very similar to the model, and confusion becomes easier. There are different considerations too, depending on the AI application. If you're trying to build something very agentic, maybe you want to minimize how often you go back to the user with a question, or minimize the amount of configurability in your interface. But if you're building other applications, an IDE or a chat application or whatever, I think it's totally reasonable to have affordances that let the user say: at this moment I want this feature set, and at this other moment I want this different feature set, and not treat the full list as always on all the time. Yeah.

swyx [00:48:42]: That's where I think the concepts of resources and tools start to blend a little, right? Because now you're saying you want some degree of user control, or application control, and other times you want the model to control it. So now we're choosing subsets of tools. I don't know.

Justin/David [00:49:00]: Yeah, I think it's a fair concern. The way I think about it, and this is a core MCP design principle, is that at the end of the day the client application, and by extension the user, should ultimately be in full control of absolutely everything that's happening via MCP. When we say that tools are model-controlled, what we really mean is that tools should only be invoked by the model. There really shouldn't be an application or user interaction where, as a user, I say: now use this tool. Occasionally you might do that for prompting reasons, but I don't think it should be a UI affordance.
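The "run a faster, smaller LLM to filter the tool set" idea mentioned a moment ago can be sketched generically. Here the cheap scorer is a stand-in word-overlap function; in practice it would be a small model or an embedding similarity, and all names below are illustrative.

```python
# Sketch of the "filter first, then call the big model" pattern: a cheap
# relevance pass trims a large tool list before the real request.
def filter_tools(tools, query, score, keep=5):
    """Keep the `keep` tools the cheap scorer rates most relevant."""
    ranked = sorted(tools, key=lambda t: score(t["description"], query),
                    reverse=True)
    return ranked[:keep]

def overlap(description: str, query: str) -> int:
    # Stand-in scorer: word overlap between query and tool description.
    return len(set(description.lower().split()) & set(query.lower().split()))

tools = [
    {"name": "github_create_issue", "description": "Create an issue on GitHub"},
    {"name": "gitlab_create_issue", "description": "Create an issue on GitLab"},
    {"name": "weather_lookup", "description": "Look up the weather forecast"},
]
selected = filter_tools(tools, "open a GitHub issue", overlap, keep=2)
```

Note how the GitLab/GitHub overlap the speakers mention shows up directly: with near-identical descriptions, only the distinguishing words ("GitHub" vs "GitLab") separate the two tools for the scorer.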
But the client application or the user deciding to filter out things that MCP servers are offering is totally reasonable, or even transforming them: you could imagine a client application that takes tool descriptions from an MCP server and enriches them, makes them better. We really want client applications to have full control in the MCP paradigm. In addition, though, one thing that's very early in my thinking is that there might be an addition to the protocol where you give the server author the ability to logically group certain primitives together, to inform that, because they might know some of these logical groupings better; that could encompass prompts, resources, and tools at the same time. Well, we can have a design discussion on that. Personally, my take would be that those should be separate MCP servers, and the user should be able to compose them together. But we can figure it out.

Alessio [00:50:31]: Is there going to be an MCP standard library, so to speak: hey, these are the canonical servers, do not build these, we'll take care of them, and they can be the building blocks people compose? Or do you expect people to just rebuild their own MCP servers for a lot of things?

Justin/David [00:50:49]: I think we will not be prescriptive in that sense. There's a lot of power in... well, let me rephrase. I have a long history in open source, and I feel the bazaar approach to this problem is somewhat useful, where the best and most interesting option wins. I don't think we want to be very prescriptive.
I definitely foresee, and this already exists, that there will be 25 GitHub servers and 25 Postgres servers and whatnot. That's all cool, and that's good, and they each add something in their own way. But eventually, over months or years, the ecosystem will converge on a set of very widely used ones; I don't know if you'd call it winning, but those will be the most used. And that's completely fine, because being prescriptive about this isn't of any use. Of course there will also be MCP servers, and you see them already, driven by companies for their products, and those will probably be the canonical implementations. If you want to work with Cloudflare Workers through an MCP server, you'll probably want the one developed by Cloudflare. Yeah. There's maybe a related thing worth thinking about here, and we don't have any solutions completely ready to go: the question of trust, or vetting is maybe a better word. How do you determine which MCP servers are the good, safe ones to use? Regardless of how many implementations of GitHub MCP servers there are, which could be totally fine, you want to make sure you're not using ones that are really sus, right? So we're trying to think about how to endow reputation: if, hypothetically, Anthropic says "we've vetted this, it meets our criteria for secure coding," how can that be reflected in this open model where everyone in the ecosystem benefits? We don't really know the answer yet, but it's very much top of mind.

Alessio [00:52:49]: I think that's a great design choice of MCP: it's language-agnostic.
Already today, to my knowledge, there's no official Anthropic Ruby SDK, nor an OpenAI one, and Alex Rudall does a great job building those. But with MCP you don't have to translate an SDK into all these languages. You do one interface and bless that interface as Anthropic. So yeah, that was nice.

swyx [00:53:18]: I have a quick answer to this. Obviously five or six different registries have already popped up, and you guys have announced your official registry that's underway. A registry is very tempted to offer download counts, likes, reviews, some kind of trust signal. I think that's brittle: no matter what social proof you offer, the next update can compromise a trusted package, and that's the one that does the most damage, right? Setting up a trust system creates the very damage that comes from abusing it. So I actually want to encourage people to try MCP Inspector, because all you have to do is look at the traffic. And I think that goes for a lot of security issues.

Justin/David [00:54:03]: Yeah, absolutely. Cool. That's the very classic supply chain problem that all registries effectively have, and there are different approaches to it. You can take the Apple approach: vet things with an army of automated systems and review teams, and effectively build an app store. That's one approach, and it works in a certain set of ways. But I don't think it works in an open source ecosystem, where you always end up with a registry approach, similar to npm packages and PyPI.

swyx [00:54:36]: And they all inherently have these supply chain attack problems, right?
Yeah, yeah, totally. Quick time check: I think we're going to go for another 20 or 25 minutes, is that okay for you guys? Okay, awesome. Cool. So I'm going to leave the future stuff, the registry, the stateless servers and remote servers, all of that, to the end, and double-click a little more on the launch: the core servers that are part of the official repo. Some of them are special ones, like the ones we already talked about, so let me pull them up. For example, you mentioned memory and sequential thinking. I really encourage people to look at these, what I call special servers: they're not normal servers in the sense of wrapping some API so it's easier to interact with than the raw API. I'll highlight the memory one first, because there are a few memory startups, but actually you don't need them if you just use this one. It's also about 200 lines of code; it's super simple. Obviously, if you need to scale it up, you should probably use something more battle-tested, but if you're just introducing memory, I think it's a really good implementation. I don't know if there are special stories you want to highlight with some of these.

Justin/David [00:56:00]: No, I don't think there are special stories. A lot of these, not all of them, originated from the hackathon I mentioned before, where folks got excited about the idea of MCP. People inside Anthropic who wanted memory, or wanted to play around with the idea, could quickly prototype something using MCP in a way that wasn't possible before.
You don't have to become the end-to-end expert; you don't have to have access to some private, proprietary code base. You can just extend Claude with this memory capability. So that's how a lot of these came about, and then also just thinking about what breadth of functionality we wanted to demonstrate at launch.

swyx [00:56:47]: Totally. And I think that's partially why your launch was successful: you launched with a sufficiently spanning set of examples, and people could copy, paste, and expand from there. I would also highlight