Have you ever given up trying to code on the go? I'll admit that coding on my Android tablet just wasn't working for me, and the reason was simple: laziness and the lack of a coherent environment. In today's episode I explain how I solved the problem at its root by installing Code Server on a remote server.

Along the way we explore the challenges of maintaining multiple development environments and why that fragmentation kills your creativity. I walk through my technical setup step by step: from building a custom Docker image to integrating modern Rust-based tools (such as Bat and LSD) that improve the terminal experience.

What you'll learn in this episode:
Why a development server beats local installs on tablets.
How to configure Docker Compose to deploy Code Server with real persistence.
Advanced security: using Traefik, Pocket ID and geoblocking to protect your code.
Configuration tricks for VS Code in the browser: key mapping, avoiding the conflict with the Escape key, and using the JetBrains Mono font.
Maximum productivity with Vim modes integrated into the web workflow.
How to turn Code Server into a PWA to remove browser distractions on Android.

It's not just about technology; it's about removing the friction that keeps us from making progress on our projects. If you want to turn any device with a browser into your main workstation, don't miss this episode.

Episode timeline:
00:00:00 The failure of coding on a tablet (and why)
00:01:43 The definitive solution: Code Server
00:02:12 The problem of fragmented environments
00:03:53 My custom Docker image for Code Server
00:05:04 Essential Rust tools (Bat, LSD, SD)
00:06:23 Configuring Rust and development tooling
00:07:05 Persistence and Docker Compose
00:08:06 Security: Traefik, Pocket ID and geoblocking
00:10:03 Optimizing VS Code for the browser
00:11:13 Syncing and persisting extensions
00:12:43 Aesthetics and typography (Ayu Dark and JetBrains Mono)
00:13:59 The power of Vim inside Code Server
00:15:51 Using Code Server as a PWA on Android
00:17:04 A physical keyboard: the mandatory accessory
00:18:50 Conclusions and the future of remote development

Remember that you can find all the notes, the repository and the links mentioned at atareao.es. If you enjoy the content, a rating on Spotify or Apple Podcasts helps a lot in spreading the word about Linux and open source.
More information and links in the episode notes
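The episode uses the author's own custom image and a Traefik plus Pocket ID front end, which are not reproduced here. As a rough orientation only, a minimal Docker Compose sketch for a persistent Code Server instance could look like this; the image, ports, volume paths and the password variable are illustrative assumptions, not the configuration from the show:

```yaml
services:
  code-server:
    # codercom/code-server is the upstream image; the episode builds a custom one on top
    image: codercom/code-server:latest
    container_name: code-server
    environment:
      - PASSWORD=change-me            # placeholder; the show fronts access with Traefik + Pocket ID instead
    volumes:
      - ./config:/home/coder/.config  # code-server settings
      - ./local:/home/coder/.local    # extensions and user data
      - ./projects:/home/coder/project # your source code
    ports:
      - "8443:8080"                   # code-server listens on 8080 inside the container
    restart: unless-stopped
```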
Emmanuel and Guillaume discuss a range of programming topics: file systems in Java, data-oriented programming, the pitfalls of JPA with Kotlin, and the new features in Quarkus. They also explore some wilder subjects, such as building datacenters in space, plus a fair amount of architecture. Recorded February 13, 2026. Download the episode: LesCastCodeurs-Episode-337.mp3, or watch the video on YouTube.

News

Languages

How to implement a file system in Java https://foojay.io/today/bootstrapping-a-java-file-system/
Build a custom Java file system with NIO.2 for a variety of uses (VCS, archives, remote systems). Java's evolution: java.io.File (1.0) -> NIO (1.4) -> NIO.2 (1.7), which allows customization through FileSystem. Designing up front is recommended; the Java API is POSIX-oriented. Key components to consider: URI design (a unique scheme, the path), tree management (database, metadata, efficiency), binary storage (location, encryption, versions). The minimum to get started (4 pieces): implement Path (represents a file/directory), extend FileSystem (an instance of the system), extend FileSystemProvider (the engine, registered by scheme), and register the FileSystemProvider via META-INF/services. Next steps: the database layer (the tree), basic file/directory operations, storage, tests. A long and demanding process, but a rewarding one.
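The article's custom provider is too large to sketch here; purely to show the NIO.2 API surface it plugs into, here is how client code talks to any FileSystemProvider, using the JDK's built-in jar/zip provider (the archive path is an arbitrary example):

```java
import java.net.URI;
import java.nio.file.*;
import java.util.Map;

public class ZipFsDemo {
    public static void main(String[] args) throws Exception {
        // The "jar" scheme resolves to the JDK's zipfs FileSystemProvider,
        // exactly the way a custom provider would be looked up by its own scheme.
        URI uri = URI.create("jar:file:///tmp/demo.zip");
        try (FileSystem zipfs = FileSystems.newFileSystem(uri, Map.of("create", "true"))) {
            Path inside = zipfs.getPath("/hello.txt");      // Path implementation supplied by the provider
            Files.writeString(inside, "hello from NIO.2");  // the regular Files API works unchanged
            System.out.println(Files.readString(inside));
        }
    }
}
```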
An article by Brian Goetz on the future of data-oriented programming in Java https://openjdk.org/projects/amber/design-notes/beyond-records
Java's Amber project introduces "carrier classes", an evolution of records that allows more flexibility while keeping the benefits of pattern matching and reconstruction. Records impose strict constraints (immutability, exact representation of the state) that limit their use for classes with mutable or derived state. Carrier classes let you declare a complete, canonical state description without forcing the internal representation to match the public API exactly. A "component" modifier on fields lets the compiler automatically derive accessors for the components aligned with the state description. Compact constructors are generalized to carrier classes, automatically generating the initialization of component fields. Carrier classes support deconstruction via pattern matching, just like records, making them usable in instanceof and switch. Carrier interfaces make it possible to define a state description on an interface, requiring implementations to provide the corresponding accessors. Extension between carrier classes is possible, with automatic derivation of super() calls when the parent's components are subsumed by the child's. Records become a special case of carrier classes with extra constraints (final, extends Record, component fields mandatorily private and final). Compatible evolution of records is improved by allowing components to be appended at the end of the list and partial deconstruction by prefix.
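Carrier classes are still a design note, so there is nothing to compile yet; the record pattern matching they generalize already ships (Java 21+), and a small sketch of it makes the "deconstruction" vocabulary above concrete (the types and values are invented for illustration):

```java
public class RecordPatterns {
    // A record declares its canonical "state description" in the header.
    record Point(int x, int y) {}
    record Line(Point from, Point to) {}

    static String describe(Object o) {
        // Deconstruction in switch: the same shape carrier classes are meant to keep
        // for classes whose internal representation differs from their public API.
        return switch (o) {
            case Line(Point(var x1, var y1), Point(var x2, var y2)) ->
                    "line from (%d,%d) to (%d,%d)".formatted(x1, y1, x2, y2);
            case Point(var x, var y) -> "point (%d,%d)".formatted(x, y);
            default -> "something else";
        };
    }

    public static void main(String[] args) {
        System.out.println(describe(new Line(new Point(0, 0), new Point(3, 4))));
    }
}
```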
How to avoid common pitfalls with JPA and Kotlin - https://blog.jetbrains.com/idea/2026/01/how-to-avoid-common-pitfalls-with-jpa-and-kotlin/
JPA is a Java specification for object-relational persistence, but using it with Kotlin exposes incompatibilities due to the design differences between the two languages. Kotlin classes are final by default, which prevents JPA from creating proxies for lazy loading and transactional operations. The kotlin-jpa plugin automatically generates no-arg constructors and makes classes open, resolving those compatibility issues. Kotlin data classes are a poor fit for JPA entities because they generate equals/hashCode based on all fields, causing problems with lazy relations. Using lateinit var for relations can throw exceptions if properties are accessed before JPA has initialized them. Kotlin's non-nullable types can conflict with JPA's behavior of initializing entities with temporary null values. Accessing the backing field directly in custom getters/setters can bypass JPA's logic and break lazy loading. IntelliJ IDEA 2024.3 introduces inspections that detect these problems automatically and offers quick-fixes. The IDE flags final entities, inappropriate data classes, constructor problems and incorrect use of lateinit. These new features help developers avoid the subtle bugs that come with using JPA from Kotlin.

Libraries

A guide to MapStruct's @IterableMapping - https://www.baeldung.com/java-mapstruct-iterablemapping
MapStruct is a Java library that generates mappers between beans automatically; the @IterableMapping annotation lets you fine-tune how collections are mapped. The dateFormat attribute formats dates automatically when mapping lists, without writing a manual loop. The qualifiedByName attribute specifies which custom method to apply to each element of the collection being mapped. Example use case: filtering sensitive data such as passwords by mapping only certain fields through a dedicated method. The nullValueMappingStrategy attribute controls what happens when the source collection is null (return null or an empty collection). The annotation works for all Java collection types (List, Set, etc.) and generates the necessary loop code. Number formats can be applied with numberFormat to convert numbers to strings in a specific format. MapStruct generates the full mapper implementation at compile time, eliminating boilerplate. The annotation can be combined with @Named to create reusable, named mapping methods. Collection mapping supports complex type conversions beyond simple primitive conversions.
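As a rough illustration of the attributes mentioned above, a mapper might combine @IterableMapping with a @Named element method like this; the bean types and field names are invented, and MapStruct's annotation processor generates the implementation at build time:

```java
import java.util.List;
import org.mapstruct.IterableMapping;
import org.mapstruct.Mapper;
import org.mapstruct.Mapping;
import org.mapstruct.Named;

@Mapper
public interface UserMapper {

    // Hypothetical beans, kept minimal; MapStruct can read and write public fields.
    class User    { public String name; public String password; }
    class UserDto { public String name; public String password; }

    // Element-level mapping, referenced by name from the collection mapping below.
    @Named("withoutPassword")
    @Mapping(target = "password", ignore = true)   // never copy the sensitive field
    UserDto toDto(User user);

    // MapStruct generates the loop and applies "withoutPassword" to every element.
    @IterableMapping(qualifiedByName = "withoutPassword")
    List<UserDto> toDtos(List<User> users);
}
```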
Accessing Samba files from Java with JCIFS - https://www.baeldung.com/java-samba-jcifs
JCIFS is a Java library for accessing Samba/SMB shares without mounting a network drive, with support for the SMB3 protocol; a thought for everyone stuck connecting to so-called legacy systems. Setup requires a CIFS context (CIFSContext) and SmbFile objects to represent the remote resources. Authentication goes through NtlmPasswordAuthenticator with a domain, user name and password. The library can list files and folders with listFiles() and check their properties (size, modification date). Files are created with createNewFile() and directories with mkdir(), or mkdirs() to create a whole tree. Deletion goes through delete(), which can walk and recursively remove entire trees. Files can be copied between Samba shares with copyTo(), but not from the local file system. To copy from the local system you use the SmbFileInputStream and SmbFileOutputStream streams. Operations can target different Samba servers and different shares (anonymous or password-protected). The library fits into try-with-resources blocks for automatic resource management.
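A minimal sketch of that flow, assuming the jcifs-ng flavor of the library, an SMB server reachable at smb://fileserver/share/ and throwaway credentials (host, share and account are placeholders):

```java
import jcifs.CIFSContext;
import jcifs.context.SingletonContext;
import jcifs.smb.NtlmPasswordAuthenticator;
import jcifs.smb.SmbFile;

public class SmbListing {
    public static void main(String[] args) throws Exception {
        // Base context plus NTLM credentials (domain, user, password are placeholders).
        CIFSContext ctx = SingletonContext.getInstance()
                .withCredentials(new NtlmPasswordAuthenticator("WORKGROUP", "user", "secret"));

        // SmbFile represents a remote resource; the trailing slash marks a directory.
        try (SmbFile share = new SmbFile("smb://fileserver/share/", ctx)) {
            for (SmbFile f : share.listFiles()) {
                System.out.printf("%s  %d bytes%n", f.getName(), f.length());
            }
        }
    }
}
```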
Quarkus 3.31 - full Java 25 support, a new Maven packaging and Panache Next - https://quarkus.io/blog/quarkus-3-31-released/
Full Java 25 support with runtime and native images. New Maven packaging of type quarkus with an optimized lifecycle for faster builds; here is a full article with more detail: https://quarkus.io/blog/building-large-applications/. Introduction of Panache Next, a new generation with a better developer experience and a unified ORM/Reactive API. Upgrades to Hibernate ORM 7.2, Reactive 3.2, Search 8.2. Hibernate Spatial support for geospatial data. Move to Testcontainers 2 and JUnit 6. Security annotations supported on Jakarta Data repositories. Encryption of OIDC tokens for custom TokenStateManager implementations. OAuth 2.0 Pushed Authorization Requests support in the OIDC extension. Maven 3.9 is now the minimum required for Quarkus projects.

A2A Java SDK 1.0.0.Alpha1 - alignment with version 1.0 of the Agent2Agent protocol - https://quarkus.io/blog/a2a-java-sdk-1-0-0-alpha1/
The A2A Java SDK implements the Agent2Agent protocol, which standardizes communication between AI agents so they can discover capabilities, delegate tasks and collaborate. Moving to version 1.0 of the specification marks the transition from experimental to production-ready, with deliberate breaking changes. The spec module has been fully modernized, with Java records everywhere replacing the previous mix of classes and records for more consistency. Protocol Buffers are adopted as the source of truth, with MapStruct mappers for conversion and Gson for JSON-RPC. Builders now use static factory methods instead of public constructors, following modern Java best practices. Three Maven BOMs are introduced to simplify dependency management for the core SDK, the extensions and the reference implementations. The Quarkus AgentCard evolves with a supportedInterfaces list replacing url and preferredTransport, for more flexibility in declaring protocols. Pagination support has been added for ListTasks and the push notification configuration endpoints, with appropriate Result wrappers. A pluggable A2AHttpClient interface allows custom HTTP implementations, with a Vert.x implementation provided. Work continues toward full compliance with the 1.0 TCK, which is being developed in parallel with the finalization of the specification.

Why Quarkus eventually "clicks": the 10 questions Java developers ask - https://www.the-main-thread.com/p/quarkus-java-developers-top-questions-2025
An article that surfaces and answers the questions of people who have used Quarkus for 4-6 months: the non-beginner questions. Quarkus is a modern, cloud-optimized Java framework offering ultra-fast startup times and a reduced memory footprint. Why does Quarkus start so fast? The framework does the heavy lifting at build time (scanning, indexing, bytecode generation) rather than at runtime. When should you use the reactive mode rather than the imperative one? Reactive makes sense for highly concurrent, I/O-dominated workloads; imperative stays simpler everywhere else. What is the difference between Dev Services and Testcontainers? Dev Services uses Testcontainers while managing lifecycle, ports and configuration automatically, without ceremony. How does Quarkus's DI differ from Spring's? CDI is a standard based on type safety and build-time discovery, different from Spring's framework approach. How do you manage configuration across environments? Quarkus scales from local development up to Kubernetes with profiles, multiple files and external configuration. How do you test Quarkus applications properly? @QuarkusTest starts the application once for the whole test suite, which changes the mental model compared to Spring Boot. What does Panache really do behind the scenes? Panache is JPA with strong opinions and its own defaults, wrapping Hibernate in an Active Record style. Should you use native images, and when? Native images shine for serverless and edge thanks to fast startup and a small memory footprint, but not every app benefits. How does Quarkus integrate with Kubernetes? The framework automatically generates Kubernetes resources and handles health checks and metrics as if it were designed natively for that ecosystem. How do you bring AI into a Quarkus application? LangChain4j lets you add embeddings, retrieval, guardrails and observability directly in Java, without going through Python.
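To picture the "Active Record style" answer above, here is roughly what a Panache entity looks like in Quarkus; the entity and query are invented for illustration, and the id plus the static persistence helpers come from PanacheEntity:

```java
import java.util.List;
import jakarta.persistence.Entity;
import io.quarkus.hibernate.orm.panache.PanacheEntity;

@Entity
public class Person extends PanacheEntity {
    public String name;   // Panache lets you use public fields and rewrites access to them

    // Active Record style: finders live on the entity itself.
    public static List<Person> findByName(String name) {
        return list("name", name);
    }
}

// Typical use inside a @Transactional method:
//   Person p = new Person();
//   p.name = "Ada";
//   p.persist();
//   List<Person> adas = Person.findByName("Ada");
```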
Infrastructure

Alternatives to MinIO https://rmoff.net/2026/01/14/alternatives-to-minio-for-single-node-local-s3/
MinIO dropped single-node support at the end of 2025 for commercial reasons, breaking many demos and CI/CD pipelines that used it to emulate S3 locally. The author is looking for a simple replacement with a Docker image, S3 compatibility, an open source license, easy single-node deployment and an active community. S3Proxy is very lightweight and easy to configure, probably the simplest option, but it rests on a single contributor. RustFS is easy to use and includes a GUI, but it is a very young project in alpha with a recent major security flaw. SeaweedFS has been around since 2012 with S3 support since 2018, is relatively easy to configure and has a basic web interface. Zenko CloudServer replaces MinIO easily, but the documentation and branding (cloudserver/zenko/scality) can be confusing. Garage requires a complex configuration with a TOML file and a separate init container, so it is not a simple drop-in replacement. Apache Ozone needs at least four nodes to run, far too heavy for simple local use. The author recommends SeaweedFS and S3Proxy as viable replacements, puts RustFS in the maybe column, and rules out Garage and Ozone for their complexity. Garage has a very community-driven history: it comes from the https://deuxfleurs.fr/ collective, which runs a distributed cloud without datacenters.

Datacenters in space are most certainly not a good idea https://taranis.ie/datacenters-in-space-are-a-terrible-horrible-no-good-idea/
Expert opinion (ex-NASA/Google, PhD in space electronics): space datacenters are a "terrible" idea. Fundamental incompatibility: electronics (especially AI/GPUs) are unsuited to the space environment. Energy: limited access; ISS-style solar is insufficient at AI scale, and nuclear (RTG) is too weak. Cooling: space is not "cold"; there is no convection, so you need gigantic radiators (e.g. 531 m² for 200 kW). Radiation: causes errors (SEU, SEL) and damage; GPUs are highly vulnerable; shielding is heavy and ineffective; "hardened" chips are very slow. Communications: very limited bandwidth (1 Gbps radio vs 100 Gbps terrestrial); lasers depend on atmospheric conditions. Conclusion: an extremely difficult, costly project with mediocre performance.

Data and Artificial Intelligence

Guillaume built an MCP server for arXiv (the research paper publication site) in Java with the Quarkus framework https://glaforge.dev/posts/2026/01/18/implementing-an-arxiv-mcp-server-with-quarkus-in-java/
Implementation of an arXiv MCP (Model Context Protocol) server in Java with Quarkus. Goal: access arXiv publications and showcase the lesser-known features of the MCP protocol. Implementation: the Quarkus framework (Java) and its extensive MCP support, with help from Antigravity (an agentic IDE) for development and for integrating the arXiv API. Interaction with the arXiv API: HTTP requests, XML Atom results, Jackson XML parser. MCP features exposed: Tools (@Tool): searching publications (search_papers). Resources (@Resource, @ResourceTemplate): the arXiv category taxonomy and article metadata (via a URI template). Prompts (@Prompt): examples for summarizing articles or building search queries. Configuration: the server can run over STDIO (local) or streamable HTTP (local or remote), with simple configuration in clients such as Gemini CLI. Conclusion: Quarkus makes it easy to build feature-rich MCP servers, making data and services "AI-ready" with the help of AI tools like Antigravity.

Anthropic will not put ads in Claude https://www.anthropic.com/news/claude-is-a-space-to-think
This comes in reaction to OpenAI's non-public plan to use ads to push people toward the paid tier; OpenAI needs cash and is probably the most-used free product in the world. Anthropic announces that Claude will stay ad-free to preserve its role as a conversational assistant dedicated to work and deep thinking. Conversations with Claude are often sensitive, personal, or involve complex software engineering tasks where ads would be inappropriate. Analysis of conversations shows that a significant share touches on delicate subjects similar to those you would raise with a trusted advisor. An advertising model would create incentives that contradict the core principle of being "genuinely helpful" written into Claude's Constitution. Ads would introduce a potential conflict of interest where recommendations could be driven by commercial motives rather than the user's interest. Anthropic's business model rests on enterprise contracts and paid subscriptions, which fund reinvestment in improving Claude. Anthropic keeps free access to state-of-the-art models and offers reduced pricing for NGOs and education in more than 60 countries. "Agentic" commerce will be supported, but only at the user's initiative, never the advertisers', to preserve trust. Third-party integrations such as Figma, Asana or Canva will continue to be developed while keeping the user in control. Anthropic compares Claude to a notebook or a whiteboard: pure thinking spaces, without advertising.

Infinispan 16.1 is out https://infinispan.org/blog/2026/02/04/infinispan-16-1
The release name alone deserves a mention. Memory-bounded caches, per cache and per group of caches, which is not easy to do in Java. A new OpenAPI API. The AOT cache shipped in the container images.

A local MCP server with just one Java file? It's possible with LangChain4j and JBang https://glaforge.dev/posts/2026/02/11/zero-boilerplate-java-stdio-mcp-servers-with-langchain4j-and-jbang/
Quickly build Java MCP servers without boilerplate. MCP (Model Context Protocol) is a standard for connecting LLMs to tools and data. The tutorial addresses the lack of simple options for Java developers, given the dominance of Python/TypeScript in the MCP ecosystem. The solution uses LangChain4j, which ships a new MCP server module for the STDIO protocol, and JBang, which runs Java files like scripts, removing the need for build files (pom.xml, Gradle). The implementation fits in a single .java file; JBang handles dependencies automatically (//DEPS). LangChain4j's @Tool annotation exposes Java methods to the LLM. StdioMcpServerTransport handles JSON-RPC communication over standard input/output (STDIO). Crucial point: logs must be redirected to System.err so they do not corrupt System.out, which is reserved for MCP communication (JSON-RPC messages). This makes local integration easy with tools such as Gemini CLI, Claude Code, etc.
Reciprocal Rank Fusion: a useful and widely used algorithm for hybrid search, mixing RAG with keyword search https://glaforge.dev/posts/2026/02/10/advanced-rag-understanding-reciprocal-rank-fusion-in-hybrid-search/
RAG: the quality of the LLM's answer depends on retrieval. Hybrid search: combining vector search and keywords (BM25) is optimal. The challenge: merging scores on different scales. The solution: Reciprocal Rank Fusion (RRF). RRF is a robust algorithm that fuses result lists based solely on the rank of documents, ignoring the scores. RRF advantages: no score normalization, scalable, an excellent first reordering step. A common RAG architecture: RRF (broad selection) plus a cross-encoder / reranking model (fine precision). RAG-Fusion: uses an LLM to generate several query variants, then RRF aggregates all the results to reinforce consensus and reduce hallucinations. Implementation: LangChain4j uses RRF by default to aggregate results from multiple retrievers.
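RRF itself is only a few lines: each document's fused score is the sum, over all result lists, of 1/(k + rank), with k commonly set to 60. A minimal, self-contained sketch (document IDs are plain strings here):

```java
import java.util.*;

public class ReciprocalRankFusion {

    /** Fuses ranked lists of document ids; a higher fused score means better. */
    static Map<String, Double> fuse(List<List<String>> rankedLists, int k) {
        Map<String, Double> scores = new HashMap<>();
        for (List<String> list : rankedLists) {
            for (int rank = 0; rank < list.size(); rank++) {
                // 1-based rank; contributions from different retrievers simply add up.
                scores.merge(list.get(rank), 1.0 / (k + rank + 1), Double::sum);
            }
        }
        return scores;
    }

    public static void main(String[] args) {
        List<String> vectorResults  = List.of("doc-A", "doc-B", "doc-C");
        List<String> keywordResults = List.of("doc-B", "doc-D", "doc-A");

        Map<String, Double> fused = fuse(List.of(vectorResults, keywordResults), 60);
        fused.entrySet().stream()
             .sorted(Map.Entry.<String, Double>comparingByValue().reversed())
             .forEach(e -> System.out.printf("%s -> %.5f%n", e.getKey(), e.getValue()));
    }
}
```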
The latest Gemini and Nano Banana features supported in LangChain4j https://glaforge.dev/posts/2026/02/06/latest-gemini-and-nano-banana-enhancements-in-langchain4j/
New Nano Banana image models (Gemini 2.5/3.0) for generation and editing (up to 4K). "Grounding" via Google Search (for images and text) and Google Maps (location, Gemini 2.5). A URL context tool (Gemini 3.0) for reading web pages directly. Multimodal agents (AiServices) able to generate images. Configurable thinking (Chain-of-Thought depth) for Gemini 3.0. Richer metadata: token usage and details of the "grounding" sources.

How to set up Gemini CLI as a coding agent in IntelliJ using the ACP protocol https://glaforge.dev/posts/2026/02/01/how-to-integrate-gemini-cli-with-intellij-idea-using-acp/
Goal: integrate Gemini CLI into IntelliJ IDEA via the Agent Client Protocol (ACP). Prerequisites: IntelliJ IDEA 2025.3+, Node.js (v20+), Gemini CLI. Steps: install Gemini CLI (npm install -g @google/gemini-cli), locate the gemini executable, configure ~/.jetbrains/acp.json (executable path, --experimental-acp, use_idea_mcp: true), restart IDEA and select "Gemini CLI" in the AI Assistant. Usage: Gemini interacts with the code and runs commands (project context). Important: make sure the --experimental-acp flag is present in the configuration.

Tooling

PipeNet, an alternative (also open source) to LocalTunnel, but a bit more advanced https://pipenet.dev/
pipenet: a modern open-source alternative to localtunnel (client + server). Uses: local development (sharing, webhooks), SDK integration, secure self-hosting. Features: client (exposes local ports, subdomains), server (deployment, custom domains, optimized for single-port cloud). Advantages over localtunnel: cloud deployment on a single port, multi-domain support, TypeScript/ESM, active maintenance. Protocols: HTTP/S, WebSocket, SSE, HTTP streaming. Integration: CLI or JavaScript SDK.

JSON-IO, a library like Jackson or GSON, supporting JSON5 and TOON, which could be handy for "structured output" from LLMs when they don't produce perfect JSON https://github.com/jdereg/json-io
json-io: a Java library for JSON/TOON serialization and deserialization. Handles complex object graphs, cyclic references and polymorphic types. Full JSON5 support (reading and writing), including features not covered by Jackson/Gson. TOON format: a token-oriented notation optimized for LLMs, cutting token usage by 40-50% compared to JSON. Lightweight: no external dependency (except java-util), small JAR (~330K). Compatible with JDK 1.8 through 24, as well as JPMS and OSGi environments. Two conversion modes: to typed Java objects (toJava()) or to Maps (toMaps()). Extensive configuration options via ReadOptionsBuilder and WriteOptionsBuilder. Optimized for cloud-native deployments and microservice architectures.

Using Mailpit and Testcontainers to test your email sending https://foojay.io/today/testing-emails-with-testcontainers-and-mailpit/
The article shows it with Spring Boot and without. And here is the Quarkus extension: https://quarkus.io/extensions/io.quarkiverse.mailpit/quarkus-mailpit/?tab=docs
Testing email sending in development is tricky because you can't use real SMTP servers. Mailpit is a test SMTP server that captures emails and offers a web interface to browse them. Testcontainers can start Mailpit in a Docker container for integration tests. The article shows how to configure a Spring Boot application to send emails through JavaMail. A dedicated Testcontainers module for Mailpit makes it easy to integrate into tests. The Mailpit container exposes an SMTP port (1025) and an HTTP API (8025) for checking received emails. Tests can query Mailpit's HTTP API to validate the content of the emails that were sent. This approach avoids mocks and actually exercises email sending. Mailpit can also be used in local development to view emails without really sending them. The solution works with any Java framework that supports JavaMail.
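The article relies on the dedicated Mailpit module; as a rough equivalent, a plain Testcontainers GenericContainer is enough to see the moving parts (the image and the two ports are the standard Mailpit ones, everything else is illustrative):

```java
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.utility.DockerImageName;

public class MailpitSketch {
    public static void main(String[] args) {
        try (GenericContainer<?> mailpit =
                     new GenericContainer<>(DockerImageName.parse("axllent/mailpit"))
                             .withExposedPorts(1025, 8025)) {   // SMTP + HTTP API
            mailpit.start();

            // Point your JavaMail session at these values in the test,
            // then query http://host:httpPort/api/v1/messages to assert on what was sent.
            String host = mailpit.getHost();
            Integer smtpPort = mailpit.getMappedPort(1025);
            Integer httpPort = mailpit.getMappedPort(8025);
            System.out.printf("SMTP at %s:%d, HTTP API at %s:%d%n", host, smtpPort, host, httpPort);
        }
    }
}
```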
Architecture

How to scale a system from 0 to 10 million users https://blog.algomaster.io/p/scaling-a-system-from-0-to-10-million-users
Philosophy: incremental scalability, solving bottlenecks without over-engineering. 0-100 users: a single server (app, DB, jobs). 100-1K: separate app and DB (managed services, pooling). 1K-10K: load balancer, multiple app servers (stateless via shared sessions). 10K-100K: caching, DB read replicas, CDN (reduce DB load). 100K-500K: auto-scaling, stateless applications (JWT authentication). 500K-10M: DB sharding, microservices, message queues (asynchronous processing). 10M+: multi-region deployment, CQRS, polyglot persistence, custom infrastructure. Key principles: simplicity, measurement, statelessness is essential, cache and async, careful sharding, trade-offs (CAP), the cost of complexity.

Architecture Patterns 2026 - from hype to on-the-ground reality (part 1/2) - https://blog.ippon.fr/2026/01/30/patterns-darchitecture-2026-part-1/
The article presents four software architecture patterns addressing scalability, resilience and business agility in modern systems, along with the reasons to use them and their traps; a good refresher. Event-Driven Architecture enables asynchronous communication between systems through published and consumed events, avoiding direct coupling. EDA's benefits include independent scalability of components, resilience to failures and easy addition of new use cases. The API-First pattern combined with an API Gateway centralizes security, routing and observability of APIs with a unified catalog. Backend for Frontend creates channel-specific APIs (mobile, web, partners) to optimize the user experience. CQRS separates read and write models with distinct optimized stores, while Event Sourcing stores all the events rather than the current state. The Saga pattern handles distributed transactions via centralized orchestration or event-based choreography to coordinate several microservices. Common traps include an explosion of fine-grained events, the complexity of distributed debugging, and poor handling of eventual consistency. The flagship technologies are Kafka for event streaming, Kong for the API Gateway, EventStoreDB for Event Sourcing and Temporal for Sagas. These patterns require technical maturity and are not suited to simple CRUD applications or junior teams.

Architecture Patterns 2026: from hype to on-the-ground reality, part 2 - https://blog.ippon.fr/2026/02/04/patterns-darchitecture-2026-part-2/
Second part of a practical guide to proven software and system architecture patterns for modernizing and structuring applications in 2026. Strangler Fig lets you migrate a legacy system gradually by wrapping it piece by piece instead of rewriting everything at once (70% failure rate for big-bang rewrites). The Anti-Corruption Layer protects your new business domain from external and legacy models by creating a translation layer between systems. A Service Mesh automatically handles inter-service communication in microservice architectures (mTLS security, observability, resilience). Hexagonal Architecture separates the business core from technical details via ports and adapters to improve testability and evolvability. Each pattern is illustrated by a concrete client case with measurable results and a list of traps to avoid during implementation. The 2026 technologies mentioned include Istio and Linkerd for service mesh, LaunchDarkly for feature flags, NGINX and Kong for API gateways. A final comparison table helps choose the right pattern based on the complexity, scope and specific use case of the project. The article insists on a pragmatic approach: don't use a pattern just because it is modern, use it because it solves a real problem. For simple CRUD-style systems or systems with few services, these patterns can introduce unnecessary complexity that you need to know how to avoid.

Methodologies

The recurring dream of replacing, or even eliminating, developers https://www.caimito.net/en/blog/2025/12/07/the-recurring-dream-of-replacing-developers.html
Since 1969, every decade has seen an attempt to reduce the need for developers (from COBOL, UML and visual builders… to AI). Motivation: executives' frustration with development timelines and costs. Software complexity is intrinsic and intellectual, not a matter of tools. Each technology wave brings value but does not eliminate human expertise. AI assists developers and improves efficiency, but it replaces neither judgment nor the management of complexity. Demand for software exceeds supply because the main constraint is the thinking required to manage that complexity. For executives, the useful question is: do the tools make our developers more effective on complex problems and do they reduce repetitive tasks? The unattainable "dream" of replacing developers is an engine of innovation that keeps producing valuable tools.
How to dig into topics in the age of generative AI, and what about sharing and curating that research? https://glaforge.dev/posts/2026/02/04/researching-topics-in-the-age-of-ai-rock-solid-webhooks-case-study/
The author's original research on webhooks in 2019 was a long, manual process. AI (Deep Research, Gemini, NotebookLM) now makes deep research, topic exploration and sharing of results much easier. The AI identified and validated key practices for resilient webhook deployments, largely the same ones the author had found before. AI-generated artifacts: a detailed report, a concise summary, a sketchnote illustration, and even a slide deck. Guillaume wonders about sharing these AI-generated research reports publicly while wanting to avoid "AI slop".

Law, society and organization

Software under threat from vibe coding https://www.techbuzz.ai/articles/we-built-a-monday-com-clone-in-under-an-hour-with-ai
Two CNBC journalists with no coding experience built a working Monday.com clone in under 60 minutes for 5 to 15 dollars. The experiment validates the investor fears that triggered a 30% drop in SaaS company stocks. The AI not only reproduced the basic features but also researched Monday.com on its own to identify and recreate its key functionality. This technique, called "vibe coding", lets non-developers build applications with instructions in plain English. The most vulnerable companies are those offering tools "that sit on top of the work", such as Atlassian, Adobe, HubSpot, Zendesk and Smartsheet. Cybersecurity companies like CrowdStrike and Palo Alto are considered better protected thanks to network effects and regulatory barriers. Systems of record like Salesforce remain harder to replicate because of their depth of integration and enterprise data. A cost of 5 to 15 dollars per build lets companies prototype several custom solutions for less than a single Monday.com license. The experiment raises questions about the future of the 5-billion-dollar project management tools market in the face of generative AI.

Conferences

In addition to Aurélie Vache's conference agenda, there is also the site https://javaconferences.org/ (made by Brian Vermeer) with all the upcoming Java conferences!
The conference list comes from the Developers Conferences Agenda/List by Aurélie Vache and contributors:
February 12-13, 2026: Touraine Tech #26 - Tours (France)
February 12-13, 2026: World Artificial Intelligence Cannes Festival - Cannes (France)
February 19, 2026: ObservabilityCON on the Road - Paris (France)
March 6, 2026: WordCamp Nice 2026 - Nice (France)
March 18, 2026: Jupyter Workshops: AI in Jupyter: Building Extensible AI Capabilities for Interactive Computing - Saint-Maur-des-Fossés (France)
March 18-19, 2026: Agile Niort 2026 - Niort (France)
March 20, 2026: Atlantique Day 2026 - Nantes (France)
March 26, 2026: Data Days Lille - Lille (France)
March 26-27, 2026: SymfonyLive Paris 2026 - Paris (France)
March 26-27, 2026: REACT PARIS - Paris (France)
March 27-29, 2026: Shift - Nantes (France)
March 31, 2026: ParisTestConf - Paris (France)
March 31-April 1, 2026: FlowCon France 2026 - Paris (France)
April 1, 2026: AWS Summit Paris - Paris (France)
April 2, 2026: Pragma Cannes 2026 - Cannes (France)
April 2-3, 2026: Xen Spring Meetup 2026 - Grenoble (France)
April 7, 2026: PyTorch Conference Europe - Paris (France)
April 9-10, 2026: Android Makers by droidcon 2026 - Paris (France)
April 9-11, 2026: Drupalcamp Grenoble 2026 - Grenoble (France)
April 16-17, 2026: MiXiT 2026 - Lyon (France)
April 17-18, 2026: Faiseuses du Web 5 - Dinan (France)
April 22-24, 2026: Devoxx France 2026 - Paris (France)
April 23-25, 2026: Devoxx Greece - Athens (Greece)
May 6-7, 2026: Devoxx UK 2026 - London (UK)
May 12, 2026: Lead Innovation Day - Leadership Edition - Paris (France)
May 19, 2026: La Product Conf Paris 2026 - Paris (France)
May 21-22, 2026: Flupa UX Days 2026 - Paris (France)
May 22, 2026: AFUP Day 2026 Lille - Lille (France)
May 22, 2026: AFUP Day 2026 Paris - Paris (France)
May 22, 2026: AFUP Day 2026 Bordeaux - Bordeaux (France)
May 22, 2026: AFUP Day 2026 Lyon - Lyon (France)
May 28, 2026: DevCon 27: I.A. & Vibe Coding - Paris (France)
May 28, 2026: Cloud Toulouse 2026 - Toulouse (France)
May 29, 2026: NG Baguette Conf 2026 - Paris (France)
May 29, 2026: Agile Tour Strasbourg 2026 - Strasbourg (France)
June 2-3, 2026: Agile Tour Rennes 2026 - Rennes (France)
June 2-3, 2026: OW2Con - Paris-Châtillon (France)
June 3, 2026: IA–NA - La Rochelle (France)
June 5, 2026: TechReady - Nantes (France)
June 5, 2026: Fork it! - Rouen - Rouen (France)
June 6, 2026: Polycloud - Montpellier (France)
June 9, 2026: JFTL - Montrouge (France)
June 9, 2026: C: - Caen (France)
June 11-12, 2026: DevQuest Niort - Niort (France)
June 11-12, 2026: DevLille 2026 - Lille (France)
June 12, 2026: Tech F'Est 2026 - Nancy (France)
June 16, 2026: Mobilis In Mobile 2026 - Nantes (France)
June 17-19, 2026: Devoxx Poland - Krakow (Poland)
June 17-20, 2026: VivaTech - Paris (France)
June 18, 2026: Tech'Work - Lyon (France)
June 22-26, 2026: Galaxy Community Conference - Clermont-Ferrand (France)
June 24-25, 2026: Agi'Lille 2026 - Lille (France)
June 24-26, 2026: BreizhCamp 2026 - Rennes (France)
July 2, 2026: Azur Tech Summer 2026 - Valbonne (France)
July 2-3, 2026: Sunny Tech - Montpellier (France)
July 3, 2026: Agile Lyon 2026 - Lyon (France)
July 6-8, 2026: Riviera Dev - Sophia Antipolis (France)
August 2, 2026: 4th Tech Summit on Artificial Intelligence & Robotics - Paris (France)
August 20-22, 2026: 4th Tech Summit on AI & Robotics - Paris (France) & Online
September 4, 2026: JUG Summer Camp 2026 - La Rochelle (France)
September 17-18, 2026: API Platform Conference 2026 - Lille (France)
September 24, 2026: PlatformCon Live Day Paris 2026 - Paris (France)
October 1, 2026: WAX 2026 - Marseille (France)
October 1-2, 2026: Volcamp - Clermont-Ferrand (France)
October 5-9, 2026: Devoxx Belgium - Antwerp (Belgium)

Contact us
To react to this episode, come discuss on the Google group https://groups.google.com/group/lescastcodeurs
Contact us via X/Twitter https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com
Record a crowdcast or submit a crowdquestion
Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs
All the episodes and all the info at https://lescastcodeurs.com/
Worried about keeping your keys and passwords in plain text? In episode 770 of Atareao con Linux I explain why you should stop using traditional environment variables and how Podman secrets can save the day. I spent years ignoring this problem in Docker out of laziness about setting up Swarm, but with Podman the security comes built in.

We go deep on the secret lifecycle: how to create, list, inspect and delete them. I show how Podman keeps this sensitive data out of images and out of reach of prying eyes in your Bash history. It's a paradigm shift for any sysadmin or self-hosting enthusiast.

But we don't stop there. I introduce Crypta, my new tool written in Rust that brings together SOPS, Age and Git so you can manage your secrets professionally, including syncing them with remote repositories. We also look at configuring custom drivers and using secrets in your deployments with MariaDB and Quadlets.

Highlighted chapters:
00:00:00 The danger of plain-text passwords
00:01:23 The problem with Docker Swarm and why choose Podman
00:03:16 What is a secret in Podman, really?
00:04:22 Lifecycle: the creation and death of a secret
00:08:10 Practical implementation with MariaDB and Quadlets
00:12:04 Introducing Crypta: management with SOPS, Age and Rust
00:19:40 The advantages of using secrets in rootless mode

If you want your infrastructure to be genuinely secure and consistent, this episode is an essential roadmap. Learn to hide what should be hidden and sleep soundly knowing your API tokens aren't within anyone's reach.
More information and links in the episode notes
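As a rough sketch of the lifecycle discussed above (the secret name, the password value and the MariaDB variable are placeholders; Crypta and custom drivers are not shown):

```sh
# Create a secret from stdin so the value never lands in your shell history
printf 'S3cr3t!' | podman secret create mariadb_root_password -

# Lifecycle: list, inspect (metadata only), and remove
podman secret ls
podman secret inspect mariadb_root_password
# podman secret rm mariadb_root_password

# Use it in a container: exposed as an env var here, instead of a plain -e value
podman run -d --name mariadb \
  --secret source=mariadb_root_password,type=env,target=MARIADB_ROOT_PASSWORD \
  docker.io/library/mariadb:11
```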
In the previous episode we talked at length about "homelabs", the home testing labs many of us run. We received a huge number of comments, and today we go through what each of you has at home and learn together about many of these tools. We also leave you a list of links to all these tools and hardware so you can start building your own setup to learn and try new things:

Tools
Iban's guide for a transition to European alternatives
Home Assistant (open home automation)
Kopia (backups)
Tailscale (VPN between your devices, open source with headscale)
authentik (private identity provider)
immich (photo manager)
Komga (comic and book manager)
plex (paid media manager)
Jellyfin (media manager)
Omoide (media manager)
TeslaMate (manage your Tesla)
Heimdall (landing page)
Syncthing (file synchronization)
Proxmox (virtualization)
Adguard (ad blocking)
Pi-hole (DNS with blocking of ads and other categories)
Unbound (local DNS)
Mealie (recipe manager)
Obsidian (note manager)
K3S (lightweight Kubernetes)
WireGuard (VPN)
podman (containers)
Docker (containers)
Harbor (container registry)
Verdaccio (NPM registry)
Forgejo (Git repositories)
Gitea (Git repositories)
RustFS (S3 server)
cert-manager (TLS certificates in Kubernetes)
step-ca (local Let's Encrypt)
TrueNAS (NAS operating system)
Kiwix (local copy of Wikipedia and other wikis)
Prometheus (metrics and monitoring)
Grafana (metric dashboards)
ArgoCD (CI/CD)
FluxCD (CI/CD)
vLLM (local generative AI compatible with the OpenAI API)
Open WebUI (web interface for generative AI)

Hardware
Switchbot (home automation)
Shelly (relays and home automation)
Aqara (home automation)
Eve (home automation)
Inels Wireless (home automation)
Reolink (security cameras)
GMKtec (mini PCs)
EliteDesk (mini PCs)
QNAP (NAS)
Synology (NAS)
Raspberry Pi (mini PCs)

News
IKEA launches 21 new smart home products
Sánchez announces that Spain will ban under-16s from accessing social networks
Telegram's founder lashes out at Pedro Sánchez and warns Spain with a mass message

Episode music
Intro: Safe and Warm in Hunter's Arms - Roller Genoa
Outro: Inspiring Course Of Life - Alex Che

You can find us on Mastodon and support us by listening to our podcast on Podimo or becoming a fan on iVoox. If you want a free month of iVoox Premium, click here.
Caleb Serong on becoming a Freo Docker for life. The funny names you've come across. Our producer Matt is almost taken out at work! What's your ick? See omnystudio.com/listener for privacy information.
In this episode I talk with JP Camara about RubyConf 2026, submitting CFPs, and why everyone should give talks. JP shares his experience using SaturnCI on the Mastodon project, and we dig into SaturnCI's Docker-based setup, Kubernetes architecture, and test-focused UX philosophy.
Links:
jpcamara.com
SaturnCI
Nonsense Monthly
Topics covered in this episode:
Command Book App
uvx.sh: Install Python tools without uv or Python
Ending 15 years of subprocess polling
monty: A minimal, secure Python interpreter written in Rust for use by AI
Extras
Joke
Watch on YouTube

About the show
Sponsored by us! Support our work through:
Our courses at Talk Python Training
The Complete pytest Course
Patreon Supporters
Connect with the hosts
Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)
Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too.
Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? Add your name and email to our friends of the show list, we'll never share it.

Michael #1: Command Book App
New app from Michael. Command Book App is a native macOS app for developers, data scientists, AI enthusiasts and more. This is a tool I've been using lately to help build Talk Python, Python Bytes, Talk Python Training, and many more applications. It's a bit like advanced terminal commands or complex shell aliases, but hosted outside of your terminal. This leaves the terminal there for interactive commands, exploration, short actions. Command Book manages commands like "tail this log while I'm developing the app", "Run the dev web server with true auto-reload", and even "Run MongoDB in Docker with exactly the settings I need". I'd love it if you gave it a look, shared it with your team, and sent me feedback. Has free and paid versions. Built with Swift and SwiftUI. Check it out at https://commandbookapp.com

Brian #2: uvx.sh: Install Python tools without uv or Python
Tim Hopper

Michael #3: Ending 15 years of subprocess polling
by Giampaolo Rodola. The standard library's subprocess module has relied on a busy-loop polling approach since the timeout parameter was added to Popen.wait() in Python 3.3, around 15 years ago. The problem with busy-polling: CPU wake-ups (even with exponential backoff, starting at 0.1ms and capping at 40ms, the system constantly wakes up to check process status, wasting CPU cycles and draining batteries); latency (there's always a gap between when a process actually terminates and when you detect it); scalability (monitoring many processes simultaneously magnifies all of the above, plus L1/L2 CPU cache invalidations). It's interesting to note that waiting via poll() (or kqueue()) puts the process into the exact same sleeping state as a plain time.sleep() call. From the kernel's perspective, both are interruptible sleeps. Here is the merged PR for this change.

Brian #4: monty: A minimal, secure Python interpreter written in Rust for use by AI
Samuel Colvin and others at Pydantic. Still experimental. "Monty avoids the cost, latency, complexity and general faff of using a full container based sandbox for running LLM generated code." "Instead, it lets you safely run Python code written by an LLM embedded in your agent, with startup times measured in single digit microseconds not hundreds of milliseconds."

Extras
Brian:
Expertise is the art of ignoring - Kevin Renskers. You don't need to master the language. You need to master your slice. Learning everything up front is wasted effort. Experience changes what you pay attention to.
I hate fish - Rands (Michael Lopp). Really about productivity systems, and a nice process for dealing with email.
Michael:
Talk Python now has a CLI
New essay: It's not vibe coding - Agentic engineering
GitHub is having a day
Python 3.14.3 and 3.13.12 are available
Wall Street just lost $285 billion because of 13 markdown files

Joke: Silence, current side project!
It's been a while since we did a deep dive on our home networking and server infrastructure (what some might call a "homelab"), so it's time for the 2026 check-in to run down what we're working with these days. By request, we spend a big chunk of the episode on Brad's plain Linux NAS/server, detailing components like Samba, Docker (or Podman), and Sanoid that you'd need to set up yourself to replicate the functionality of something like TrueNAS or Unraid. We also survey Will's more granular approach, once again pine longingly after Wildcat Lake, and more.Show notes with all the hardware and software we mentioned: https://tinyurl.com/techpod-325-homelab-update Support the Pod! Contribute to the Tech Pod Patreon and get access to our booming Discord, a monthly bonus episode, your name in the credits, and other great benefits! You can support the show at: https://patreon.com/techpod
If you like what you hear, please subscribe, leave us a review and tell a friend!
Still using Docker because switching feels like too much hassle? In this episode of Atareao con Linux I'm going to show you why Quadlets are the definitive reason to finally go with Podman. If the episode on Pods caught your interest, today's topic takes container management to the next level: TOTAL integration with systemd.

In episode 768 I explain how Quadlets let you manage your containers, volumes and networks exactly as if they were native services of your operating system. Forget strange scripts or depending on external tools; everything is defined in simple configuration files (.container, .network, .volume) that systemd understands perfectly.

I share my real experience migrating my current projects. I already have PostgreSQL databases running under this model and the stability is simply from another planet. We'll see how to bring up a complete WordPress stack with MariaDB and Redis using this technology, managing the dependencies between them with systemd's 'After' and 'Requires' directives. No more containers trying to start before the database is ready!

Episode chapters:
00:00:00 Introduction and the definitive goodbye to Docker
00:01:33 What is a Quadlet and why it changes Linux
00:03:22 The 6 types of Quadlets available
00:05:12 How to manage a container-type Quadlet
00:06:46 Defining networks and volumes as services
00:08:13 The workflow: Git, secrets and portability
00:11:22 Integration with systemd: names and prefixes
00:13:42 Deploying a complete stack: WordPress, MariaDB and Redis
00:16:02 Modifying containers and reloading systemd (daemon-reload)
00:17:50 Logs with journalctl and simplified maintenance
00:19:33 Auto-update: forget Watchtower forever
00:20:33 Conclusions and next steps in the migration

We also explore killer advantages such as version control with Git, centralized log management with journalctl, and native automatic updates that will make you forget about Watchtower. If you want your Linux server to be more professional, robust and easier to maintain, don't miss this episode.
More information and links in the episode notes
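To give a feel for the format discussed above, here is a minimal .container Quadlet sketch for a MariaDB service; the file name, image tag, network, volume and secret names are illustrative, and the companion .network and .volume Quadlets are assumed to exist. Dropped into ~/.config/containers/systemd/ for a rootless setup, it becomes a systemd unit after a daemon-reload:

```ini
# ~/.config/containers/systemd/mariadb.container
[Unit]
Description=MariaDB for the WordPress stack

[Container]
Image=docker.io/library/mariadb:11
ContainerName=mariadb
# Refers to companion wordpress.network and mariadb-data.volume Quadlets in the same directory
Network=wordpress.network
Volume=mariadb-data.volume:/var/lib/mysql
# Podman secret created beforehand, injected as an environment variable
Secret=mariadb_root_password,type=env,target=MARIADB_ROOT_PASSWORD
# Native replacement for Watchtower
AutoUpdate=registry

[Service]
Restart=always

[Install]
WantedBy=default.target
```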
In May of 2025, Docker launched Hardened Images, a secure, minimal, production-ready set of images. In December, they made DHI freely available and open source to everyone who builds software. On this episode, we're joined by Tushar Jain, EVP of Engineering at Docker to learn all about it.
In this episode of Cybersecurity Today, Jim Love covers major vulnerabilities and security threats, including the exposure of over 3 million Fortinet devices, a critical flaw in Docker's AI assistant, and a sophisticated Android malware campaign using Hugging Face repositories. Discover the latest updates on these critical issues and gain insights into the measures being taken to mitigate these threats. Sponsored by Meter, providing integrated networking solutions for performance and scale. Cybersecurity Today would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/cst
00:00 Introduction and Sponsor Message
00:43 Fortinet Devices Vulnerability
03:35 Docker AI Assistant Security Flaw
06:27 Hugging Face Android Malware Campaign
09:25 Conclusion and Sponsor Message
In this episode of the Ardan Labs Podcast, Ale Kennedy debuts as host in her first episode, sitting down with Oscar Hedaya, founder of SPACE, to discuss building startups, navigating uncertainty, and launching innovative products. Oscar shares his journey from New Jersey to Miami, the childhood financial challenges that shaped his work ethic, and the lessons learned from college, job searching, and early setbacks. The conversation explores what it takes to start a company, develop a physical product in a competitive market, and turn setbacks into momentum. Together, Ale and Oscar examine persistence, partnership dynamics, and how identifying gaps in the market led to the creation of The Space Safe.
00:00 Introduction and Background
02:13 Smart Safes and Security Innovation
07:14 Childhood and Early Influences
12:57 College Applications and Transitions
28:51 College Decisions and Academic Paths
42:15 Graduation and Job Market Reality
54:26 Starting a Business
59:43 Restarting the Entrepreneurial Journey
01:10:29 The Birth of The Space Safe
01:18:48 Product Development Challenges
01:23:49 Launching SpaceSafe
Connect with Oscar: LinkedIn: https://www.linkedin.com/in/ohedaya/
Mentioned in this Episode: The Space Safe Website: https://www.thespacesafe.com
Want more from Ardan Labs? You can learn Go, Kubernetes, Docker & more through our video training, live events, or through our blog!
Online Courses: https://ardanlabs.com/education/
Live Events: https://www.ardanlabs.com/live-training-events/
Blog: https://www.ardanlabs.com/blog
Github: https://github.com/ardanlabs
Allen, Joel, and Yolanda discuss the North Sea Summit where nine European countries committed to 100 gigawatts of offshore wind capacity and the massive economic impact that comes with it. They also break down the federal court ruling that allows Vineyard Wind to resume construction with a tight 45-day window before installation vessels leave. Plus GE Vernova’s Q4 results show $600 million in wind losses and Wind Power Lab CEO Lene Helstern raises concerns about blade quality across the industry. Sign up now for Uptime Tech News, our weekly newsletter on all things wind technology. This episode is sponsored by Weather Guard Lightning Tech. Learn more about Weather Guard’s StrikeTape Wind Turbine LPS retrofit. Follow the show on YouTube, Linkedin and visit Weather Guard on the web. And subscribe to Rosemary’s “Engineering with Rosie” YouTube channel here. Have a question we can answer on the show? Email us!

The Uptime Wind Energy Podcast brought to you by Strike Tape, protecting thousands of wind turbines from lightning damage worldwide. Visit strike tape.com. And now your hosts, Allen Hall, Rosemary Barnes, Joel Saxum, and Yolanda Padron.

Speaker 2: Welcome to the Uptime Wind Energy Podcast. I’m your host, Allen Hall. I’m here with Yolanda Padron and Joel Saxum. Rosemary Barnes is snorkeling at the Great Barrier Reef this week. Uh, big news out of Northern Europe. Uh, the North Sea Summit, which happened in Hamburg, uh, about a week or so ago: nine European countries are making a huge commitment for offshore wind. So the countries involved are Britain, Belgium, Denmark, France, Germany, Iceland (question mark), Ireland, Luxembourg, Netherlands, and Norway. Together they want to develop [00:01:00] 100 gigawatts of offshore wind capacity in shared waters. Uh, that’s enough to power about 85 million households, and the pact comes as Europe is trying to wean itself from natural gas, from where they had it previously and the United States. Uh, so they, they would become electricity independent. Uh, and this is one way to do it. Two big happy, uh, companies at the moment, Vattenfall, who develops a lot offshore, and Siemens Gamesa, of course, are really excited by the news. If you run the numbers and you, you, you have a hundred gigawatts out in the water and you’re using 20 megawatt turbines, then you’re talking about 5,000 turbines in the water total. That is a huge offshore wind order, and I, I think this would be great news for, obviously, Vestas and [00:02:00] Siemens Gamesa. Uh, the, the question is there’s a lot of political maneuvering that is happening. It looks like Belgium, uh, as a country is not super active in offshore and is rethinking it and trying to figure out where they want to go. But I think the big names will stay, right? France and Germany, all in on offshore. Denmark will be, Britain already is. So the question really is at the moment then: can Siemens get back into the wind game and start making money, because they have projected themselves to be very profitable coming into this year. This may be the, the stepping stone, Joel.

Joel Saxum: Well, I think that, yeah, we talked about last week their 21 megawatt, or 21 and a half megawatt, I believe it is, big new flagship going to be ready to roll, uh, with the big auctions happening like AR seven in the UK. Uh, and you know, that’s eight gigawatts, 8.4 gigawatts there. People are gonna be, the, the order book’s gonna start to fill up, like [00:03:00] Siemens is, this is a possibility of a big turnaround.
And to put some of these numbers in perspective: a hundred gigawatts of offshore wind, what does that really mean? What it means is, if you take two of the big industrial powerhouses that are a part of this pact, the UK and Germany, and combine their total demand, that’s a hundred gigawatts. That’s basically what their demand is today. And that’s gonna continue to grow, right, as we electrify a lot of things and the next Industrial Revolution 4.0, or whatever we’re calling it now, is happening. So this hundred gigawatts of offshore wind is gonna drive jobs all over Europe. This isn’t just jobs at the port in Rotterdam or wherever it may be, right? This is manufacturing jobs, supply chain jobs, the same stuff we’ve been talking about on the podcast for a while here with [00:04:00] what the UK is doing with OWGP and the ORE Catapult, and all the money that the Crown and other private entities are putting in there. This hundred gigawatts is really gonna look like building out that local supply chain: jobs, all these different things. ’Cause Allen, like you mentioned off air, if you look at a hundred gigawatts of offshore wind, that’s $200 billion, or to put it in euros, 170 to 175 billion euros, just in turbine orders. And that doesn’t cover ships, lodging, food, everything around the ports like tools, PPE, all of the stuff that’s needed by this industry. I mean, there’s a trillion dollar impact here. Speaker 2: Oh, it’s close. Yeah. It’s at least 500 billion, I would say. And Yolanda, from the asset management side, have we seen anything of this scale to manage? It does seem like there’d be a lot of [00:05:00] turbines in the water, a whole bunch of moving pieces: ships, turbines, cables, transformers, substations going different directions. What kind of infrastructure is that going to take? Yolanda Padron: You know, a lot of the teams that are there, they’re used to doing this on a grand scale, but globally, right? And so having this be all at once in the UK is definitely gonna be interesting. It’ll be a good opportunity for everybody to take all of the lessons learned and try to make sure they don’t come across any issues that they might have seen in the past, in other sites, in other countries. They just bring everything back home to their countries and make sure that everything’s fine, from development, construction, and operations. Joel Saxum: I was thinking about that, just thinking about development, construction, operations, right? So some of [00:06:00] these sites, think about that map of offshore wind in the Northern Atlantic. If this is gonna go, and we’re talking about the countries involved here, Norway, Germany, Denmark, France, Belgium, you’re gonna have it all over: into the Baltic Sea, around Denmark, into Norwegian waters, the UK, Ireland all the way over, and Iceland is there. I don’t think there’s gonna be any development there; I think maybe they’re just there as cheerleaders. Offtake, possibly, yes, some cables running over there.
But you’re going to need to repurpose some of the existing infrastructure, or rather, you’re going to get the opportunity to, and this hasn’t happened in offshore wind yet, right? Basically repowering offshore wind. And you’re going to be able to look at it like this: you’re not doing greenfield geotechnical work and greenfield subsea mapping. Some of those things are done, or most of those things are done. I know there’s two and [00:07:00] three and six and seven megawatt turbines all over the North Atlantic, so we’re gonna be able to pop some of those off and put some 15 and 20 megawatt machines in place there. I mean, of course you’re not gonna be able to reuse the same monopiles, but when it comes to, Yolanda, like you said, the lessons learned: hey, the vessel plans for this area are done; how we change crews out here, the CTVs and SOVs into port and that stuff, those learnings are done. How do we maintain export cables and inter-array cables with the geotechnics here? You’re not in a greenfield, you’re in a brownfield. A lot of those lessons learned, they’re done, right? You’ve stumbled through them, you’ve made those mistakes, you’ve had to learn on the fly and push ahead. But when you go to the next phase of repowering an offshore wind farm, the devex cost is gonna go way down, in my opinion. Now, someone may fight back on that and say, well, we have to go do some demolition or something of that sort. I’m not sure, but [00:08:00] Yolanda Padron: Yeah. But I think, you know, we like to complain sometimes in the US about how some of the studies just aren’t catered toward us, right? And we’ve seen it a lot; a lot of the studies that are made are just made in Europe, where this is all taking place. So it’s gonna be really, really interesting to see such massive growth where everything’s being developed and where the studies are localized from. You have this very niche area and they’ve studied it; they know exactly what’s going on there. And to your point, they’ve minimized the risk, like the environmental risks, as much as they could. So it’s going to be really, really interesting to have them. Joel Saxum: Insuring and financing these projects should be way easier. Speaker 2: When Europe is saying that the industry has pledged to cut costs by 30% between 2025 and 2040, you would think that the turbine [00:09:00] costs and the installation costs would have to be really cost conscious on the supply chain, taking lessons learned from the previous generations of offshore wind. I think that makes sense. 30% is still a lot, and I think the feeling I’m getting from this is: hey, we’re making a hundred gigawatt commitment to this industry, you have to work really hard to deliver an efficient product, get the cost down so it’s not costing as much as it would if we did it today. And from an offshore standpoint over in Europe, what generation are we in, in terms of turbines? Three? Are we going into four? A lot of lessons learned. Joel Saxum: Yeah. The new Siemens one’s probably generation four. Yeah, I would say generation four, because you went from the two and three megawatt machines.
Like there’s Vestas three megawatt machines all over the place, and then you went into the direct-drive [00:10:00] machines, you got into that seven and eight megawatt class, and then you got into where we’re at now, the 12 and 15 megawatt units, the Dogger Bank style stuff, and then I would say generation four is, yeah, the Siemens 21 and a half machine. That’s a good way to look at it, Allen: four, we’re on the fourth generation of offshore wind. And so generation one is about ready to start being cycled. Some of these are easier, they’re nearer to shore; we’ll see who starts to take those projects on, ’cause that’s gonna be an undertaking too. Question on the 30%: Wind Europe says industry has pledged to cut costs by 30% by 2040. Is that LCOE, or is it devex costs, or is it operational costs? Were they specific on it, or were they just kind of cutting costs? Speaker 2: My recollection when that first came about, which was six months ago, maybe a little longer, it was LCOE, [00:11:00] right? So they’re trying to drive down the dollars per, or euros per, megawatt hour output. But the capital costs, if the governments can help with the capital costs, on the interest rates, just posting bonds and keeping that down, keeping the interest rates low for these projects by funding them somehow or financing them, that will help a tremendous amount. ’Cause if interest rates remain high, and I know Europe is much lower than the United States at the minute, but if the interest rates start to creep up, these projects will not happen. They’re marginal. Joel Saxum: Because in Europe you have your central bank interest rates, but even the individual nation states will subsidize that, right? Like if you go to buy a house in Denmark right now, you pay like 1.2% interest. Speaker 2: Compared to what, six and a half right now in the States? Yeah, it’s low. Speaker 4: Australia’s wind farms are [00:12:00] growing fast, but are your operations keeping up? Join us February 17th and 18th at Melbourne’s Pullman on the Park for Wind Energy O&M Australia 2026, where you’ll connect with the experts solving real problems in maintenance, asset management, and OEM relations. Walk away with practical strategies to cut costs and boost uptime that you can use the moment you’re back on site. Register now at WMA2026.com. Wind Energy O&M Australia is created by wind professionals, for wind professionals, because this industry needs solutions, not speeches. Speaker 2: As we all know, on December 22nd, the federal government issued a stop work order on all offshore wind; that included Vineyard Wind up off the coast of Massachusetts. That’s a 62 turbine, $4.5 billion wind farm being powered by GE turbines. The government [00:13:00] cited national security concerns, but Vineyard went to court, and Federal Judge Brian Murphy ruled the administration failed to adequately explain or justify the decision to shut it down. The judge issued a stay, which is allowing Vineyard Wind to immediately resume work on the project. Now, they’re close to being finished at Vineyard. There are 44 turbines that are up and running right now, creating power and delivering power on shore. There were 17 that were partially installed when the stop order came.
The biggest issue at the moment, if they can’t get rolling again: there are 10 towers with nacelles on them, what they call hammerheads, that don’t have blades. And Vineyard Wind, last week as we were recording this, said you really don’t want hammerheads out in the water because they become a risk. They’re not assembled, completed [00:14:00] items, so lightning strikes and other things could happen, and you really don’t want them to be that way. You want to finish those turbines, and now they have an opportunity to do it. The window’s gonna be short. And Yolanda, listening to some GE discussions, they were announcing their Q4 results from last year: the ships are available till about the end of March, and then the ships are gonna finally go away and work on another project. So they have about 45 days to get these turbines done. I guess my question is, can they get it done work-wise? And I guess the issue is they gotta get the turbines running, and if they do maintenance on them later, that’s gonna be okay. So I’m wondering what they do with blade sets. Do they have a set of blades that maybe passed QC but they would like them to be better? Do they install them just to get a turbine operational, even temporarily, to get this project quote unquote completed so they can get paid? Yolanda Padron: Yeah. If the risk is low [00:15:00] enough, it should be. I mean, a little bit tight, but what else can you do? Right? You might have a shot of getting the vessel back eventually, or being able to get something in so you can do some of the blade repairs. And blade repairs up-tower would require a different vessel than bringing in a whole blade, right? So you have a very limited time scope to be able to do everything. So I don’t know that I would risk not being able to pull this off altogether, and risk the rest of the tower by not having a complete LPS and everything on there, just because not everything’s a hundred percent perfect. Joel Saxum: There’s a weird mix of technical and commercial risk here, right? Because, technically, we have these hammerheads out there, and there’s a million things that can happen with those. Like I’ve [00:16:00] personally done RCAs where you have a hammerhead, this was onshore, right, but they will get what’s called VIV, vortex-induced vibration. So when they don’t have the full components out there, wind will go by and they’ll start to shake these things. I’ve seen it where they shook them so much, because they’re not designed to be up there like that, they shook them so much that the bolts started loosening and concrete started cracking in the foundations, and it destroyed the cable systems inside the tower ’cause they sat there and vibrated so violently. So that kind of stuff is a possibility if you don’t have the right VIV protection on and those kinds of things, let alone lightning risk and some other things. So you have this technical risk of them sitting out there like that. But you also have the commercial risk, right? Because the banks, the financiers, the insurance companies, the construction policies: you gotta hit these certain timelines. It’s just like if you’re building a house, right? You have to go by the loan that the bank gives you, in micro [00:17:00] terms, to kind of think about it.
That’s the same thing that happens with this project, except this project’s four and a half billion dollars and probably has 6, 8, 10 banks involved in it. So there’s a lot of commercial risk. If you don’t move forward when you have the opportunity to, they’ll frown on that. But then you have to balance the technical side. So, looking at the project as a whole, you’ve got 62 turbines, 44 are fully operational. That leaves us with 18 that are not. Of those 18, you said Allen, 10 needed blades? Speaker 2: 10 need blades, and one still needs to be erected. Joel Saxum: Okay, so what’s the other seven? Speaker 2: They’re partially installed, so they haven’t completed the turbine. Everything’s put together, but they haven’t powered them up yet. Joel Saxum: I was told that, basically, with the kit that they have out at Vineyard Wind, they can do one turbine a day, blades. Speaker 2: That would be, yeah, that would make sense to me. Joel Saxum: But you also only have 45 days of vessel time left. You said they’re gonna leave in March, but you also gotta think it’s fricking winter in the [00:18:00] Atlantic. Speaker 2: They are using jack-ups. However, there’s big snow storms and low pressure storms that are rolling through just that area, ’cause they kind of come to the Midwest and then shoot up the East Coast. That’s where you see New York City with a lot of snow. Boston had a lot of snow just recently; they’re supposed to get another storm like that. And once it hits Boston, it kind of hits the water, which is where Vineyard is. So turbulent water for sure, super cold this time of year out there. Joel Saxum: But wind, you can’t sling blades in probably more than, what, six meters per second? That’s probably your cutoff. Speaker 2: Yeah. This is not the best time of year to be putting blade sets up offshore in the US. Joel Saxum: Technically, if you had blue skies, yeah, this thing can get done and we can move. But with weather risk added in, you’ve got some wild cards there. Speaker 2: It’s gonna be close. Joel Saxum: Yeah. If we look at the weather, I think this coming weekend, now we’re recording in January here, and [00:19:00] this weekend, the first week of February coming, there’s supposed to be another storm rolling up through there too. Speaker 2: That’s pretty typical; having lived in Massachusetts almost 25 years, it will be stormy until April. So we’re talking about the time span in which GE and Vineyard want to be done; that’s a rough period for snow. And historically that timeframe is also when nor’easters happen, where the storms just sit there and cyclone off the shore around Vineyard and then dump the snow back on land. Those storms are really violent, and there’s no way they’re gonna be hanging anything out in the water, so I think it’s gonna be close. They’re gonna have to hope for good weather. Don’t let blade damage catch you off guard. Eologix-Ping sensors detect issues before they become expensive, time-consuming problems, from ice buildup and lightning strikes to pitch misalignment and internal blade cracks. Eologix-Ping has you covered. The cutting edge sensors are easy to install, giving you [00:20:00] the power to stop damage before it’s too late. Visit eologix-ping.com and take control of your turbine’s health today.
So while GE Vernova celebrated strong results in its Q4 report, in both its energy and electrification businesses, the company’s wind division told a different story. In the fourth quarter of 2025, wind revenue fell 24% to $2.37 billion, driven primarily by offshore wind struggles: Vineyard Wind. The company recorded approximately $600 million in wind losses for the full year, up from earlier expectations of about $400 million; that’s what I remember from last summer. The culprit was all Vineyard Wind. They gotta get this project done, and with these work stoppages, it just keeps dragging on and on and on. And I know GE has really wanted to wrap that up as [00:21:00] fast as they can. CEO Scott Strazik has said the company delivered strong financial results, which they clearly have, because their gas turbine business is taking orders out to roughly 2035, and I think the number on the backlog was somewhere in the realm of 150 billion dollars, which is an astronomical number for back orders. And because they have back orders that far out, they’re raising prices, which improves margins, which makes everybody on the stock market happy. You would think, Joel? Except after the Q4 results today, GE Vernova stock is really flat. Joel Saxum: Which is an odd thing, right? I talk about it all the time. I’m always thinking they’re gonna drop, and they go up and they go up and they go up. But today was just kind of like, I don’t know how to take it. And I don’t know if it’s a broader sentiment across what the market was doing today, because there were some other tech earnings and things of that sort, but it’s always something to watch. So there’s [00:22:00] some interesting stuff going on in the GE world. But one thing I want to touch on here: we’re talking like Vineyard Wind caused them these delays, and there is a larger call to understand why there were these delays, because it’s causing havoc across the industry. Even a lot of conservative lawmakers, there were some senators and such coming out saying, we need more transparency to understand these 90-day halts because of what it’s doing to the industry, right? Because to date there hasn’t really been any explanation, and the judges have been just kind of throwing them out. But you can see what it’s done here to GE: recording $600 million in wind losses. I mean, that is mostly all Vineyard Wind, right? But there’s a little bit of Dogger Bank stuff in there, I would imagine. Speaker 2: A tiny bit, really, ’cause Dogger has been a lot less stressful to GE. Joel Saxum: But it is, yeah, the uncertainty of the market. And that’s why I said a little bit ago, when this thing is done, when Vineyard [00:23:00] Wind is done, when you can put the final nail in the coffin of construction on that, it is gonna be a sigh of relief over at GE’s offices, for sure. Speaker 2: Our friend Lene Helstern appeared in Energy Watch this week; she’s spent a long time in the wind industry, she’s been in it 25 years, and she commented that she’s seeing some troubling things. She’s also the new CEO of Wind Power Lab over in Denmark, a consultancy firm on wind turbines and particularly blades. Lene says that she’s watched some really significant manufacturing errors and operational defects in wind turbine blades become more frequent.
And in 2025 alone, Wind Power Lab analyzed and provided repair recommendations for over 700 blades globally. And I assume our Blade Whisperer, Morten Handberg, was involved in a number of those. The problem, she says, is that the market eagerly [00:24:00] demanded cheap turbines, which is true, and everything had to be done faster and with lower costs, and you end up with a product that reflects that. We’ve had Lene on the podcast a couple of times; super smart, she’s great to talk to, to get offline and understand what’s happening behind the scenes and in some of these conference rooms between asset managers, operators, and OEMs. Those are sometimes tough discussions, but I think Lene’s pointing out something that the industry has been trying to deal with, and she’s raising it up to a higher level because she has that weight to do that. We have some issues with blades that we need to figure out pretty quickly. And Yolanda, you ran a large operator in the United States dealing with more than a thousand turbines. How locked in is Lene to [00:25:00] some of these issues? And are they purely driven by the push to lower the cost of the blades, or was it more of a speed issue, that they’re making longer blades in the same amount of time? Where’s that balance, and what are we going to do about it going forward as we continue to make larger turbines? Yolanda Padron: She’s great with her point, and I think it’s a little bit about, or equally about, the OEMs maybe not being aware of these issues as much, or not having the bandwidth to take care of these issues with limited staff, and a lot of the people who are in charge of developing and constructing these projects in a very short amount of time, or at least having to wear so many hats, don’t necessarily have the bandwidth to do a deep dive on what the potential risks could be in [00:26:00] operations. And the way I’ve seen it, I’ve experienced it, it’s almost like everybody’s running a marathon with their shoelaces untied, so they trip and then they just keep on running ’cause you’re behind ’cause you tripped. It just keeps going; it’s a vicious cycle. But we’ve also seen, just in our time together, that there’s a lot of people noticing this and taking the time to just pause, you know, tie those shoelaces, and talk to each other a little bit more: hey, I’m the one engineer doing this for so many turbines, you have these turbines too, are you seeing this issue? Yes, no? How are you tackling it? How have you tackled it in the past? How can we work together to use the data we have? I mean, if you’re not going to get a really great answer from your OEMs, or you’re not going to get a lot of [00:27:00] easily available answers just from the dataset that you’re seeing from your turbine, it’s really easy now to reach out to other people within the industry and to be able to talk it over, which I think is something that Lene is definitely encouraging here. Joel Saxum: Yeah. I mean, she makes a statement about owners needing to be technically mature: ensure you have inspections, get your TSAs right. So these are, again, lessons learned. It’s sharing knowledge within the market, because at the end of the day, this is not a new reality; this is the reality we’re living in. It’s not new.
Um, but we’re getting better at it. I think that’s the important thing here, right? If we take the collective group of operators in the world and say, you know, where were you two, three years ago and where are you today, I think we’re in a much better place, and that’s from knowledge sharing and understanding these issues. And, you know, we’re at the behest of good, fast, cheap: pick two. [00:28:00] Right. And so that’s got us where we are today. But now we’re starting to get best practices, lessons learned, fix things for the next go-around. And you’re seeing efforts at the OEM level as well, and some of these consultants coming out, to try to fix some of these manufacturing issues. You know, Allen, you and I have talked with DFS Composites, with Gulf Wind Technology. There’s things here that we could possibly fix. You’re starting to see operators do internal inspections of the blades on the ground before they fly them. That’s huge, right? Wind Power Lab has been talking about that since 2021. But the message is finally getting out to the industry that this is what you should be doing as a best practice to, you know, de-risk. ’Cause that’s the whole thing: de-risk, de-risk, de-risk. So I think Lene’s spot on, right? We know that these things are happening, we’re working with the OEMs to address them, but it takes a technically mature operator. And if you don’t have the staff to be technically mature, go grab a consultant, [00:29:00] go grab someone who is, to help you out. I think that’s an important thing to take from this as well. Those people are out there, those groups are out there, so go and enlist them to make sure you’re de-risking this thing, because at the end of the day, if we’re de-risking turbines, it’s better for the whole industry. Speaker 2: Yeah. You want to grab somebody that has seen a lot of blades, not a sole consultant on a particular turbine line. At this point in the development of the wind industry, you’re talking about Wind Power Lab, SkySpecs kind of companies that have seen thousands of turbines and have a broad reach, where they’ve done things globally, not just in Scandinavia or the US or Australia or somewhere else. They’ve seen problems worldwide. Those people exist, and I don’t think we as an industry use them as much as we could, but it would get us to the solutions faster, because having seen so many global [00:30:00] issues with the same turbine, the solution set does vary depending on where you are, but it’s been proven out already. So even though you as an asset manager may have never heard of this technique to make your performance better, to make your blades last longer, it’s probably been done at this point, unless it’s a brand new turbine. So for a lot of the 2X machines and 3X machines, and now we’re talking about 6X machines, there’s answers out there, but you’re gonna have to reach out to somebody who has a global reach. We’ve grown too big to do it small anymore. Yolanda Padron: Which really should be a relief to all of the asset managers and operations people out there, right? You don’t have to use your turbines as guinea pigs anymore. You don’t have to struggle with this. Speaker 2: That wraps up another episode of the Uptime Wind Energy Podcast, and if today’s discussion sparked any questions or ideas, we’d love to hear from you.
Reach out to us on LinkedIn and don’t forget to subscribe so you never miss an episode. [00:31:00] And if you found value in today’s conversation, please leave us a review. It really helps other wind energy professionals discover the show. For Rosie, Yolanda, and Joel, I’m Allen Hall, and we’ll see you here next week on the Uptime Wind Energy Podcast.
Taking advantage of the NAS change, I've modified the way I bring up the Docker containers for my various self-hosted services. I no longer keep everything in a single docker-compose.yaml file, because I now use Dockge to organize it all much better. I explain what this tool is for and how I've split everything up now.
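As a rough illustration of the split described above, here is a minimal sketch assuming Dockge's usual layout of one compose file per stack under /opt/stacks; the stack names, the Plex image and the paths are illustrative, not the author's actual setup:

```bash
# Before: one monolithic docker-compose.yaml with every service in it.
# After: one directory per stack, each with its own compose.yaml,
# which is the layout Dockge manages.
sudo mkdir -p /opt/stacks/{plex,nextcloud,traefik}

# Move (or rewrite) each service definition into its own stack file.
sudo tee /opt/stacks/plex/compose.yaml > /dev/null <<'EOF'
services:
  plex:
    image: lscr.io/linuxserver/plex:latest
    restart: unless-stopped
    volumes:
      - ./config:/config
EOF

# Each stack can now be started, stopped and updated independently,
# either from the Dockge UI or straight from the CLI.
cd /opt/stacks/plex && sudo docker compose up -d
```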
This instalment is the second half of PBS Tidbit 17 in which Helma van der Linden is the instructor and Bart Busschots is the student. We pick up the plot right where Helma begins to teach how to reuse the Docker image created in the first half of the lesson. You can find Helma's fabulous tutorial shownotes and the audio podcast at pbs.bartificer.net Join the Conversation: allison@podfeet.com podfeet.com/slack Support the Show: Patreon Donation Apple Pay or Credit Card one-time donation PayPal one-time donation Podfeet Podcasts Mugs at Zazzle NosillaCast 20th Anniversary Shirts Referral Links: Setapp - 1 month free for you and me PETLIBRO - 30% off for you and me Parallels Toolbox - 3 months free for you and me Learn through MacSparky Field Guides - 15% off for you and me Backblaze - One free month for me and you Eufy - $40 for me if you spend $200. Sadly nothing in it for you. PIA VPN - One month added to Paid Accounts for both of us CleanShot X - Earns me $25%, sorry nothing in it for you but my gratitude
CasaOS is being left without support, and I wanted to migrate my Docker services to another platform, but 1) keeping Ubuntu Server underneath and 2) guaranteeing I wouldn't lose a single comma of my configuration, above all in my Plex or in Crafty (my Minecraft server). But instead of asking a Linux expert for help (and I have access to Lorenzo Atareao), I opted for DIY (do it yourself) with the help of Gemini. I walk you through the process and the result. This podcast is part of the Sospechosos Habituales network, where you can find many other podcasts on all kinds of topics.
In this episode: we drink coffee with taurine, hunt for side-channel vulnerabilities in Docker, replace Rust with Python, pester Gemini with silly questions, and discuss topics sent in by listeners. [00:00:00] What we learned this week [00:19:48] Coffee and nervousness/irritability [00:42:47] Fun-reliable side-channels for cross-container communication [00:57:03] How Rust lost to Python on speed https://eax.me/2026/2026-01-23-rust-vs-python.html https://www.reddit.com/r/Python/comments/1dv811q/flpc_probably_the_fastest_regex_library_for/ https://docs.rs/plotters/latest/plotters/ https://github.com/pola-rs/polars [01:11:08]… Read more →
Still using Docker Compose for everything? It's time to discover the real power of Podman: Pods. In this episode I walk you through the migration of a complete WordPress, MariaDB and Redis stack so you can see how radically the management of your containers can be simplified. You'll learn why the concept of the "pod" changes the rules of the game by letting your containers share the same network and IP address, talking to each other directly over localhost. We cover everything from how the infra container works under the hood to professional automation with Quadlet and systemd. What is a Pod?: where the name comes from and why it is the ideal logical unit for your services. Goodbye networking headaches: how to connect WordPress and its database without creating virtual networks, using plain 127.0.0.1. Security and sidecars: locking services such as Redis inside the same pod so they are unreachable from the outside. Unified management: how to stop, start and monitor your whole stack with a single command. Persistence and automation: generating Kubernetes YAML files and turning them into native Linux services with .kube files. If you are looking for practical solutions for "anything you want to do with Linux", this episode gives you the tools to professionalize your infrastructure. Full notes and the commands used: https://atareao.es/podcast/766
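A minimal sketch of the pod-based stack described above, using standard Podman commands; the image tags, passwords, volume names and published port are illustrative assumptions, and the authoritative commands are in the episode notes linked above:

```bash
# Create the pod: every container inside it shares one network namespace,
# so only the pod itself publishes ports to the host.
podman pod create --name wpstack -p 8080:80

# Database and cache join the pod; they are reachable from the other
# containers on 127.0.0.1 but are never exposed outside the pod.
podman run -d --pod wpstack --name wp-db \
  -e MARIADB_ROOT_PASSWORD=changeme -e MARIADB_DATABASE=wordpress \
  -v wp-db-data:/var/lib/mysql docker.io/library/mariadb:11

podman run -d --pod wpstack --name wp-cache docker.io/library/redis:7

# WordPress talks to MariaDB (and Redis) over localhost, no virtual network needed.
podman run -d --pod wpstack --name wp-app \
  -e WORDPRESS_DB_HOST=127.0.0.1 -e WORDPRESS_DB_USER=root \
  -e WORDPRESS_DB_PASSWORD=changeme \
  docker.io/library/wordpress:latest

# Manage the whole stack as one unit, and export it as Kubernetes YAML
# that a Quadlet .kube unit can later run under systemd.
podman pod stop wpstack && podman pod start wpstack
podman generate kube wpstack > wpstack.yaml
```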
This very special episode of Programming By Stealth is a Tidbit written and taught by the lovely Helma van der Linden. Bart has wanted to understand Docker better, and Helma has some great use cases for how to use them for developer setups, so it was a good opportunity for Bart to learn from Helma. The material is quite long, so the podcast was recorded in two segments, Tidbit 17a and b. Tidbit b will be along shortly, and picks up at the heading entitled "Reusing the Docker image". You can find Helma's fabulous tutorial shownotes and the audio podcast at pbs.bartificer.net Join the Conversation: allison@podfeet.com podfeet.com/slack Support the Show: Patreon Donation Apple Pay or Credit Card one-time donation PayPal one-time donation Podfeet Podcasts Mugs at Zazzle NosillaCast 20th Anniversary Shirts Referral Links: Setapp - 1 month free for you and me PETLIBRO - 30% off for you and me Parallels Toolbox - 3 months free for you and me Learn through MacSparky Field Guides - 15% off for you and me Backblaze - One free month for me and you Eufy - $40 for me if you spend $200. Sadly nothing in it for you. PIA VPN - One month added to Paid Accounts for both of us CleanShot X - Earns me $25%, sorry nothing in it for you but my gratitude
Welcome to episode 340 of The Cloud Pod, where the forecast is always cloudy! It's a full house (eventually) with Justin, Jonathan, Ryan, and Matt all on board for today's episode. We've got a lot of announcements, from Gemini for Gov (no more CamoGPT!) to Route 52 and Claude. Let's get started! Titles we almost went with this week Claude’s Pricing Tiers: Free, Pro, and Maximum Overdrive GitHub Copilot Learns Database Schema: Finally an AI That Understands Your Joins SSMS Gets a Copilot: Your T-SQL Now Writes Itself While You Grab Coffee Too Many Cooks in the Cloud Kitchen: How 32 GPUs Outcooked the Big Tech Industrial Kitchens Uncle Sam Gets a Gemini Twin: Google’s AI Goes Federal Route 53 Gets Domain of Its Own: .ai Joins the Party Thai One On: Google Cloud Plants Its Flag in Bangkok NAT So Fast: Azure’s Gateway Gets a V2 Glow-Up Beware Azure's SQL Assistant doesn't smoke your joints. AI Is Going Great, Or How ML Makes Money 30:10 Announcing BlackIce: A Containerized Red Teaming Toolkit for AI Security Testing | Databricks Blog Databricks released BlackIce, an open-source containerized toolkit that bundles 14 AI security testing tools into a single Docker image available on Docker Hub as databricksruntime/blackice:17.3-LTS. The toolkit addresses common red teaming challenges, including conflicting dependencies, complex setup requirements, and the fragmented landscape of AI security tools, by providing a unified command-line interface similar to how Kali Linux works for traditional penetration testing. The toolkit includes tools covering three main categories: Responsible AI, Security testing, and classical adversarial ML, with capabilities mapped to MITRE ATLAS and the Databricks AI Security Framework. Tools are organized as either static (simple CLI-based with minimal programming needed) or dynamic (Python-based with customization options), with static tools isolated in separate virtual environments and dynamic tools in a global environment with managed dependencies. BlackIce integrates directly with Databricks Model Serving endpoints through custom patches applied to several tools, allowing security teams to test for vulnerabilities like prompt injections, data leakage, hallucination detection, jailbreak attacks, and supply chain security issues. Users can deploy it via Databricks Container Services by specifying the Docker image URL when creating compute clusters. The release includes a demo notebook showing how to orchestrate multiple security tools in a single environment, with all build artifacts, tool documentation, and examples available in the GitHub repository. The CAMLIS Red Paper provides additional technical details on tool selection criteria and the Docker image architecture. 04:30 Ryan – “It's very difficult to feel confident in your AI security practice or patterns. I feel like it's just bleeding edge, and I
When your self-hosted services become infrastructure, breakage matters. We tackle monitoring that actually helps, alerts you won't ignore, and DNS for local and multi-mesh network setups. Sponsored By: Jupiter Party Annual Membership: Put your support on automatic with our annual plan, and get one month of membership for free! Managed Nebula: Meet Managed Nebula from Defined Networking. A decentralized VPN built on the open-source Nebula platform that we love. Support LINUX Unplugged Links:
This week, we discuss the end of Cloud 1.0, AI agents fixing old apps, and Chainguard vs. Docker images. Plus, the mystery of Dutch broth is finally solved. Watch the YouTube Live Recording of Episode 556 Runner-up Titles His overall deal Been there and done that been ignoring that shift key for years Cloud is just fine I'll be back in Bartertown The “F” Word Hardened-washing We'll never do this, but we should check back in in 3 months Libraries are the best Elves don't belong in space Rundown Are we at the end of cloud or cloud 1.0 It's the beginning of Cloud 2.0 Spec-driven development system for Claude Code Anthropic and App Modernization A meta-prompting, context engineering and spec-driven development What comes next, if Claude Code is as good as people say. Microsoft Spending on Anthropic Approaches $500 Million a Year Claude Code Won't Fix Your Life Coté and Tony contemplate day two AI-generated apps, and an excerpt. Why We've Tried to Replace Developers Every Decade Since 1969 Well, that escalated quickly: Zero CVEs, lots of vendors Relevant to your Interests Beijing tells Chinese firms to stop using US and Israeli cybersecurity software China blacklists VMware, Palo Alto Networks software over national security fears Kroger taps Google Gemini, announces more key AI moves Texas judge throws out second lawsuit over CrowdStrike outage Apple will pay billions for Gemini after OpenAI declined Dell wants £10m+ from VMware if Tesco case goes against it Tailscale: The Best Free App Most Mac Power Users Aren't Using How WhatsApp Took Over the Global Conversation Our approach to advertising and expanding access to ChatGPT OpenAI's ARR reached over $20 billion in 2025, CFO says Simon Willison's take on Our approach to advertising and ChatGPT The AI lab revolving door spins ever faster | TechCrunch How Markdown took over the world An Interview with United CEO Scott Kirby About Tech Transformation Conferences cfgmgmtcamp 2026, February 2nd to 4th, Ghent, BE. Coté speaking - anyone interested in being an SDI guest? DevOpsDayLA at SCALE23x, March 6th, Pasadena, CA Use code: DEVOP for 50% off. Devnexus 2026, March 4th to 6th, Atlanta, GA. Use this 30% off discount code from your pals at Tanzu: DN26VMWARE30. KubeCon EU, March 23rd to 26th, 2026 - Coté will be there on a media pass. VMware User Groups (VMUGs): Amsterdam (March 17-19, 2026) Minneapolis (April 7-9, 2026) Toronto (May 12-14, 2026) Dallas (June 9-11, 2026) Orlando (October 20-22, 2026) SDT News & Community Join our Slack community Email the show: questions@softwaredefinedtalk.com Free stickers: Email your address to stickers@softwaredefinedtalk.com Follow us on social media: Twitter, Threads, Mastodon, LinkedIn, BlueSky Watch us on: Twitch, YouTube, Instagram, TikTok Book offer: Use code SDT for $20 off "Digital WTF" by Coté Sponsor the show Recommendations Brandon: The Library will loan you a 5G hotspot Matt: Deep Rock Galactic: Survivor (rogue-like Vampire Hunters-type game) Coté: Streamyard shorts generation. Salesforce was inspired by dolphins.
I talk with David Flanagan, aka Rawkode, about his new opinionated Tech Matrix that helps you navigate the overwhelming CNCF landscape. https://rawkode.academy/technology/matrix
Sandboxing is having a moment. As agents move from chat windows into terminals, repos, and production-adjacent workflows, the question is no longer “What can AI generate?” but “Where can it safely run?” In this episode, Scott talks with Mark Cavage, President of Docker, about the resurgence of sandboxes as critical infrastructure for the agent era and the thinking behind Docker's newly released sandbox feature. They explore why isolation, reproducibility, and least-privilege execution are becoming table stakes for AI-assisted development. From protecting local machines to enabling trustworthy automation loops, Scott and Mark dig into how modern sandboxes differ from traditional containers, what developers should expect from secure agent runtimes, and why the future of “AI that does things” will depend as much on boundaries as it does on model capability.
In this episode of Tank Talks, Matt Cohen sits down with Timothy Chen, the sole General Partner at Essence VC. Tim shares his remarkable journey from being a “nerdy, geeky kid” who hacked open-source projects to becoming one of the most respected early-stage infrastructure investors, backing breakout companies like Tabular (acquired by Databricks for $2.2 billion). A former engineer at Microsoft and VMware, co-founder of Hyperpilot (acquired by Cloudera), and now a solo GP who quietly raised over $41 million for his latest fund, Tim offers a unique, no-BS perspective on spotting technical founders, navigating the idea maze, and rethinking sales and traction in the world of AI and infrastructure.We dive deep into his unconventional path into VC, rejected by traditional Sand Hill Road firms, only to build a powerhouse reputation through sheer technical credibility and founder empathy. Tim reveals the patterns behind disruptive infra companies, why most VCs can't help with product-market fit, and how he leverages his engineering background to win competitive deals.Whether you're a founder building the next foundational layer or an investor trying to understand the infra and AI boom, this conversation is packed with hard-won insights.The Open Source Resume (00:03:44)* How contributing to Apache projects (Drill, Cloud Foundry) built his career when a CS degree couldn't.* The moment he realized open source was a path to industry influence, not just a hobby.* Why the open source model is more “vertical than horizontal”, allowing deep contribution without corporate red tape.From Engineer to Founder: The Hyperpilot Journey (00:13:24)* Leaving Docker to start Hyperpilot and raising seed funding from NEA and Bessemer.* The harsh reality of founder responsibility: “It's not about the effort hard, it's about all the other things that has to go right.”* Learning from being “way too early to market” and the acquisition by Cloudera.The Unlikely Path into Venture Capital (00:26:07)* Rejected by top-tier VC firms for a job, then prompted to start his own fund via AngelList.* Starting with a $1M “Tim Chen Angel Fund” focused solely on infrastructure.* How Bain Capital's small anchor investment gave him the initial credibility.Building a Brand Through Focus & Reputation (00:30:42)* Why focusing exclusively on infrastructure was his “best blessing” creating a standout identity in a sparse field.* The reputation flywheel: Founders praising his help led to introductions from top-tier GPs and LPs.* StepStone reaching out for a commitment before he even had fund documents ready.The Essence VC Investment Philosophy (00:44:34)* Pattern Recognition: What he learned from witnessing the early days of Confluent, Databricks, and Docker.* Seeking Disruptors, Not Incrementalists: Backing founders who have a “non-common belief” that leads to a 10x better product (e.g., Modal Labs, Cursor, Warp).* Rethinking Sales & Traction: Why revenue-first playbooks don't apply in early-stage infra; comfort comes from technical co-building and roadmap planning.* The “Superpower”: Using his engineering background to pressure-test technical assumptions and timelines with founders.The Future of Infra & AI (00:52:09)* Infrastructure as an “enabler” for new application paradigms (real-time video, multimodal apps).* The coming democratization of building complex systems (the “next Netflix” built by smaller teams).* The shift from generalist backend engineers to specialists, enabled by new stacks and AI.Solo GP Life & Staying Relevant 
(00:54:55)* Why being a solo GP doesn't mean being a lone wolf; 20-30% of his time is spent syncing with other investors to learn.* The importance of continuous learning and adaptation in a fast-moving tech landscape.* His toolkit: Using portfolio company Clerky (a CRM) to manage workflow.About Timothy ChenFounder and Sole General Partner, Essence VCTimothy Chen is the Sole General Partner at Essence VC, a fund focused on early-stage infrastructure, AI, and open-source innovation. A three-time founder with an exit, his journey from Microsoft engineer to sought-after investor is a masterclass in building credibility through technical depth and founder-centric support. He has backed companies like Tabular, Iteratively, and Warp, and his insights are shaped by hundreds of conversations at the bleeding edge of infrastructure.Connect with Timothy Chen on LinkedIn: linkedin.com/in/timchenVisit the Essence VC Website: https://www.essencevc.fund/Connect with Matt Cohen on LinkedIn: https://ca.linkedin.com/in/matt-cohen1Visit the Ripple Ventures website: https://www.rippleventures.com/ This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit tanktalks.substack.com
Did you know that running Docker as root is, in essence, handing the keys to your house to any process running inside a container? In this episode we explore rootless mode, a configuration that drastically improves the security of your servers by removing the need for administrator privileges. The escape experiment: I explain how an attacker can use the Docker socket to "escape" the container and take full control of your host. User namespaces demystified: you'll understand how Linux creates a "parallel reality" in which you are root inside the container but an ordinary, unprivileged user outside of it. The magic of subuid and subgid: how the system manages ID ranges so your processes live inside their own security bubble. Podman vs Docker: why Podman stops privilege-escalation attacks natively thanks to its daemonless, rootless architecture. Implementation guide: the key steps to set this environment up on Ubuntu and Arch Linux, including storage and networking management. The low-ports trick: how to let your user bind ports 80 and 443 without convoluted firewall workarounds, using a simple sysctl. If you care about the security of your self-hosted services or want to take your container management to the next level of professionalism, this episode is for you. Full notes and commands: visit atareao.es for all the scripts and configurations mentioned. YouTube: search for "atareao" to watch the visual demo of the escape attack. Project: follow the podcast for more solutions on optimizing your Linux systems. More information and links in the episode notes
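A minimal sketch of the rootless setup and the low-ports sysctl mentioned above, on a Debian/Ubuntu-style system; package availability and exact steps vary by distribution, so treat this as an outline rather than the episode's literal script:

```bash
# 1. The user needs subordinate UID/GID ranges (normally created with the
#    account); verify they exist before going rootless.
grep "^$USER:" /etc/subuid /etc/subgid

# 2. Install and start the rootless Docker daemon for this user
#    (the setup tool ships in the docker-ce-rootless-extras package).
dockerd-rootless-setuptool.sh install
systemctl --user enable --now docker
export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock

# 3. Allow unprivileged processes to bind ports 80/443, no firewall tricks needed.
echo 'net.ipv4.ip_unprivileged_port_start=80' | sudo tee /etc/sysctl.d/99-rootless-ports.conf
sudo sysctl --system

# 4. Containers now run under your UID; root inside the container maps to an
#    unprivileged ID taken from your subuid range outside of it.
docker run -d -p 80:80 docker.io/library/nginx:alpine
```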
Is this the end of an era in my infrastructure? In this episode I begin one of the most important tasks of this season: the definitive migration from Docker to Podman. If you have ever worried about the security of your server or about runaway resource consumption, this show is for you. Historically, Docker has been the reference tool, but its architecture, built around a persistent daemon (dockerd) running as root, introduces risks and single points of failure that I have decided to leave behind. Podman solves this elegantly, integrating organically into Linux and removing the need for administrator privileges. In this episode we explore: The "Big Brother" problem: why depending on one centralized process that, if it fails, stops all your services is a liability. Independent processes: how Podman launches containers as direct child processes, improving reliability. Rootless security: why running containers without root privileges drastically reduces the attack surface. A transparent transition: how to reuse what you already know about Docker with a simple alias. Log and registry management: native integration with journald and the freedom to configure multiple image registries. The concept of Pods: a key tool for grouping containers that share network and resources. This is only the first installment of a series in which we'll dig into practical examples so you can optimize your Linux system too. Chapters 00:00:00 Introduction and the challenge of migrating to Podman 00:01:25 Goodbye to the daemon: the problem with Docker's architecture 00:02:33 Independent processes and isolation in Podman 00:03:49 Native integration with systemd and the standard Linux model 00:04:41 Security: containers without administrator privileges (rootless) 00:06:10 A painless transition: the Docker-to-Podman alias 00:07:33 Registries and managing custom images 00:09:36 Efficient log management and journald 00:11:20 Pods: grouping containers and communicating over localhost 00:14:51 The sidecar pattern for adding functionality 00:15:40 Introduction to Quadlets and the future of the series 00:17:36 Conclusion and farewell. Did you find this useful? If you enjoy practical content about Linux and open source, don't forget to follow the podcast and leave a 5-star rating on Spotify. See you next Monday! More information and links in the episode notes
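A minimal sketch of the transition tricks listed above (the alias, per-user registry configuration and journald-backed logs); the paths, registry choices and container name are illustrative assumptions, not the episode's exact commands:

```bash
# Reuse your Docker muscle memory: the Podman CLI is argument-compatible for
# the common cases, so an alias covers most day-to-day usage.
alias docker=podman
docker run -d --name web -p 8080:80 docker.io/library/nginx:alpine

# Registries are configured per user instead of being hard-wired to Docker Hub.
mkdir -p ~/.config/containers
cat > ~/.config/containers/registries.conf <<'EOF'
unqualified-search-registries = ["docker.io", "quay.io"]
EOF

# With the journald log driver, container output lands in the journal; the
# entries carry a CONTAINER_NAME field, so journalctl can filter on it
# (add --user on some rootless setups).
podman logs web
journalctl CONTAINER_NAME=web -n 20
```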
Back to a full house of five in this episode, les cast codeurs kick off the year with a big episode full of news and in-depth articles. AI of course and its impact on practices, Mockito turning a page, some CSS (yes, really), the (non-)mapping of REST APIs onto MCP, and a whole heap of tools for you. Recorded January 9, 2026. Download the episode LesCastCodeurs-Episode-335.mp3 or watch the video on YouTube. News Languages Will 2026 be the year of Java in the terminal? (word is it just might be…) https://xam.dk/blog/lets-make-2026-the-year-of-java-in-the-terminal/ 2026: the year of Java in the terminal, catching up with Python, Rust, Go and Node.js. Java is underrated for CLI applications and TUIs (terminal user interfaces) despite its capabilities. The old excuses (slow startup, heavy tooling, verbosity, complicated distribution) are obsolete thanks to recent advances: GraalVM Native Image for millisecond startup. JBang for simplified execution of Java scripts (single files, dependencies) and JARs. JReleaser for automated multi-platform distribution (Homebrew, SDKMAN, Docker, native images). Project Loom for easy concurrency with virtual threads. PicoCLI for argument handling. The potential goes beyond scripts: building complete, good-looking TUIs (dashboards, file managers, AI assistants). Outdated excuses: fast startup (GraalVM), lightness (JBang), simple distribution (JReleaser), concurrency (Loom). The potential: build rich, good-looking TUI applications (a small JBang sketch follows below, after the Ruby news). Ruby 4.0.0 released https://www.ruby-lang.org/en/news/2025/12/25/ruby-4-0-0-released/ Ruby Box (experimental): a new feature for isolating definitions (classes, modules, monkey patches) in separate boxes to avoid global conflicts. ZJIT: a new next-generation JIT compiler written in Rust, aiming to eventually surpass YJIT (currently experimental). Ractor improvements: introduction of Ractor::Port for better communication between Ractors, and optimized internal structures to reduce global lock contention. Syntax changes: logical operators (||, &&, and, or) at the start of a line can now continue the previous line, making a "fluent" style easier. Core classes: Set and Pathname become built-in (core) classes instead of living in the standard library. Better diagnostics: argument errors (ArgumentError) now show code snippets for both the caller AND the method definition. Performance: Class#new optimized, faster instance variable access and significant garbage collector (GC) improvements. Cleanup: removal of obsolete behaviors (such as spawning processes via IO.open with |) and an update to Unicode 17.0.
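As a rough illustration of the JBang + PicoCLI workflow mentioned in the Java-in-the-terminal item above, here is a minimal, self-contained sketch; the file name, greeting logic and dependency version are assumptions for the example, not taken from the linked article:

```bash
# Write a single-file Java CLI; the //DEPS line is JBang's way of
# declaring dependencies without any build tool.
cat > Hello.java <<'EOF'
///usr/bin/env jbang "$0" "$@" ; exit $?
//DEPS info.picocli:picocli:4.7.5

import picocli.CommandLine;
import picocli.CommandLine.Command;
import picocli.CommandLine.Option;
import java.util.concurrent.Callable;

@Command(name = "hello", mixinStandardHelpOptions = true, description = "Tiny demo CLI.")
class Hello implements Callable<Integer> {
    @Option(names = {"-n", "--name"}, defaultValue = "world") String name;

    public Integer call() {
        System.out.println("Hello, " + name + "!");
        return 0;
    }

    public static void main(String... args) {
        System.exit(new CommandLine(new Hello()).execute(args));
    }
}
EOF

# Run it directly: JBang resolves picocli and compiles on the fly.
jbang Hello.java --name terminal
```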
Libraries An introduction to building a multi-tenant app with Quarkus and nip.io https://www.the-main-thread.com/p/quarkus-multi-tenant-api-nipio-tutorial Building a multi-tenant REST API in Quarkus with isolation per subdomain. Using nip.io for automatic DNS resolution with no local configuration. Extracting the tenant from the HTTP Host header via a JAX-RS filter. Tenant context managed with CDI in request scope for data isolation. An application service holding tenant-specific data in a concurrent Map. An HTML/JS web interface to view and add per-tenant data. CORS configuration needed for local development. The pattern acme.127-0-0-1.nip.io resolves automatically to localhost (a quick curl sketch follows at the end of these Libraries items). Full code available on GitHub with curl examples and browser tests. A good starting point for SaaS prototyping and multi-tenant testing. Hibernate 7.2 with a few interesting improvements https://docs.hibernate.org/orm/7.2/whats-new/ Read-only replica (experimental): creates two session factories and swaps at the JDBC level if the driver supports it, with a custom mechanism otherwise. You can open a read-only child StatelessSession (sharing the transactional context). The Hibernate Vector module adds binary, float16 and sparse vectors. The SchemaManager can resynchronize sequences against the data in the tables. Regular expressions in HQL with like. A new version of Hibernate with Panache for Quarkus https://quarkus.io/blog/hibernate-panache-next/ A new experimental extension that unifies Hibernate ORM with Panache and Hibernate Reactive with Panache. Entities can now work in blocking or reactive mode without changing the base type. Support for stateless sessions (StatelessSession) in addition to traditional managed entities. Jakarta Data integration for type-safe queries checked at compile time. Operations are defined in nested repositories rather than static methods. Several repositories can be defined for different operation modes on the same entity. The different modes (blocking/reactive, managed/stateless) are reached via supertype methods. Support for the @Find and @HQL annotations to generate type-safe queries. Repository access via injection or via the generated metamodel. The extension lives in the main branch; feedback is welcome on Zulip or GitHub. Spring Shell 4.0.0 GA released - https://spring.io/blog/2025/12/30/spring-shell-4-0-0-ga-released Final release of Spring Shell 4.0.0, available on Maven Central. Compatible with the latest Spring Framework and Spring Boot versions. Reworked command model to simplify building interactive CLI applications. jSpecify integration to improve protection against NullPointerExceptions. A more modular architecture allowing better customization and extension. Documentation and samples fully updated to ease onboarding. A migration guide to v4 is available on the project wiki. Bug fixes improving stability and reliability. Lets you build standalone Java applications runnable with java -jar or as GraalVM native images. An opinionated approach to CLI development while remaining flexible for specific needs. A new version of the library that implements gatherers beyond those in the JDK https://github.com/tginsberg/gatherers4j/releases/tag/v0.13.0 gatherers4j v0.13.0.
New gatherers: uniquelyOccurringBy(), moving/runningMedian(), moving/runningMax/Min(). Change: the "moving" gatherers now include partial values by default (use excludePartialValues() to turn that off). LangChain4j 1.10.0 https://github.com/langchain4j/langchain4j/releases/tag/1.10.0 Introduction of a model catalogue for Anthropic, Gemini, OpenAI and Mistral. Observability and monitoring capabilities for agents. Support for structured outputs, advanced tools and PDF analysis via URL for Anthropic. Support for transcription services for OpenAI. Chat configuration parameters can now be passed as method arguments. New moderation guardrail for incoming messages. Support for model reasoning content. Introduction of hybrid search. MCP client improvements. The Mockito lead steps down after 10 years https://github.com/mockito/mockito/issues/3777 Tim van der Lippe, a major Mockito maintainer, announces his departure for March 2026, closing out a decade of contributions to the project. One of the main reasons is burnout linked to recent JVM changes (JVM 22+) around agents, which impose heavy technical constraints without a simple alternative offered by the JDK maintainers. He calls out the lack of support and the pressure put on open source volunteers during these major technology transitions. The growing complexity of supporting Kotlin, which uses the JVM in its own particular way, makes the Mockito codebase harder to maintain and less pleasant to evolve, in his view. He has lost the enjoyment and now prefers to spend his free time on other projects such as Servo, a web engine written in Rust. A transition period is planned until March to hand maintenance over to new contributors.
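Referring back to the Quarkus + nip.io item above, a small sketch of how the tenant subdomains resolve and how such an API could be exercised from the command line; the port and endpoint path are assumptions for illustration, not taken from the linked tutorial:

```bash
# Any subdomain of 127-0-0-1.nip.io resolves to 127.0.0.1, so each "tenant"
# gets its own hostname with zero local DNS configuration.
dig +short acme.127-0-0-1.nip.io     # -> 127.0.0.1
dig +short globex.127-0-0-1.nip.io   # -> 127.0.0.1

# The application reads the Host header to decide which tenant's data to
# serve: two calls to the same local server, two isolated tenants.
curl http://acme.127-0-0-1.nip.io:8080/api/items
curl http://globex.127-0-0-1.nip.io:8080/api/items
```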
Infrastructure Le premier intérêt de Kubernetes n'est pas le scaling - https://mcorbin.fr/posts/2025-12-29-kubernetes-scale/ Avant Kubernetes, gérer des applications en production nécessitait de multiples outils complexes (Ansible, Puppet, Chef) avec beaucoup de configuration manuelle Le load balancing se faisait avec HAProxy et Keepalived en actif/passif, nécessitant des mises à jour manuelles de configuration à chaque changement d'instance Le service discovery et les rollouts étaient orchestrés manuellement, instance par instance, sans automatisation de la réconciliation Chaque stack (Java, Python, Ruby) avait sa propre méthode de déploiement, sans standardisation (rpm, deb, tar.gz, jar) La gestion des ressources était manuelle avec souvent une application par machine, créant du gaspillage et complexifiant la maintenance Kubernetes standardise tout en quelques ressources YAML (Deployment, Service, Ingress, ConfigMap, Secret) avec un format déclaratif simple Toutes les fonctionnalités critiques sont intégrées : service discovery, load balancing, scaling, stockage, firewalling, logging, tolérance aux pannes La complexité des centaines de scripts shell et playbooks Ansible maintenus avant était supérieure à celle de Kubernetes Kubernetes devient pertinent dès qu'on commence à reconstruire manuellement ces fonctionnalités, ce qui arrive très rapidement La technologie est flexible et peut gérer aussi bien des applications modernes que des monolithes legacy avec des contraintes spécifiques Mole https://github.com/tw93/Mole Un outil en ligne de commande (CLI) tout-en-un pour nettoyer et optimiser macOS. Combine les fonctionnalités de logiciels populaires comme CleanMyMac, AppCleaner, DaisyDisk et iStat Menus. Analyse et supprime en profondeur les caches, les fichiers logs et les résidus de navigateurs. Désinstallateur intelligent qui retire proprement les applications et leurs fichiers cachés (Launch Agents, préférences). Analyseur d'espace disque interactif pour visualiser l'occupation des fichiers et gérer les documents volumineux. Tableau de bord temps réel (mo status) pour surveiller le CPU, le GPU, la mémoire et le réseau. Fonction de purge spécifique pour les développeurs permettant de supprimer les artefacts de build (node_modules, target, etc.). Intégration possible avec Raycast ou Alfred pour un lancement rapide des commandes. Installation simple via Homebrew ou un script curl. Des images Docker sécurisées pour chaque développeur https://www.docker.com/blog/docker-hardened-images-for-every-developer/ Docker rend ses "Hardened Images" (DHI) gratuites et open source (licence Apache 2.0) pour tous les développeurs. Ces images sont conçues pour être minimales, prêtes pour la production et sécurisées dès le départ afin de lutter contre l'explosion des attaques sur la chaîne logistique logicielle. Elles s'appuient sur des bases familières comme Alpine et Debian, garantissant une compatibilité élevée et une migration facile. Chaque image inclut un SBOM (Software Bill of Materials) complet et vérifiable, ainsi qu'une provenance SLSA de niveau 3 pour une transparence totale. L'utilisation de ces images permet de réduire considérablement le nombre de vulnérabilités (CVE) et la taille des images (jusqu'à 95 % plus petites). Docker étend cette approche sécurisée aux charts Helm et aux serveurs MCP (Mongo, Grafana, GitHub, etc.).
Des offres commerciales (DHI Enterprise) restent disponibles pour des besoins spécifiques : correctifs critiques sous 7 jours, support FIPS/FedRAMP ou support à cycle de vie étendu (ELS). Un assistant IA expérimental de Docker peut analyser les conteneurs existants pour recommander l'adoption des versions sécurisées correspondantes. L'initiative est soutenue par des partenaires majeurs tels que Google, MongoDB, Snyk et la CNCF. Web La maçonnerie ("masonry") arrive dans la spécification CSS et commence à être implémentée par les navigateurs https://webkit.org/blog/17660/introducing-css-grid-lanes/ Permet de mettre en colonne des éléments HTML les uns à la suite des autres. D'abord sur la première ligne, et quand la première ligne est remplie, le prochain élément se trouvera dans la colonne où il pourra être le plus haut possible, et ainsi de suite. Après la plomberie du middleware, la maçonnerie du front :laughing: Data et Intelligence Artificielle On ne devrait pas faire un mapping 1:1 entre API REST et MCP https://nordicapis.com/why-mcp-shouldnt-wrap-an-api-one-to-one/ Problématique : Envelopper une API telle quelle dans le protocole MCP (Model Context Protocol) est un anti-pattern. Objectif du MCP : Conçu pour les agents d'IA, il doit servir d'interface d'intention, non de miroir d'API. Les agents comprennent les tâches, pas la logique complexe des API (authentification, pagination, orchestration). Conséquences du mappage un-à-un : Confusion des agents, erreurs, hallucinations. Difficulté à gérer les orchestrations complexes (plusieurs appels pour une seule action). Exposition des faiblesses de l'API (schéma lourd, endpoints obsolètes). Maintenance accrue lors des changements d'API. Meilleure approche : Construire des outils MCP comme des SDK pour agents, encapsulant la logique nécessaire pour accomplir une tâche spécifique. Pratiques recommandées : Concevoir autour des intentions/actions utilisateur (ex. : "créer un projet", "résumer un document"). Regrouper les appels en workflows ou actions uniques. Utiliser un langage naturel pour les définitions et les noms. Limiter la surface d'exposition de l'API pour la sécurité et la clarté. Appliquer des schémas d'entrée/sortie stricts pour guider l'agent et réduire l'ambiguïté.
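Pour rendre concret ce conseil « interface d'intention plutôt que miroir d'API », une esquisse en Java pur (endpoints et noms entièrement hypothétiques, indépendante de tout SDK MCP) : un seul outil « créer un projet » regroupe trois appels REST que l'agent n'a plus à orchestrer lui-même.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Esquisse : un outil MCP orienté intention, plutôt qu'un wrapper 1:1 des endpoints REST.
public class CreateProjectTool {

    private final HttpClient http = HttpClient.newHttpClient();
    private final String baseUrl; // ex. https://api.example.com (hypothétique)

    public CreateProjectTool(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    // Une seule "intention" exposée à l'agent : créer un projet prêt à l'emploi.
    // En interne, on orchestre plusieurs appels que l'agent n'a pas besoin de connaître.
    public String createProject(String name, String owner) throws Exception {
        String projectId = post("/projects", "{\"name\":\"" + name + "\"}");
        post("/projects/" + projectId + "/members", "{\"owner\":\"" + owner + "\"}");
        post("/projects/" + projectId + "/defaults", "{}"); // configuration par défaut
        // On renvoie un résumé compact, pas le schéma brut des trois réponses HTTP.
        return "Projet '" + name + "' créé (id=" + projectId + ") pour " + owner;
    }

    private String post(String path, String jsonBody) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(baseUrl + path))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(jsonBody))
                .build();
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body(); // simplification : on suppose que le corps contient l'identifiant
    }
}

Le point clé : l'agent ne voit qu'une action au vocabulaire métier et une réponse compacte, pas trois endpoints, leur pagination ni leurs schémas complets.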
Des agents en production avec AWS - https://blog.ippon.fr/2025/12/22/des-agents-en-production-avec-aws/ AWS re:Invent 2025 a massivement mis en avant l'IA générative et les agents IA Un agent IA combine un LLM, une boucle d'appel et des outils invocables Strands Agents SDK facilite le prototypage avec boucles ReAct intégrées et gestion de la mémoire Managed MLflow permet de tracer les expérimentations et définir des métriques de performance Nova Forge optimise les modèles par réentraînement sur données spécifiques pour réduire coûts et latence Bedrock Agent Core industrialise le déploiement avec runtime serverless et auto-scaling Agent Core propose neuf piliers dont observabilité, authentification, code interpreter et browser managé Le protocole MCP d'Anthropic standardise la fourniture d'outils aux agents SageMaker AI et Bedrock centralisent l'accès aux modèles closed source et open source via API unique AWS mise sur l'évolution des chatbots vers des systèmes agentiques optimisés avec modèles plus frugaux Debezium 3.4 amène plusieurs améliorations intéressantes https://debezium.io/blog/2025/12/16/debezium-3-4-final-released/ Correction du problème de calcul du low watermark Oracle qui causait des pertes de performance Correction de l'émission des événements heartbeat dans le connecteur Oracle avec les requêtes CTE Amélioration des logs pour comprendre les transactions actives dans le connecteur Oracle Memory guards pour protéger contre les schémas de base de données de grande taille Support de la transformation des coordonnées géométriques pour une meilleure gestion des données spatiales Extension Quarkus DevServices permettant de démarrer automatiquement une base de données et Debezium en dev Intégration OpenLineage pour tracer la lignée des données et suivre leur flux à travers les pipelines Compatibilité testée avec Kafka Connect 4.1 et Kafka brokers 4.1 Infinispan 16.0.4 et .5 https://infinispan.org/blog/2025/12/17/infinispan-16-0-4 Spring Boot 4 et Spring 7 supportés Évolution dans les métriques Deux bugs de sérialisation Construire un agent de recherche en Java avec l'API Interactions https://glaforge.dev/posts/2026/01/03/building-a-research-assistant-with-the-interactions-api-in-java/ Assistant de recherche IA Java (API Interactions Gemini), test du SDK implémenté par Guillaume. Workflow en 4 phases : Planification : Gemini Flash + Google Search. Recherche : Modèle "Deep Research" (tâche de fond). Synthèse : Gemini Pro (rapport exécutif). Infographie : Nano Banana Pro (à partir de la synthèse). API Interactions : gestion d'état serveur, tâches en arrière-plan, réponses multimodales (images). Appréciation : gestion d'état de l'API (vs LLM sans état). Validation : efficacité du SDK Java pour cas complexes. Stephan Janssen (le papa de Devoxx) a créé un serveur MCP (Model Context Protocol) basé sur LSP (Language Server Protocol) pour que les assistants de code analysent le code en le comprenant vraiment plutôt qu'en faisant des grep https://github.com/stephanj/LSP4J-MCP Le problème identifié : Les assistants IA utilisent souvent la recherche textuelle (type grep) pour naviguer dans le code, ce qui manque de contexte sémantique, génère du bruit (faux positifs) et consomme énormément de tokens inutilement. La solution LSP4J-MCP : Une approche "standalone" (autonome) qui encapsule le serveur de langage Eclipse (JDTLS) via le protocole MCP (Model Context Protocol).
Avantage principal : Offre une compréhension sémantique profonde du code Java (types, hiérarchies, références) sans nécessiter l'ouverture d'un IDE lourd comme IntelliJ. Comparaison des méthodes : AST : Trop léger (pas de compréhension inter-fichiers). IntelliJ MCP : Puissant mais exige que l'IDE soit ouvert (gourmand en ressources). LSP4J-MCP : Le meilleur des deux mondes pour les workflows en terminal, à distance (SSH) ou CI/CD. Fonctionnalités clés : Expose 5 outils pour l'IA (find_symbols, find_references, find_definition, document_symbols, find_interfaces_with_method). Résultats : Une réduction de 100x des tokens utilisés pour la navigation et une précision accrue (distinction des surcharges, des scopes, etc.). Disponibilité : Le projet est open source et disponible sur GitHub pour intégration immédiate (ex: avec Claude Code, Gemini CLI, etc). À noter l'ajout dans Claude Code 2.0.74 d'un tool pour supporter LSP ( https://github.com/anthropics/claude-code/blob/main/CHANGELOG.md#2074 ) Awesome (GitHub) Copilot https://github.com/github/awesome-copilot Une collection communautaire d'instructions, de prompts et de configurations pour optimiser l'utilisation de GitHub Copilot. Propose des "Agents" spécialisés qui s'intègrent aux serveurs MCP pour améliorer les flux de travail spécifiques. Inclut des prompts ciblés pour la génération de code, la documentation et la résolution de problèmes complexes. Fournit des instructions détaillées sur les standards de codage et les meilleures pratiques applicables à divers frameworks. Propose des "Skills" (compétences) sous forme de dossiers contenant des ressources pour des tâches techniques spécialisées. (les skills sont dispo dans copilot depuis un mois : https://github.blog/changelog/2025-12-18-github-copilot-now-supports-agent-skills/ ) Permet une installation facile via un serveur MCP dédié, compatible avec VS Code et Visual Studio. Encourage la contribution communautaire pour enrichir les bibliothèques de prompts et d'agents. Aide à augmenter la productivité en offrant des solutions pré-configurées pour de nombreux langages et domaines. Sous licence MIT et maintenu activement par des contributeurs du monde entier. IA et productivité : bilan de l'année 2025 (Laura Tacho - DX) https://newsletter.getdx.com/p/ai-and-productivity-year-in-review?aid=recNfypKAanQrKszT En 2025, l'ingénierie assistée par l'IA est devenue la norme : environ 90 % des développeurs utilisent des outils d'IA mensuellement, et plus de 40 % quotidiennement. Les chercheurs (Microsoft, Google, GitHub) soulignent que le nombre de lignes de code (LOC) reste un mauvais indicateur d'impact, car l'IA génère beaucoup de code sans forcément garantir une valeur métier supérieure. Si l'IA améliore l'efficacité individuelle, elle pourrait nuire à la collaboration à long terme, car les développeurs passent plus de temps à "parler" à l'IA qu'à leurs collègues. L'identité du développeur évolue : il passe de "producteur de code" à un rôle de "metteur en scène" qui délègue, valide et exerce son jugement stratégique. L'IA pourrait accélérer la montée en compétences des développeurs juniors en les forçant à gérer des projets et à déléguer plus tôt, agissant comme un "accélérateur" plutôt que de les rendre obsolètes. L'accent est mis sur la créativité plutôt que sur la simple automatisation, afin de réimaginer la manière de travailler et d'obtenir des résultats plus impactants.
Le succès en 2026 dépendra de la capacité des entreprises à cibler les goulots d'étranglement réels (dette technique, documentation, conformité) plutôt que de tester simplement chaque nouveau modèle d'IA. La newsletter avertit que les titres de presse simplifient souvent à l'excès les recherches sur l'IA, masquant parfois les nuances cruciales des études réelles. Un développeur décrit dans un article sur Twitter son utilisation avancée de Claude Code pour le développement, avec des sous-agents, des slash-commands, comment optimiser le contexte, etc. https://x.com/AureaLibe/status/2008958120878330329?s=20 Outillage IntelliJ IDEA, thread dumps et project Loom (virtual threads) - https://blog.jetbrains.com/idea/2025/12/thread-dumps-and-project-loom-virtual-threads/ Les virtual threads Java améliorent l'utilisation du matériel pour les opérations I/O parallèles avec peu de changements de code Un serveur peut maintenant gérer des millions de threads au lieu de quelques centaines Les outils existants peinent à afficher et analyser des millions de threads simultanément Le débogage asynchrone est complexe car le scheduler et le worker s'exécutent dans des threads différents Les thread dumps restent essentiels pour diagnostiquer deadlocks, UI bloquées et fuites de threads Netflix a découvert un deadlock lié aux virtual threads en analysant un heap dump, bug corrigé dans Java 25. Mais c'était de la haute voltige IntelliJ IDEA supporte nativement les virtual threads dès leur sortie avec affichage des locks acquis IntelliJ IDEA peut ouvrir des thread dumps générés par d'autres outils comme jcmd Le support s'étend aussi aux coroutines Kotlin en plus des virtual threads Quelques infos sur IntelliJ IDEA 2025.3 https://blog.jetbrains.com/idea/2025/12/intellij-idea-2025-3/ Distribution unifiée regroupant davantage de fonctionnalités gratuites Amélioration de la complétion des commandes dans l'IDE Nouvelles fonctionnalités pour le débogueur Spring Thème Islands devient le thème par défaut Support complet de Spring Boot 4 et Spring Framework 7 Compatibilité avec Java 25 Prise en charge de Spring Data JDBC et Vitest 4 Support natif de Junie et Claude Agent pour l'IA Quota d'IA transparent et option Bring Your Own Key à venir Corrections de stabilité, performance et expérience utilisateur Plein de petits outils en ligne pour le développeur https://blgardner.github.io/prism.tools/ génération de mot de passe, de gradient CSS, de QR code encodage décodage de Base64, JWT formattage de JSON, etc. resumectl - Votre CV en tant que code https://juhnny5.github.io/resumectl/ Un outil en ligne de commande (CLI) écrit en Go pour générer un CV à partir d'un fichier YAML. Permet l'exportation vers plusieurs formats : PDF, HTML, ou un affichage direct dans le terminal. Propose 5 thèmes intégrés (Modern, Classic, Minimal, Elegant, Tech) personnalisables avec des couleurs spécifiques. Fonctionnalité d'initialisation (resumectl init) permettant d'importer automatiquement des données depuis LinkedIn et GitHub (projets les plus étoilés). Supporte l'ajout de photos avec des options de filtre noir et blanc ou de forme (rond/carré). Inclut un mode "serveur" (resumectl serve) pour prévisualiser les modifications en temps réel via un navigateur local. Fonctionne comme un binaire unique sans dépendances externes complexes pour les modèles. 
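En lien avec l'article sur les thread dumps et les virtual threads un peu plus haut, une esquisse minimale pour générer soi-même un dump exploitable : on démarre quelques milliers de virtual threads, puis on capture un dump au format JSON avec jcmd (commande standard du JDK 21+), fichier qu'IntelliJ IDEA sait ensuite ouvrir.

import java.time.Duration;
import java.util.stream.IntStream;

public class VirtualThreadsDump {
    public static void main(String[] args) throws InterruptedException {
        // Démarre 10 000 virtual threads qui simulent des I/O bloquantes.
        var threads = IntStream.range(0, 10_000)
                .mapToObj(i -> Thread.ofVirtual()
                        .name("vt-", i)
                        .start(() -> sleep(Duration.ofMinutes(5))))
                .toList();

        System.out.println("PID = " + ProcessHandle.current().pid());
        // Pendant que le programme tourne, capturer un thread dump (format JSON) :
        //   jcmd <PID> Thread.dump_to_file -format=json /tmp/dump.json
        // Le fichier peut ensuite être ouvert dans IntelliJ IDEA pour analyse.

        for (Thread t : threads) {
            t.join();
        }
    }

    private static void sleep(Duration d) {
        try {
            Thread.sleep(d);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}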
mactop - Un moniteur "top" pour Apple Silicon https://github.com/metaspartan/mactop Un outil de surveillance en ligne de commande (TUI) conçu spécifiquement pour les puces Apple Silicon (M1, M2, M3, M4, M5). Permet de suivre en temps réel l'utilisation du CPU (E-cores et P-cores), du GPU et de l'ANE (Neural Engine). Affiche la consommation électrique (wattage) du système, du CPU, du GPU et de la DRAM. Fournit des données sur les températures du SoC, les fréquences du GPU et l'état thermique global. Surveille l'utilisation de la mémoire vive, de la swap, ainsi que l'activité réseau et disque (E/S). Propose 10 mises en page (layouts) différentes et plusieurs thèmes de couleurs personnalisables. Ne nécessite pas l'utilisation de sudo car il s'appuie sur les API natives d'Apple (SMC, IOReport, IOKit). Inclut une liste de processus détaillée (similaire à htop) avec la possibilité de tuer des processus directement depuis l'interface. Offre un mode "headless" pour exporter les métriques au format JSON et un serveur optionnel pour Prometheus. Développé en Go avec des composants en CGO et Objective-C. Adieu direnv, Bonjour mise https://codeka.io/2025/12/19/adieu-direnv-bonjour-mise/ L'auteur remplace ses outils habituels (direnv, asdf, task, just) par un seul outil polyvalent écrit en Rust : mise. mise propose trois fonctions principales : gestionnaire de paquets (langages et outils), gestionnaire de variables d'environnement et exécuteur de tâches. Contrairement à direnv, il permet de gérer des alias et utilise un fichier de configuration structuré (mise.toml) plutôt que du scripting shell. La configuration est hiérarchique, permettant de surcharger les paramètres selon les répertoires, avec un système de "trust" pour la sécurité. Une "killer-feature" soulignée est la gestion des secrets : mise s'intègre avec age pour chiffrer des secrets (via clés SSH) directement dans le fichier de configuration. L'outil supporte une vaste liste de langages et d'outils via un registre interne et des plugins (compatibilité avec l'écosystème asdf). Il simplifie le workflow de développement en regroupant l'installation des outils et l'automatisation des tâches au sein d'un même fichier. L'auteur conclut sur la puissance, la flexibilité et les excellentes performances de l'outil après quelques heures de test. Claude Code v2.1.0 https://github.com/anthropics/claude-code/blob/main/CHANGELOG.md#210 Rechargement à chaud des "skills" : Les modifications apportées aux compétences dans ~/.claude/skills sont désormais appliquées instantanément sans redémarrer la session. Sous-agents et forks : Support de l'exécution de compétences et de commandes slash dans un contexte de sous-agent forké via context: fork. Réglages linguistiques : Ajout d'un paramètre language pour configurer la langue de réponse par défaut (ex: language: "french"). Améliorations du terminal : Shift+Enter fonctionne désormais nativement dans plusieurs terminaux (iTerm2, WezTerm, Ghostty, Kitty) sans configuration manuelle. Sécurité et correction de bugs : Correction d'une faille où des données sensibles (clés API, tokens OAuth) pouvaient apparaître dans les logs de débogage. Nouvelles commandes slash : Ajout de /teleport et /remote-env pour les abonnés claude.ai afin de gérer des sessions distantes. Mode Plan : Le raccourci /plan permet d'activer le mode plan directement depuis le prompt, et la demande de permission à l'entrée de ce mode a été supprimée.
Vim et navigation : Ajout de nombreux mouvements Vim (text objects, répétitions de mouvements f/F/t/T, indentations, etc.). Performance : Optimisation du temps de démarrage et du rendu terminal pour les caractères Unicode/Emoji. Gestion du gitignore : Support du réglage respectGitignore dans settings.json pour contrôler le comportement du sélecteur de fichiers @-mention. Méthodologies 200 déploiements en production par jour, même le vendredi : retours d'expérience https://mcorbin.fr/posts/2025-03-21-deploy-200/ Le déploiement fréquent, y compris le vendredi, est un indicateur de maturité technique et augmente la productivité globale. L'excellence technique est un atout stratégique indispensable pour livrer rapidement des produits de qualité. Une architecture pragmatique orientée services (SOA) facilite les déploiements indépendants et réduit la charge cognitive. L'isolation des services est cruciale : un développeur doit pouvoir tester son service localement sans dépendre de toute l'infrastructure. L'automatisation via Kubernetes et l'approche GitOps avec ArgoCD permettent des déploiements continus et sécurisés. Les feature flags et un système de permissions solide permettent de découpler le déploiement technique de l'activation fonctionnelle pour les utilisateurs. L'autonomie des développeurs est renforcée par des outils en self-service (CLI maison) pour gérer l'infrastructure et diagnostiquer les incidents sans goulot d'étranglement. Une culture d'observabilité intégrée dès la conception permet de détecter et de réagir rapidement aux anomalies en production. Accepter l'échec comme inévitable permet de concevoir des systèmes plus résilients capables de se rétablir automatiquement. "Vibe Coding" vs "Prompt Engineering" : l'IA et le futur du développement logiciel https://www.romenrg.com/blog/2025/12/25/vibe-coding-vs-prompt-engineering-ai-and-the-future-of-software-development/ L'IA est passée du statut d'expérimentation à celui d'infrastructure essentielle pour le développement de logiciels en 2025. L'IA ne remplace pas les ingénieurs, mais agit comme un amplificateur de leurs compétences, de leur jugement et de la qualité de leur réflexion. Distinction entre le "Vibe Coding" (rapide, intuitif, idéal pour les prototypes) et le "Prompt Engineering" (délibéré, contraint, nécessaire pour les systèmes maintenables). L'importance cruciale du contexte ("Context Engineering") : l'IA devient réellement puissante lorsqu'elle est connectée aux systèmes réels (GitHub, Jira, etc.) via des protocoles comme le MCP. Utilisation d'agents spécialisés (écriture de RFC, revue de code, architecture) plutôt que de modèles génériques pour obtenir de meilleurs résultats. Émergence de l'ingénieur "Technical Product Manager" capable d'abattre seul le travail d'une petite équipe grâce à l'IA, à condition de maîtriser les fondamentaux techniques. Le risque majeur : l'IA permet d'aller très vite dans la mauvaise direction si le jugement humain et l'expérience font défaut. Le niveau d'exigence global augmente : les bases techniques solides deviennent plus importantes que jamais pour éviter l'accumulation de dette technique rapide. Une revue de code en solo (Kent Beck) ! https://tidyfirst.substack.com/p/party-of-one-for-code-review?r=64ov3&utm_campaign=post&utm_medium=web&triedRedirect=true La revue de code traditionnelle, héritée des inspections formelles d'IBM, s'essouffle car elle est devenue trop lente et asynchrone par rapport au rythme du développement moderne. 
Avec l'arrivée de l'IA ("le génie"), la vitesse de production du code dépasse la capacité de relecture humaine, créant un goulot d'étranglement majeur. La revue de code doit évoluer vers deux nouveaux objectifs prioritaires : un "sanity check" pour vérifier que l'IA a bien fait ce qu'on lui demandait, et le contrôle de la dérive structurelle de la base de code. Maintenir une structure saine est crucial non seulement pour les futurs développeurs humains, mais aussi pour que l'IA puisse continuer à comprendre et modifier le code efficacement sans perdre le contexte. Kent Beck expérimente des outils automatisés (comme CodeRabbit) pour obtenir des résumés et des schémas d'architecture afin de garder une conscience globale des changements rapides. Même si les outils automatisés sont utiles, le "Pair Programming" reste irremplaçable pour la richesse des échanges et la pression sociale bénéfique qu'il impose à la réflexion. La revue de code solo n'est pas une fin en soi, mais une adaptation nécessaire lorsque l'on travaille seul avec des outils de génération de code augmentés. Loi, société et organisation Lego lance les Lego Smart Play, avec des Briques, des Smart Tags et des Smart Figurines pour faire de nouvelles constructions interactives avec des Legos https://www.lego.com/fr-fr/smart-play LEGO SMART Play : technologie réactive au jeu des enfants. Trois éléments clés : SMART Brique : Brique LEGO 2x4 "cerveau". Accéléromètre, lumières réactives, détecteur de couleurs, synthétiseur sonore. Réagit aux mouvements (tenir, tourner, taper). SMART Tags : Petites pièces intelligentes. Indiquent à la SMART Brique son rôle (ex: hélicoptère, voiture) et les sons à produire. Activent sons, mini-jeux, missions secrètes. SMART Minifigurines : Activées près d'une SMART Brique. Révèlent des personnalités uniques (sons, humeurs, réactions) via la SMART Brique. Encouragent l'imagination. Fonctionnement : SMART Brique détecte SMART Tags et SMART Minifigurines. Réagit aux mouvements avec lumières et sons dynamiques. Compatibilité : S'assemble avec les briques LEGO classiques. Objectif : Créer des expériences de jeu interactives, uniques et illimitées.
Conférences La liste des conférences provenant de Developers Conferences Agenda/List par Aurélie Vache et contributeurs : 14-17 janvier 2026 : SnowCamp 2026 - Grenoble (France) 22 janvier 2026 : DevCon #26 : sécurité / post-quantique / hacking - Paris (France) 28 janvier 2026 : Software Heritage Symposium - Paris (France) 29-31 janvier 2026 : Epitech Summit 2026 - Paris - Paris (France) 2-5 février 2026 : Epitech Summit 2026 - Moulins - Moulins (France) 3 février 2026 : Cloud Native Days France 2026 - Paris (France) 3-4 février 2026 : Epitech Summit 2026 - Lille - Lille (France) 3-4 février 2026 : Epitech Summit 2026 - Mulhouse - Mulhouse (France) 3-4 février 2026 : Epitech Summit 2026 - Nancy - Nancy (France) 3-4 février 2026 : Epitech Summit 2026 - Nantes - Nantes (France) 3-4 février 2026 : Epitech Summit 2026 - Marseille - Marseille (France) 3-4 février 2026 : Epitech Summit 2026 - Rennes - Rennes (France) 3-4 février 2026 : Epitech Summit 2026 - Montpellier - Montpellier (France) 3-4 février 2026 : Epitech Summit 2026 - Strasbourg - Strasbourg (France) 3-4 février 2026 : Epitech Summit 2026 - Toulouse - Toulouse (France) 4-5 février 2026 : Epitech Summit 2026 - Bordeaux - Bordeaux (France) 4-5 février 2026 : Epitech Summit 2026 - Lyon - Lyon (France) 4-6 février 2026 : Epitech Summit 2026 - Nice - Nice (France) 5 février 2026 : Web Days Convention - Aix-en-Provence (France) 12 février 2026 : Strasbourg Craft #1 - Strasbourg (France) 12-13 février 2026 : Touraine Tech #26 - Tours (France) 19 février 2026 : ObservabilityCON on the Road - Paris (France) 6 mars 2026 : WordCamp Nice 2026 - Nice (France) 18-19 mars 2026 : Agile Niort 2026 - Niort (France) 20 mars 2026 : Atlantique Day 2026 - Nantes (France) 26 mars 2026 : Data Days Lille - Lille (France) 26-27 mars 2026 : SymfonyLive Paris 2026 - Paris (France) 26-27 mars 2026 : REACT PARIS - Paris (France) 27-29 mars 2026 : Shift - Nantes (France) 31 mars 2026 : ParisTestConf - Paris (France) 1 avril 2026 : AWS Summit Paris - Paris (France) 2 avril 2026 : Pragma Cannes 2026 - Cannes (France) 9-10 avril 2026 : AndroidMakers by droidcon - Paris (France) 16-17 avril 2026 : MiXiT 2026 - Lyon (France) 22-24 avril 2026 : Devoxx France 2026 - Paris (France) 23-25 avril 2026 : Devoxx Greece - Athens (Greece) 24-25 avril 2026 : Faiseuses du Web 5 - Dinan (France) 6-7 mai 2026 : Devoxx UK 2026 - London (UK) 22 mai 2026 : AFUP Day 2026 Lille - Lille (France) 22 mai 2026 : AFUP Day 2026 Paris - Paris (France) 22 mai 2026 : AFUP Day 2026 Bordeaux - Bordeaux (France) 22 mai 2026 : AFUP Day 2026 Lyon - Lyon (France) 29 mai 2026 : NG Baguette Conf 2026 - Paris (France) 5 juin 2026 : TechReady - Nantes (France) 5 juin 2026 : Fork it! 
- Rouen - Rouen (France) 6 juin 2026 : Polycloud - Montpellier (France) 11-12 juin 2026 : DevQuest Niort - Niort (France) 11-12 juin 2026 : DevLille 2026 - Lille (France) 12 juin 2026 : Tech F'Est 2026 - Nancy (France) 17-19 juin 2026 : Devoxx Poland - Krakow (Poland) 17-20 juin 2026 : VivaTech - Paris (France) 2 juillet 2026 : Azur Tech Summer 2026 - Valbonne (France) 2-3 juillet 2026 : Sunny Tech - Montpellier (France) 3 juillet 2026 : Agile Lyon 2026 - Lyon (France) 2 août 2026 : 4th Tech Summit on Artificial Intelligence & Robotics - Paris (France) 4 septembre 2026 : JUG Summer Camp 2026 - La Rochelle (France) 17-18 septembre 2026 : API Platform Conference 2026 - Lille (France) 24 septembre 2026 : PlatformCon Live Day Paris 2026 - Paris (France) 1 octobre 2026 : WAX 2026 - Marseille (France) 1-2 octobre 2026 : Volcamp - Clermont-Ferrand (France) 5-9 octobre 2026 : Devoxx Belgium - Antwerp (Belgium) Nous contacter Pour réagir à cet épisode, venez discuter sur le groupe Google https://groups.google.com/group/lescastcodeurs Contactez-nous via X/twitter https://twitter.com/lescastcodeurs ou Bluesky https://bsky.app/profile/lescastcodeurs.com Faire un crowdcast ou une crowdquestion Soutenez Les Cast Codeurs sur Patreon https://www.patreon.com/LesCastCodeurs Tous les épisodes et toutes les infos sur https://lescastcodeurs.com/
Accenture, American Express, ASOS, EY, Four Seasons Hotels, Google, NBC Universal are his clients. Former Royal Air Force Senior Officer, former International Negotiator for the UK Government, executive coach. His first book, 'Find Your Why: A Practical Guide for Discovering Purpose for You and Your Team', was co-authored with Simon Sinek and David Mead. Peter gets up every day inspired to enable people to be extraordinary so that they can do extraordinary things. Collaborating with Simon Sinek for over 7 years, he was a founding Igniter and Implementation Specialist on the Start With Why team, teaching leaders and companies how to use the concept of Why. "The first step is to distinguish leadership from management. "Management is about handling complexity," explains Docker, while "leadership is about creating simplicity. It's about cutting through the noise, identifying what's really important, making it personal for people, bringing them together and connecting them." ~ Peter Docker in Venteur Magazine, January 2023. One of Peter's latest books is 'Leading from The Jumpseat: How to Create Extraordinary Opportunities by Handing Over Control'. Peter's commercial and industry experience has been at the most senior levels in sectors including oil & gas, construction, mining, pharmaceuticals, banking, television, film, media, manufacturing and services - across more than 90 countries. His career has spanned professional pilot; leading an aviation training and standards organisation; teaching post-graduates at an international college; and running multi-billion dollar procurement projects. A former Royal Air Force senior officer, he has been a Force Commander during combat flying operations and has seen service across the world. He is a seasoned crisis manager, a former international negotiator for the UK Government and executive coach. © 2026 Building Abundant Success!! 2026 All Rights Reserved. Join Me on ~ iHeart Media @ https://tinyurl.com/iHeartBAS Spot Me on Spotify: https://tinyurl.com/yxuy23b Amazon Music ~ https://tinyurl.com/AmzBAS Audacy: https://tinyurl.com/BASAud
En este episodio de Atareao con Linux, te hablo sobre una de las transiciones más importantes que he hecho últimamente en mi flujo de trabajo: el paso del scripting tradicional hacia el uso de Rust como lenguaje de cabecera para resolver mis problemas cotidianos.A pesar de que Fish se ha convertido en mi shell por defecto por lo intuitivo que resulta, el día a día me sigue obligando a crear scripts en Bash que, con el tiempo, terminan convirtiéndose en auténticos mastodontes difíciles de gestionar. Cuando un pequeño script crece demasiado, aparecen las limitaciones: una gestión de errores muy limitada, la falta de tests bien integrados, la ausencia de tipado que provoca errores difíciles de depurar y la pesadilla de manejar JSON o YAML dependiendo siempre de herramientas externas como jq.Para solucionar esto sin tener que montar un proyecto completo de Rust con su Cargo.toml cada vez que quiero hacer algo sencillo, he recuperado una herramienta que es pura magia: Rust Script. Con ella, tengo lo mejor de los dos mundos: la potencia, seguridad y velocidad de Rust, pero con la agilidad y simplicidad de un script de toda la vida.En este episodio te cuento:Mi visión sobre Rust Script: Cómo lo utilizo como un lenguaje de scripting puro para sustituir a Bash cuando la cosa se complica.El fin de la sobrecarga: Te explico cómo escribo scripts sin configurar proyectos completos, eliminando de un plumazo la burocracia de archivos de configuración.Gestión de dependencias: Te muestro cómo declaro los crates que necesito directamente dentro del código mediante comentarios, haciendo que mis scripts sean totalmente autónomos y fáciles de mover de un sitio a otro.Bajo el capó: Cómo funciona el sistema de caché y compilación para que, tras la primera ejecución, tus herramientas vuelen y sean instantáneas.Ejemplos reales: Desde un "Hola Mundo" básico hasta herramientas que consultan APIs REST y procesan información de forma nativa sin herramientas de terceros.Velocidad y fiabilidad: Por qué prefiero un binario tipado y testeado antes que una cadena de comandos en Bash donde un error en una tubería puede pasar desapercibido.Además, aprovecho para adelantarte los próximos episodios técnicos donde voy a meterle mano a fondo a Podman. Quiero explicarte por qué, al haber nacido en Linux, tiene una integración mucho más natural que Docker y cómo pienso sacarle todo el partido.Si tú también sientes que tus scripts de Bash se te están yendo de las manos, te invito a escuchar este episodio y descubrir cómo optimizar tu trabajo diario.Este podcast forma parte de la red de Sospechosos Habituales. Puedes encontrar todos los detalles y los scripts que menciono en las notas del episodio en mi web, atareao.es.¿Te gustaría que en el próximo episodio hiciera la migración en directo de uno de mis scripts de Bash a Rust Script para que veas el proceso paso a paso?Timestamp00:00:00 Introducción y la transición de Bash a Fish00:00:50 Las limitaciones de los scripts complejos en Bash y Fish00:01:26 Por qué elegir Rust para optimizar el trabajo00:02:07 Introducción a Rust Script: Rust como lenguaje de scripting00:02:36 Próximos episodios técnicos: Profundizando en Podman00:03:59 Problemas comunes en Bash: Errores, tipos y datos estructurados00:04:50 El punto intermedio: Rust Script vs. 
proyectos completos con Cargo00:06:47 Ejemplo práctico: Cómo crear un "Hola Mundo" con Rust Script00:08:07 Funcionamiento interno: Compilación y caché de binarios00:09:43 Ejecución directa y permisos en archivos .rs00:10:44 Cómo instalar Rust Script en Arch Linux y vía Cargo00:11:32 Gestión de dependencias elegante dentro del script00:14:03 Ejemplo avanzado: Uso de múltiples crates y peticiones web00:16:32 Ventajas de trabajar con APIs JSON sin herramientas externas00:18:25 Resumen de beneficios: Potencia, tipado y velocidad00:20:13 Despedida y red de podcast Sospechosos Habituales
This week, we discuss AI's impact on Stack Overflow, Docker's Hardened Images, and Nvidia buying Groq. Plus, thoughts on playing your own game and having fun. Watch the YouTube Live Recording of Episode (https://www.youtube.com/live/LQSxLbjvz3c?si=ao8f3hwxlCrmH1vX) 554 (https://www.youtube.com/live/LQSxLbjvz3c?si=ao8f3hwxlCrmH1vX) Please complete the Software Defined Talk Listener Survey! (https://docs.google.com/forms/d/e/1FAIpQLSfl7eHWQJwu2tBLa-FjZqHG2nr6p_Z3zQI3Pp1EyNWQ8Fu-SA/viewform?usp=header) Runner-up Titles It's all brisket after that. Exploring Fun Should I go build a snow man? Pets Innersourcing Two books Michael Lewis should write. Article IV is foundational. Freedom is options. Rundown Stack Overflow is dead. (https://x.com/rohanpaul_ai/status/2008007012920209674?s=20) Hardened Images for Everyone (https://www.docker.com/blog/docker-hardened-images-for-every-developer/) Tanzu's Bitnami stuff does this too (https://blogs.vmware.com/tanzu/what-good-software-supply-chain-security-looks-like-for-highly-regulated-industries/). OpenAI OpenAI's New Fundraising Round Could Value Startup at as Much as $830 Billion (https://www.wsj.com/tech/ai/openais-new-fundraising-round-could-value-startup-at-a[…]4238&segment_id=212500&user_id=c5a514ba8b7d9a954711959a6031a3fa) OpenAI Reportedly Planning to Make ChatGPT "Prioritize" Advertisers in Conversation (https://futurism.com/artificial-intelligence/openai-chatgpt-sponsored-ads) OpenAI bets big on audio as Silicon Valley declares war on screens (https://techcrunch.com/2026/01/01/openai-bets-big-on-audio-as-silicon-valley-declares-war-on-screens/) Sam Altman says: He has zero percent interest in remaining OpenAI CEO, once (https://timesofindia.indiatimes.com/technology/tech-news/sam-altman-says-he-has-zero-percent-interest-remaining-openai-ceo-once-/articleshow/126350602.cms) Nvidia buying AI chip startup Groq's assets for about $20 billion in its largest deal on record (https://www.cnbc.com/2025/12/24/nvidia-buying-ai-chip-startup-groq-for-about-20-billion-biggest-deal.html) Relevant to your Interests Broadcom IT uses Tanzu Platform to host MCP Servers (https://news.broadcom.com/app-dev/broadcom-tanzu-platform-agentic-business-transformation). 
A Brief History Of The Spreadsheet (https://hackaday.com/2025/12/15/a-brief-history-of-the-spreadsheet/) Databricks is raising over $4 billion in Series L funding at a $134 billion (https://x.com/exec_sum/status/2000971604449485132?s=20) Amazon's big AGI reorg decoded by Corey Quinn (https://www.theregister.com/2025/12/17/jassy_taps_peter_desantis_to_run_agi/) “They burned millions but got nothing.” (https://automaton-media.com/en/news/japanese-game-font-services-aggressive-price-hike-could-be-result-of-parent-companys-alleged-ai-failu/) X sues to protect Twitter brand Musk has been trying to kill (https://www.theregister.com/2025/12/17/x_twitter_brand_lawsuit/) Mozilla's new CEO says AI is coming to Firefox, but will remain a choice | TechCrunch (https://techcrunch.com/2025/12/17/mozillas-new-ceo-says-ai-is-coming-to-firefox-but-will-remain-a-choice/) Why Oracle keeps sparking AI-bubble fears (https://www.axios.com/2025/12/18/ai-oracle-stock-blue-owl) What's next for Threads (https://sources.news/p/whats-next-for-threads) Salesforce Executives Say Trust in Large Language Models Has Declined (https://www.theinformation.com/articles/salesforce-executives-say-trust-generative-ai-declined?rc=giqjaz) Akamai Technologies Announces Acquisition of Function-as-a-Service Company Fermyon (https://www.akamai.com/newsroom/press-release/akamai-announces-acquisition-of-function-as-a-service-company-fermyon) Google Rolling Out Gmail Address Change Feature: Here Is How It Works (https://finance.yahoo.com/news/google-rolling-gmail-address-change-033112607.html) The Enshittifinancial Crisis (https://www.wheresyoured.at/the-enshittifinancial-crisis/) MongoBleed: Critical MongoDB Vulnerability CVE-2025-14847 | Wiz Blog (https://www.wiz.io/blog/mongobleed-cve-2025-14847-exploited-in-the-wild-mongodb) Softbank to buy data center firm DigitalBridge for $4 billion in AI push (https://www.cnbc.com/amp/2025/12/29/digitalbridge-shares-jump-on-report-softbank-in-talks-to-acquire-firm.html) The best tech announced at CES 2026 so far (https://www.theverge.com/tech/854159/ces-2026-best-tech-gadgets-smartphones-appliances-robots-tvs-ai-smart-home) Who's who at X, the deepfake porn site formerly known as Twitter (https://www.ft.com/content/ad94db4c-95a0-4c65-bd8d-3b43e1251091?accessToken=zwAGR7kzep9gkdOtlNtMlaBMZdO9jTtD4SUQkQ.MEYCIQCdZajuC9uga-d9b5Z1t0HI2BIcnkVoq98loextLRpCTgIhAPL3rW72aTHBNL_lS7s1ONpM2vBgNlBNHDBeGbHkPkZj&sharetype=gift&token=a7473827-0799-4064-9008-bf22b3c99711) Manus Joins Meta for Next Era of Innovation (https://manus.im/blog/manus-joins-meta-for-next-era-of-innovation) The WELL: State of the World 2026 with Bruce Sterling and Jon Lebkowsky (https://people.well.com/conf/inkwell.vue/topics/561/State-of-the-World-2026-with-Bru-page01.html) Virtual machines still run the world (https://cote.io/2026/01/07/virtual-machines-still-run-the.html) Databases in 2025: A Year in Review (https://www.cs.cmu.edu/~pavlo/blog/2026/01/2025-databases-retrospective.html) Chat Platform Discord Files Confidentially for IPO (https://www.bloomberg.com/news/articles/2026-01-06/chat-platform-discord-is-said-to-file-confidentially-for-ipo?embedded-checkout=true) The DRAM shortage explained: AI, rising prices, and what's next (https://www.techradar.com/pro/why-is-ram-so-expensive-right-now-its-more-complicated-than-you-think) Nonsense Palantir CEO buys monastery in Old Snowmass for $120 million (https://www.denverpost.com/2025/12/17/palantir-alex-karp-snowmass-monastery/amp/) H-E-B gives free groceries to all customers after 
registers glitch today in Burleson, Texas. (https://www.reddit.com/r/interestingasfuck/s/ZEcblg7atP) Conferences cfgmgmtcamp 2026 (https://cfgmgmtcamp.org/ghent2026/), February 2nd to 4th, Ghent, BE. Coté speaking - anyone interested in being a SDI guest? DevOpsDayLA at SCALE23x (https://www.socallinuxexpo.org/scale/23x), March 6th, Pasadena, CA Use code: DEVOP for 50% off. Devnexus 2026 (https://devnexus.com), March 4th to 6th, Atlanta, GA. Coté has a discount code, but he's not sure if he can give it out. He's asking! Send him a DM in the meantime. KubeCon EU, March 23rd to 26th, 2026 - Coté will be there on a media pass. Whole bunch of VMUGs, mostly in the US. The CFPs are open (https://app.sessionboard.com/submit/vmug-call-for-content-2026/ae1c7013-8b85-427c-9c21-7d35f8701bbe?utm_campaign=5766542-VMUG%20Voice&utm_medium=email&_hsenc=p2ANqtz-_YREN7dr6p3KSQPYkFSN5K85A-pIVYZ03ZhKZOV0O3t3h0XHdDHethhx5O8gBFguyT5mZ3n3q-ZnPKvjllFXYfWV3thg&_hsmi=393690000&utm_content=393685389&utm_source=hs_email), go speak at them! Coté speaking in Amsterdam. Amsterdam (March 17-19, 2026), Minneapolis (April 7-9, 2026), Toronto (May 12-14, 2026), Dallas (June 9-11, 2026), Orlando (October 20-22, 2026) SDT News & Community Join our Slack community (https://softwaredefinedtalk.slack.com/join/shared_invite/zt-1hn55iv5d-UTfN7mVX1D9D5ExRt3ZJYQ#/shared-invite/email) Email the show: questions@softwaredefinedtalk.com (mailto:questions@softwaredefinedtalk.com) Free stickers: Email your address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) Follow us on social media: Twitter (https://twitter.com/softwaredeftalk), Threads (https://www.threads.net/@softwaredefinedtalk), Mastodon (https://hachyderm.io/@softwaredefinedtalk), LinkedIn (https://www.linkedin.com/company/software-defined-talk/), BlueSky (https://bsky.app/profile/softwaredefinedtalk.com) Watch us on: Twitch (https://www.twitch.tv/sdtpodcast), YouTube (https://www.youtube.com/channel/UCi3OJPV6h9tp-hbsGBLGsDQ/featured), Instagram (https://www.instagram.com/softwaredefinedtalk/), TikTok (https://www.tiktok.com/@softwaredefinedtalk) Book offer: Use code SDT for $20 off "Digital WTF" by Coté (https://leanpub.com/digitalwtf/c/sdt) Sponsor the show (https://www.softwaredefinedtalk.com/ads): ads@softwaredefinedtalk.com (mailto:ads@softwaredefinedtalk.com) Recommendations Brandon: Why Data Doesn't Always Win, with a Philosopher of Art (https://podcasts.apple.com/us/podcast/the-points-you-shouldnt-score-a-new-years-resolution/id1685093486?i=1000743950053) (Apple Podcasts) Why Data Doesn't Always Win, with a Philosopher of Art (https://www.youtube.com/watch?v=7AdbePyGS2M&list=RD7AdbePyGS2M&start_radio=1) (YouTube) Coté: “Databases in 2025: A Year in Review.” (https://www.cs.cmu.edu/~pavlo/blog/2026/01/2025-databases-retrospective.html) Photo Credits Header (https://unsplash.com/photos/red-and-black-love-neon-light-signage-igJrA98cf4A)
Happy New Year! You may have noticed that in 2025 we had moved toward YouTube as our primary podcasting platform. As we'll explain in the next State of Latent Space post, we'll be doubling down on Substack again and improving the experience for the over 100,000 of you who look out for our emails and website updates!We first mentioned Artificial Analysis in 2024, when it was still a side project in a Sydney basement. They then were one of the few Nat Friedman and Daniel Gross' AIGrant companies to raise a full seed round from them and have now become the independent gold standard for AI benchmarking—trusted by developers, enterprises, and every major lab to navigate the exploding landscape of models, providers, and capabilities.We have chatted with both Clementine Fourrier of HuggingFace's OpenLLM Leaderboard and (the freshly valued at $1.7B) Anastasios Angelopoulos of LMArena on their approaches to LLM evals and trendspotting, but Artificial Analysis have staked out an enduring and important place in the toolkit of the modern AI Engineer by doing the best job of independently running the most comprehensive set of evals across the widest range of open and closed models, and charting their progress for broad industry analyst use.George Cameron and Micah-Hill Smith have spent two years building Artificial Analysis into the platform that answers the questions no one else will: Which model is actually best for your use case? What are the real speed-cost trade-offs? And how open is “open” really?We discuss:* The origin story: built as a side project in 2023 while Micah was building a legal AI assistant, launched publicly in January 2024, and went viral after Swyx's retweet* Why they run evals themselves: labs prompt models differently, cherry-pick chain-of-thought examples (Google Gemini 1.0 Ultra used 32-shot prompts to beat GPT-4 on MMLU), and self-report inflated numbers* The mystery shopper policy: they register accounts not on their own domain and run intelligence + performance benchmarks incognito to prevent labs from serving different models on private endpoints* How they make money: enterprise benchmarking insights subscription (standardized reports on model deployment, serverless vs. managed vs. 
leasing chips) and private custom benchmarking for AI companies (no one pays to be on the public leaderboard)* The Intelligence Index (V3): synthesizes 10 eval datasets (MMLU, GPQA, agentic benchmarks, long-context reasoning) into a single score, with 95% confidence intervals via repeated runs* Omissions Index (hallucination rate): scores models from -100 to +100 (penalizing incorrect answers, rewarding ”I don't know”), and Claude models lead with the lowest hallucination rates despite not always being the smartest* GDP Val AA: their version of OpenAI's GDP-bench (44 white-collar tasks with spreadsheets, PDFs, PowerPoints), run through their Stirrup agent harness (up to 100 turns, code execution, web search, file system), graded by Gemini 3 Pro as an LLM judge (tested extensively, no self-preference bias)* The Openness Index: scores models 0-18 on transparency of pre-training data, post-training data, methodology, training code, and licensing (AI2 OLMo 2 leads, followed by Nous Hermes and NVIDIA Nemotron)* The smiling curve of AI costs: GPT-4-level intelligence is 100-1000x cheaper than at launch (thanks to smaller models like Amazon Nova), but frontier reasoning models in agentic workflows cost more than ever (sparsity, long context, multi-turn agents)* Why sparsity might go way lower than 5%: GPT-4.5 is ~5% active, Gemini models might be ~3%, and Omissions Index accuracy correlates with total parameters (not active), suggesting massive sparse models are the future* Token efficiency vs. turn efficiency: GPT-5 costs more per token but solves Tau-bench in fewer turns (cheaper overall), and models are getting better at using more tokens only when needed (5.1 Codex has tighter token distributions)* V4 of the Intelligence Index coming soon: adding GDP Val AA, Critical Point, hallucination rate, and dropping some saturated benchmarks (human-eval-style coding is now trivial for small models)Links to Artificial Analysis* Website: https://artificialanalysis.ai* George Cameron on X: https://x.com/georgecameron* Micah-Hill Smith on X: https://x.com/micahhsmithFull Episode on YouTubeTimestamps* 00:00 Introduction: Full Circle Moment and Artificial Analysis Origins* 01:19 Business Model: Independence and Revenue Streams* 04:33 Origin Story: From Legal AI to Benchmarking Need* 16:22 AI Grant and Moving to San Francisco* 19:21 Intelligence Index Evolution: From V1 to V3* 11:47 Benchmarking Challenges: Variance, Contamination, and Methodology* 13:52 Mystery Shopper Policy and Maintaining Independence* 28:01 New Benchmarks: Omissions Index for Hallucination Detection* 33:36 Critical Point: Hard Physics Problems and Research-Level Reasoning* 23:01 GDP Val AA: Agentic Benchmark for Real Work Tasks* 50:19 Stirrup Agent Harness: Open Source Agentic Framework* 52:43 Openness Index: Measuring Model Transparency Beyond Licenses* 58:25 The Smiling Curve: Cost Falling While Spend Rising* 1:02:32 Hardware Efficiency: Blackwell Gains and Sparsity Limits* 1:06:23 Reasoning Models and Token Efficiency: The Spectrum Emerges* 1:11:00 Multimodal Benchmarking: Image, Video, and Speech Arenas* 1:15:05 Looking Ahead: Intelligence Index V4 and Future Directions* 1:16:50 Closing: The Insatiable Demand for IntelligenceTranscriptMicah [00:00:06]: This is kind of a full circle moment for us in a way, because the first time artificial analysis got mentioned on a podcast was you and Alessio on Latent Space. Amazing.swyx [00:00:17]: Which was January 2024. I don't even remember doing that, but yeah, it was very influential to me. 
Yeah, I'm looking at AI News for Jan 17, or Jan 16, 2024. I said, this gem of a models and host comparison site was just launched. And then I put in a few screenshots, and I said, it's an independent third party. It clearly outlines the quality versus throughput trade-off, and it breaks out by model and hosting provider. I did give you s**t for missing fireworks, and how do you have a model benchmarking thing without fireworks? But you had together, you had perplexity, and I think we just started chatting there. Welcome, George and Micah, to Latent Space. I've been following your progress. Congrats on... It's been an amazing year. You guys have really come together to be the presumptive new gardener of AI, right? Which is something that...George [00:01:09]: Yeah, but you can't pay us for better results.swyx [00:01:12]: Yes, exactly.George [00:01:13]: Very important.Micah [00:01:14]: Start off with a spicy take.swyx [00:01:18]: Okay, how do I pay you?Micah [00:01:20]: Let's get right into that.swyx [00:01:21]: How do you make money?Micah [00:01:24]: Well, very happy to talk about that. So it's been a big journey the last couple of years. Artificial analysis is going to be two years old in January 2026. Which is pretty soon now. We first run the website for free, obviously, and give away a ton of data to help developers and companies navigate AI and make decisions about models, providers, technologies across the AI stack for building stuff. We're very committed to doing that and tend to keep doing that. We have, along the way, built a business that is working out pretty sustainably. We've got just over 20 people now and two main customer groups. So we want to be... We want to be who enterprise look to for data and insights on AI, so we want to help them with their decisions about models and technologies for building stuff. And then on the other side, we do private benchmarking for companies throughout the AI stack who build AI stuff. So no one pays to be on the website. We've been very clear about that from the very start because there's no use doing what we do unless it's independent AI benchmarking. Yeah. But turns out a bunch of our stuff can be pretty useful to companies building AI stuff.swyx [00:02:38]: And is it like, I am a Fortune 500, I need advisors on objective analysis, and I call you guys and you pull up a custom report for me, you come into my office and give me a workshop? What kind of engagement is that?George [00:02:53]: So we have a benchmarking and insight subscription, which looks like standardized reports that cover key topics or key challenges enterprises face when looking to understand AI and choose between all the technologies. And so, for instance, one of the report is a model deployment report, how to think about choosing between serverless inference, managed deployment solutions, or leasing chips. And running inference yourself is an example kind of decision that big enterprises face, and it's hard to reason through, like this AI stuff is really new to everybody. And so we try and help with our reports and insight subscription. Companies navigate that. We also do custom private benchmarking. And so that's very different from the public benchmarking that we publicize, and there's no commercial model around that. For private benchmarking, we'll at times create benchmarks, run benchmarks to specs that enterprises want. And we'll also do that sometimes for AI companies who have built things, and we help them understand what they've built with private benchmarking. 
Yeah. So that's a piece mainly that we've developed through trying to support everybody publicly with our public benchmarks. Yeah.swyx [00:04:09]: Let's talk about TechStack behind that. But okay, I'm going to rewind all the way to when you guys started this project. You were all the way in Sydney? Yeah. Well, Sydney, Australia for me.Micah [00:04:19]: George was in SF, but he's Australian, but he moved here already. Yeah.swyx [00:04:22]: And I remember I had the Zoom call with you. What was the impetus for starting artificial analysis in the first place? You know, you started with public benchmarks. And so let's start there. We'll go to the private benchmark. Yeah.George [00:04:33]: Why don't we even go back a little bit to like why we, you know, thought that it was needed? Yeah.Micah [00:04:40]: The story kind of begins like in 2022, 2023, like both George and I have been into AI stuff for quite a while. In 2023 specifically, I was trying to build a legal AI research assistant. So it actually worked pretty well for its era, I would say. Yeah. Yeah. So I was finding that the more you go into building something using LLMs, the more each bit of what you're doing ends up being a benchmarking problem. So had like this multistage algorithm thing, trying to figure out what the minimum viable model for each bit was, trying to optimize every bit of it as you build that out, right? Like you're trying to think about accuracy, a bunch of other metrics and performance and cost. And mostly just no one was doing anything to independently evaluate all the models. And certainly not to look at the trade-offs for speed and cost. So we basically set out just to build a thing that developers could look at to see the trade-offs between all of those things measured independently across all the models and providers. Honestly, it was probably meant to be a side project when we first started doing it.swyx [00:05:49]: Like we didn't like get together and say like, Hey, like we're going to stop working on all this stuff. I'm like, this is going to be our main thing. When I first called you, I think you hadn't decided on starting a company yet.Micah [00:05:58]: That's actually true. I don't even think we'd pause like, like George had an acquittance job. I didn't quit working on my legal AI thing. Like it was genuinely a side project.George [00:06:05]: We built it because we needed it as people building in the space and thought, Oh, other people might find it useful too. So we'll buy domain and link it to the Vercel deployment that we had and tweet about it. And, but very quickly it started getting attention. Thank you, Swyx for, I think doing an initial retweet and spotlighting it there. This project that we released. And then very quickly though, it was useful to others, but very quickly it became more useful as the number of models released accelerated. We had Mixtral 8x7B and it was a key. That's a fun one. Yeah. Like a open source model that really changed the landscape and opened up people's eyes to other serverless inference providers and thinking about speed, thinking about cost. And so that was a key. And so it became more useful quite quickly. Yeah.swyx [00:07:02]: What I love talking to people like you who sit across the ecosystem is, well, I have theories about what people want, but you have data and that's obviously more relevant. But I want to stay on the origin story a little bit more.
When you started out, I would say, I think the status quo at the time was every paper would come out and they would report their numbers versus competitor numbers. And that's basically it. And I remember I did the legwork. I think everyone has some knowledge. I think there's some version of Excel sheet or a Google sheet where you just like copy and paste the numbers from every paper and just post it up there. And then sometimes they don't line up because they're independently run. And so your numbers are going to look better than... Your reproductions of other people's numbers are going to look worse because you don't hold their models correctly or whatever the excuse is. I think then Stanford Helm, Percy Liang's project would also have some of these numbers. And I don't know if there's any other source that you can cite. The way that if I were to start artificial analysis at the same time you guys started, I would have used EleutherAI's eval framework harness. Yup.Micah [00:08:06]: Yup. That was some cool stuff. At the end of the day, running these evals, it's like if it's a simple Q&A eval, all you're doing is asking a list of questions and checking if the answers are right, which shouldn't be that crazy. But it turns out there are an enormous number of things that you've got to control for. And I mean, back when we started the website. Yeah. Yeah. Like one of the reasons why we realized that we had to run the evals ourselves and couldn't just take rules from the labs was just that they would all prompt the models differently. And when you're competing over a few points, then you can pretty easily get- You can put the answer into the model. Yeah. That in the extreme. And like you get crazy cases like back when Google did Gemini 1.0 Ultra and needed a number that would say it was better than GPT-4 and like constructed, I think never published like chain of thought examples. 32 of them in every topic in MMLU to run it, to get the score, like there are so many things that you- They never shipped Ultra, right? That's the one that never made it up. Not widely. Yeah. Yeah. Yeah. I mean, I'm sure it existed, but yeah. So we were pretty sure that we needed to run them ourselves and just run them in the same way across all the models. Yeah. And we were, we were also certain from the start that you couldn't look at those in isolation. You needed to look at them alongside the cost and performance stuff. Yeah.swyx [00:09:24]: Okay. A couple of technical questions. I mean, so obviously I also thought about this and I didn't do it because of cost. Yep. Did you not worry about costs? Were you funded already? Clearly not, but you know. No. Well, we definitely weren't at the start.Micah [00:09:36]: So like, I mean, we're paying for it personally at the start. There's a lot of money. Well, the numbers weren't nearly as bad a couple of years ago. So we certainly incurred some costs, but we were probably in the order of like hundreds of dollars of spend across all the benchmarking that we were doing. Yeah. So nothing. Yeah. It was like kind of fine. Yeah. Yeah. These days that's gone up an enormous amount for a bunch of reasons that we can talk about. But yeah, it wasn't that bad because you can also remember that like the number of models we were dealing with was hardly any and the complexity of the stuff that we wanted to do to evaluate them was a lot less. Like we were just asking some Q&A type questions and then one specific thing was for a lot of evals initially, we were just like sampling an answer.
You know, like, what's the answer for this? Like, we just went for the answer directly, without letting the models think. We weren't even doing chain of thought stuff initially. And that was the most useful way to get some results initially. Yeah.swyx [00:10:33]: And so for people who haven't done this work, literally parsing the responses is a whole thing, right? Because sometimes the models can answer any way they see fit, and sometimes they actually do have the right answer, but they just return it in the wrong format, and they will get a zero for that unless you work it into your parser. And that involves more work. And so, I mean, there's an open question whether you should give it points for not following your instructions on the format.Micah [00:11:00]: It depends what you're looking at, right? Because if you're trying to see whether or not it can solve a particular type of reasoning problem, and you don't want to test it on its ability to do answer formatting at the same time, then you might want to use an LLM-as-answer-extractor approach to make sure that you get the answer out no matter how it's answered. But these days, it's mostly less of a problem. Like, if you instruct a model and give it examples of what the answers should look like, it can get the answers in your format, and then you can do, like, a simple regex.swyx [00:11:28]: Yeah, yeah. And then there's other questions around, I guess, sometimes if you have a multiple choice question, sometimes there's a bias towards the first answer, so you have to randomize the order of the choices. All these nuances, like, once you dig into benchmarks, you're like, I don't know how anyone believes the numbers on all these things. It's such dark magic.Micah [00:11:47]: You've also got, like… You've got the different degrees of variance in different benchmarks, right? Yeah. So, if you run a four-option multiple-choice eval on a modern reasoning model at the temperatures suggested by the labs for their own models, the variance that you can see is pretty enormous if you only do a single run of it, especially if it has a small number of questions. So, one of the things that we do is run an enormous number of repeats of all of our evals when we're developing new ones and doing upgrades to our intelligence index to bring in new things. Yeah. So that we can dial in the right number of repeats, so that we can get to the 95% confidence intervals that we're comfortable with, so that when we pull that together, we can be confident in the intelligence index to at least as tight as, like, plus or minus one at 95% confidence. Yeah.swyx [00:12:32]: And, again, that just adds a straight multiple to the cost. Oh, yeah. Yeah, yeah.George [00:12:37]: So, that's one of many reasons that cost has gone up a lot more than linearly over the last couple of years. We report a cost to run the Artificial Analysis Intelligence Index on our website, and currently that's assuming one repeat in terms of how we report it, because we want to reflect a bit about the weighting of the index. But our cost is actually a lot higher than what we report there because of the repeats.swyx [00:13:03]: Yeah, yeah, yeah. And probably this is true, but just checking, you don't have any special deals with the labs. They don't discount it. You just pay out of pocket or out of your sort of customer funds. Oh, there is a mix.
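To make the repeats arithmetic Micah describes a moment earlier concrete, here is a minimal sketch (not Artificial Analysis's actual methodology; the per-run standard deviations are made-up numbers) of estimating how many repeats of an eval are needed before the 95% confidence interval on the mean score tightens to roughly plus or minus one point.

```python
import math

def repeats_for_target_ci(per_run_std: float, half_width: float = 1.0, z: float = 1.96) -> int:
    """CI half-width ~= z * std / sqrt(n), so n ~= (z * std / half_width)^2."""
    return math.ceil((z * per_run_std / half_width) ** 2)

# Hypothetical numbers: an eval whose score swings by ~3 points per run at the
# lab-recommended temperature needs ~35 repeats for +/-1 at 95% confidence;
# a steadier eval with ~1.5 points of per-run spread needs ~9.
print(repeats_for_target_ci(3.0))   # 35
print(repeats_for_target_ci(1.5))   # 9
```

The multiplier on cost comes straight from that square: halving the target interval width quadruples the number of runs.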
So, the issue is that sometimes they may give you a special endpoint, which is… Ah, 100%.Micah [00:13:21]: Yeah, yeah, yeah. Exactly. So, we laser focus, in everything we do, on having the best independent metrics and making sure that no one can manipulate them in any way. There are quite a lot of processes we've developed over the last couple of years to make that true for, like, the one you bring up right here: the fact that if we're working with a lab, if they're giving us a private endpoint to evaluate a model, it is totally possible that what's sitting behind that black box is not the same as what they serve on a public endpoint. We're very aware of that. We have what we call a mystery shopper policy. And we're totally transparent with all the labs we work with about this, that we will register accounts not on our own domain and run both intelligence evals and performance benchmarks… Yeah, that's the job. …without them being able to identify it. And no one's ever had a problem with that. Because, like, a thing that turns out to actually be quite a good factor in the industry is that they all want to believe that none of their competitors could manipulate what we're doing either.swyx [00:14:23]: That's true. I never thought about that. I've been in the database industry prior, and there's a lot of shenanigans around benchmarking, right? So I'm just kind of going through the mental laundry list. Did I miss anything else in this category of shenanigans? Oh, potential shenanigans.Micah [00:14:36]: I mean, okay, the biggest one that I'll bring up is more of a conceptual one, actually, than, like, direct shenanigans. It's that the things that get measured become the things that get targeted by the labs in what they're trying to build, right? Exactly. So that doesn't mean anything that we should really call shenanigans. Like, I'm not talking about training on the test set. But if you know that you're going to be graded on a particular thing, if you're a researcher, there are a whole bunch of things that you can do to try to get better at that thing that preferably are going to be helpful for a wide range of how actual users want to use the thing that you're building. But they will not necessarily do that. So, for instance, the models are exceptional now at answering competition maths problems. There is some relevance of that type of reasoning, that type of work, to, like, how we might use modern coding agents and stuff. But it's clearly not one for one. So the thing that we have to be aware of is that once an eval becomes the thing that everyone's looking at, scores can get better on it without that reflecting the overall generalized intelligence of these models getting better. That has been true for the last couple of years. It'll be true for the next couple of years. There's no silver bullet to defeat that other than building new stuff to stay relevant and measure the capabilities that matter most to real users. Yeah.swyx [00:15:58]: And we'll cover some of the new stuff that you guys are building as well, which is cool. Like, you used to just run other people's evals, but now you're coming up with your own. And I think, obviously, that is a necessary path once you're at the frontier. You've exhausted all the existing evals. I think the next point in history that I have for you is AI Grant, which you guys decided to join, and you moved here. What was it like? I think you were in, like, batch two? Batch four. Batch four.
Okay.Micah [00:16:26]: I mean, it was great. Nat and Daniel are obviously great. And it's a really cool group of companies that we were in AI Grant alongside. It was really great to get Nat and Daniel on board. Obviously, they've done a whole lot of great work in the space with a lot of leading companies and were extremely aligned with the mission of what we were trying to do. Like, we're not quite typical of a lot of the other AI startups that they've invested in.swyx [00:16:53]: And they were very much here for the mission of what we want to do. Did they give any advice that really affected you in some way, or, like, was one of the events very impactful? That's an interesting question.Micah [00:17:03]: I mean, I remember fondly a bunch of the speakers who came and did fireside chats at AI Grant.swyx [00:17:09]: Which is also, like, a crazy list. Yeah.George [00:17:11]: Oh, totally. Yeah, yeah, yeah. There was something about, you know, speaking to Nat and Daniel about the challenges of working through a startup, and just working through the questions that don't have clear answers, and how to work through those kind of methodically and work through the hard decisions. And they've been great mentors to us as we've built Artificial Analysis. Another benefit for us was that other companies in the batch and other companies in AI Grant are pushing the capabilities of what AI can do at this time. And so being in contact with them, making sure that Artificial Analysis is useful to them, has been fantastic for supporting us in working out how we should build out Artificial Analysis to continue being useful to those, you know, building on AI.swyx [00:17:59]: I think to some extent, I'm of mixed opinion on that one, because to some extent, your target audience is not people in AI Grant who are obviously at the frontier. Yeah. Do you disagree?Micah [00:18:09]: To some extent. To some extent. But then, a lot of what the AI Grant companies are doing is taking capabilities coming out of the labs and trying to push the limits of what they can do across the entire stack for building great applications, which actually makes some of them pretty archetypical power users of Artificial Analysis. Some of the people with the strongest opinions about what we're doing well and what we're not doing well and what they want to see next from us. Yeah. Yeah. Because when you're building any kind of AI application now, chances are you're using a whole bunch of different models. You're maybe switching reasonably frequently between different models for different parts of your application to optimize what you're able to do with them at an accuracy level and to get better speed and cost characteristics. So for many of them, no, they're not commercial customers of ours, like we don't charge for all our data on the website. Yeah. They are absolutely some of our power users.swyx [00:19:07]: So let's talk about just the evals as well. So you start out from the general MMLU and GPQA stuff. What's next? How do you sort of build up to the overall index? What was in V1 and how did you evolve it? Okay.Micah [00:19:22]: So first, just as background, we're talking about the Artificial Analysis Intelligence Index, which is our synthesis metric that we currently pull together from 10 different eval data sets to give what we're pretty confident is the best single number to look at for how smart the models are.
Obviously, it doesn't tell the whole story. That's why we publish the whole website of all the charts, to dive into every part of it and look at the trade-offs. But best single number. So right now, it's got a bunch of Q&A type data sets that have been very important to the industry, like a couple that you just mentioned. It's also got a couple of agentic data sets. It's got our own long context reasoning data set and some other use case focused stuff. As time goes on, the things that we're most interested in, that are going to be important to the capabilities that are becoming more important for AI and what developers are caring about, are going to be first around agentic capabilities. So surprise, surprise. We're all loving our coding agents, and how the models perform there, and then doing similar things for different types of work, is really important to us. Linking to use cases, to economically valuable use cases, is extremely important to us. And then we've got some of these things that the models still struggle with, like working really well over long contexts, that are not going to go away as specific capabilities and use cases that we need to keep evaluating.swyx [00:20:46]: But I guess one thing I was driving at was the V1 versus the V2 and how bad it got over time.Micah [00:20:53]: Like how we've changed the index to where we are.swyx [00:20:55]: And I think that reflects the change in the industry. Right. So that's a nice way to tell that story.Micah [00:21:00]: Well, V1 would be completely saturated right now by almost every model coming out, because doing things like writing the Python functions in HumanEval is now pretty trivial. It's easy to forget, actually, I think, how much progress has been made in the last two years. Like we obviously play the game constantly of today's version versus last week's version and the week before, and all of the small changes in the horse race between the current frontier and who has the best smaller-than-10B model right now this week. Right. And that's very important to a lot of developers and people, especially in this particular city of San Francisco. But when you zoom out a couple of years ago, literally most of what we were doing to evaluate the models then would all be 100% solved by even pretty small models today. And that's been one of the key things, by the way, that's driven down the cost of intelligence at every tier of intelligence. We can talk about that more in a bit. So V1, V2, V3, we made things harder. We covered a wider range of use cases. And we tried to get closer to things developers care about, as opposed to just the Q&A type stuff that MMLU and GPQA represented. Yeah.swyx [00:22:12]: I don't know if you have anything to add there. Or we could just go right into showing people the benchmark and looking around and asking questions about it. Yeah.Micah [00:22:21]: Let's do it. Okay. This would be a pretty good way to chat about a few of the new things we've launched recently. Yeah.George [00:22:26]: And I think a little bit about the direction that we want to take it, and where we want to push benchmarks. Currently, the intelligence index and evals focus a lot on kind of raw intelligence. But we want to diversify how we think about intelligence. And we can talk about it. But the new evals that we've built and partnered on focus on topics like hallucination. And there are a lot of topics that I think are not covered by the current eval set that should be.
And so we want to bring that forth. But before we get into that.swyx [00:23:01]: And so for listeners, just as a timestamp, right now, number one is Gemini 3 Pro High, then followed by Claude Opus at 70, GPT-5.1 High — you don't have 5.2 yet — and Kimi K2 Thinking. Wow. Still hanging in there. So those are the top four. That will date this podcast quickly. Yeah. Yeah. I mean, I love it. I love it. No, no. 100%. Look back this time next year and go, how cute. Yep.George [00:23:25]: Totally. A quick view of that is, okay, there's a lot. I love it. I love this chart. Yeah.Micah [00:23:30]: This is such a favorite, right? Yeah. And in almost every talk that George or I give at conferences and stuff, we always put this one up first to just talk about situating where we are in this moment in history. This, I think, is the visual version of what I was saying before about zooming out and remembering how much progress there's been. If we go back to just over a year ago, before o1, before Claude Sonnet 3.5, we didn't have reasoning models or coding agents as a thing. And the game was very, very different. If we go back even a little bit before then, we're in the era where, when you look at this chart, OpenAI was untouchable for well over a year. And, I mean, you would remember that time period well, of there being very open questions about whether or not AI was going to be competitive, like full stop, whether or not OpenAI would just run away with it, whether we would have a few frontier labs and no one else would really be able to do anything other than consume their APIs. I am quite happy overall that the world that we have ended up in is one where... Multi-model. Absolutely. And strictly more competitive every quarter over the last few years. Yeah. This year has been insane. Yeah.George [00:24:42]: You can see it. This chart with everything added is hard to read currently. There are so many dots on it, but I think it reflects a little bit what we felt, like how crazy it's been.swyx [00:24:54]: Why 14 as the default? Is that a manual choice? Because you've got ServiceNow in there, which is a less traditional name. Yeah.George [00:25:01]: It's models that we're kind of highlighting by default in our charts, in our intelligence index. Okay.swyx [00:25:07]: You just have a manually curated list of stuff.George [00:25:10]: Yeah, that's right. But something that I actually don't think every Artificial Analysis user knows is that you can customize our charts and choose what models are highlighted. Yeah. And so if we take off a few names, it gets a little easier to read.swyx [00:25:25]: Yeah, yeah. A little easier to read. Totally. Yeah. But I love that you can see the o1 jump. Look at that. September 2024. And the DeepSeek jump. Yeah.George [00:25:34]: Which got close to OpenAI's leadership. They were so close. I think, yeah, we remember that moment. Around this time last year, actually.Micah [00:25:44]: Yeah, yeah, yeah. I agree. Yeah, well, a couple of weeks. It was Boxing Day in New Zealand when DeepSeek V3 came out. And we'd been tracking DeepSeek and a bunch of the other global players that were less known over the second half of 2024, and had run evals on the earlier ones and stuff. I very distinctly remember Boxing Day in New Zealand, because I was with family for Christmas and stuff, running the evals and getting back result by result on DeepSeek V3. So this was the first of their V3 architecture, the 671B MoE.Micah [00:26:19]: And we were very, very impressed.
That was the moment where we were sure that DeepSeek was no longer just one of many players, but had jumped up to be a thing. The world really noticed when they followed that up with the RL working on top of V3 and R1 succeeding a few weeks later. But the groundwork for that absolutely was laid with just an extremely strong base model, completely open weights, that we had as the best open weights model. So, yeah, that's the thing that you really see in the graph. DeepSeek really impressed us on Boxing Day last year.George [00:26:48]: Boxing Day is the day after Christmas, for those not familiar.George [00:26:54]: I'm from Singapore.swyx [00:26:55]: A lot of us remember Boxing Day for a different reason, for the tsunami that happened. Oh, of course. Yeah, but that was a long time ago. So yeah. So this is the rough pitch of AAQI. Is it A-A-Q-I or A-A-I-I? I-I. Okay. Good memory, though.Micah [00:27:11]: I don't know. I'm not used to it. Once upon a time, we did call it Quality Index, and we would talk about quality, performance, and price, but we changed it to intelligence.George [00:27:20]: There's been a few naming changes. We added hardware benchmarking to the site, and so benchmarks at a kind of system level. And so then we changed our throughput metric to, we now call it output speed, and thenswyx [00:27:32]: throughput makes sense at a system level, so we took that name. Take me through more charts. What should people know? Obviously, the way you look at the site is probably different than how a beginner might look at it.Micah [00:27:42]: Yeah, that's fair. There's a lot of fun stuff to dive into. Maybe we can skip past all the, like, we have lots and lots of evals and stuff. The interesting ones to talk about today that would be great to bring up are a few of our recent things, I think, that probably not many people will be familiar with yet. So the first one of those is our Omniscience Index. So this one is a little bit different to most of the intelligence evals that we've run. We built it specifically to look at the embedded knowledge in the models and to test hallucination by looking at, when the model doesn't know the answer, so is not able to get it correct, what's its probability of saying "I don't know" versus giving an incorrect answer. So the metric that we use for Omniscience goes from negative 100 to positive 100, because we're simply taking off a point if you give an incorrect answer to the question. We're pretty convinced that this is an example of where it makes most sense to do that, because it's strictly more helpful to say "I don't know" instead of giving a wrong answer to a factual knowledge question. And one of our goals is to shift the incentive that evals create for models and the labs creating them to get higher scores. And almost every eval across all of AI up until this point has been graded by simple percentage correct as the main metric, the main thing that gets hyped. And so you should take a shot at everything. There's no incentive to say "I don't know." So we did that for this one here.swyx [00:29:22]: I think there's a general field of calibration as well, like the confidence in your answer versus the rightness of the answer. Yeah, we completely agree. Yeah. Yeah.George [00:29:31]: On that, one reason that we didn't do that, or put that into this index, is that we think the way to do that is not to ask the models how confident they are.swyx [00:29:43]: I don't know. Maybe it might be though.
You put it in, like, a JSON field, say confidence, and maybe it spits out something. Yeah. You know, we have done a few evals podcasts over the years, and when we did one with Clémentine of Hugging Face, who maintains the Open LLM Leaderboard, this was one of her top requests: some kind of hallucination slash lack-of-confidence calibration thing. And so, hey, this is one of them.Micah [00:30:05]: And I mean, like anything that we do, it's not a perfect metric or the whole story of everything that you think about as hallucination. But yeah, it's pretty useful and has some interesting results. Like one of the things that we saw in the hallucination rate is that Anthropic's Claude models are at the very left-hand side here, with the lowest hallucination rates out of the models that we've evaluated Omniscience on. That is an interesting fact. I think it probably correlates with a lot of the previously not-really-measured vibes stuff that people like about some of the Claude models. Is the dataset public, or is there a held-out set? There's a held-out set for this one. So we have published a public test set, but we've only published 10% of it. The reason is that for this one here specifically, it would be very, very easy to have data contamination, because it is just factual knowledge questions. We'll update it over time to also prevent that, but yeah, we've kept most of it held out so that we can keep it reliable for a long time. It leads us to a bunch of really cool things, including breaking down quite granularly by topic. And so we've got some of that disclosed on the website publicly right now, and there's lots more coming in terms of our ability to break out very specific topics. Yeah.swyx [00:31:23]: I would be interested. Let's dwell a little bit on this hallucination one. I noticed that Haiku hallucinates less than Sonnet, which hallucinates less than Opus. And yeah, would that be the other way around in a normal capability environment? I don't know. What do you make of that?George [00:31:37]: One interesting aspect is that we've found that there's not really a strong correlation between intelligence and hallucination, right? That's to say that how smart the models are in a general sense isn't correlated with their ability to, when they don't know something, say that they don't know. It's interesting that Gemini 3 Pro Preview was a big leap over here over Gemini 2.5 Flash and 2.5 Pro. And if I add Pro quickly here.swyx [00:32:07]: I bet Pro's really good. Actually no, I meant the GPT Pros.George [00:32:12]: Oh yeah.swyx [00:32:13]: Because GPT Pro is rumored — we don't know for a fact — to be like eight runs and then an LLM judge on top. Yeah.George [00:32:20]: So we saw a big jump in — this is accuracy, so this is just the percent that they get correct — and Gemini 3 Pro knew a lot more than the other models. And so, big jump in accuracy, but relatively no change between the Google Gemini models, between releases, in the hallucination rate. Exactly. And so it's likely due to just a kind of different post-training recipe between the Claude models. Yeah.Micah [00:32:45]: That's what's driven this. Yeah. You can partially blame us and how we define intelligence, having until now not defined hallucination as a negative in the way that we think about intelligence.swyx [00:32:56]: And so that's what we're changing.
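As a rough illustration of the Omniscience scoring scheme described above — based only on what is said in the episode, so the exact weighting may differ — correct answers add a point, incorrect answers subtract one, and "I don't know" adds nothing, which is what puts the metric on a −100 to +100 scale and rewards abstaining over guessing.

```python
def omniscience_style_score(outcomes: list[str]) -> float:
    """Score per-question outcomes on a -100..+100 scale.
    'correct' = +1, 'incorrect' = -1, 'abstain' ("I don't know") = 0."""
    points = {"correct": 1, "incorrect": -1, "abstain": 0}
    return 100.0 * sum(points[o] for o in outcomes) / len(outcomes)

# Hypothetical models: A guesses on everything, B abstains when unsure.
model_a = ["correct"] * 60 + ["incorrect"] * 40                     # -> 20.0
model_b = ["correct"] * 60 + ["abstain"] * 30 + ["incorrect"] * 10  # -> 50.0
print(omniscience_style_score(model_a), omniscience_style_score(model_b))
```

Both hypothetical models know the same 60% of the answers, but the one that admits uncertainty scores far higher, which is exactly the incentive shift being described.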
I know many smart people who are confidently incorrect.George [00:33:02]: Look at that. That is very human. Very true. And there's a time and a place for that. I think our view is that hallucination rate makes sense in this context, where it's around knowledge, but in many cases, people want the models to hallucinate, to have a go. Often that's the case in coding, or when you're trying to generate newer ideas. One eval that we added to Artificial Analysis is Critical Point, and it's really hard physics problems. Okay.swyx [00:33:32]: And is it sort of like a HumanEval type or something different, or like a FrontierMath type?George [00:33:37]: It's not dissimilar to FrontierMath. So these are kind of research questions that academics in the physics world would be able to answer, but models really struggle to answer. So the top score here is only about 9%.swyx [00:33:51]: And the people that created this, like Minway and actually Ofir, who was kind of behind SWE-bench — and what organization is this? Oh, it's Princeton.George [00:34:01]: A range of academics from different academic institutions, really smart people. They talked about how they turn the models up in terms of temperature, as high a temperature as they can, when they're trying to explore kind of new ideas in physics with a thought partner, just because they want the models to hallucinate. Yeah, sometimes it's something new. Yeah, exactly.swyx [00:34:21]: So not right in every situation, but I think it makes sense, you know, to test hallucination in scenarios where it makes sense. Also, the obvious question is, this is one of many — every lab has a system card that shows some kind of hallucination number, and you've chosen not to endorse those and have made your own. And I think that's a choice. Totally. In some sense, the rest of Artificial Analysis is public benchmarks that other people can independently rerun. You provide it as a service here. You have to fight the, well, who are we to, like, do this? And your answer is that you have a lot of customers and, you know, but, like, I guess, how do you convince the individual?Micah [00:35:08]: I mean, I think for hallucinations specifically, there are a bunch of different things that you might reasonably care about, and that you'd measure quite differently. Like, we've called this the Omniscience hallucination rate, not trying to declare that, like, it's humanity's last hallucination eval. You could have some interesting naming conventions and all this stuff. The bigger-picture answer to that is something that I actually wanted to mention just as George was explaining Critical Point as well: as we go forward, we are building evals internally, we're partnering with academia, and we're partnering with AI companies to build great evals. We have pretty strong views, in various ways for different parts of the AI stack, on where there are things that are not being measured well, or things that developers care about that should be measured more and better. And we intend to be doing that. We're not necessarily obsessed with the idea that everything we do, we have to do entirely within our own team. Critical Point is a cool example of where we were a launch partner for it, working with academia, and we've got some partnerships coming up with a couple of leading companies.
Those ones, obviously, we have to be careful with on some of the independent stuff, but with the right disclosure, we're completely comfortable with that. A lot of the labs have released great data sets in the past that we've used to great success independently. And so, between all of those techniques, we're going to be releasing more stuff in the future. Cool.swyx [00:36:26]: Let's cover the last couple. And then I want to talk about your trends analysis stuff, you know? Totally.Micah [00:36:31]: So actually, I have one little factoid on Omniscience. If you go back up to accuracy on Omniscience, an interesting thing about this accuracy metric is that it tracks, more closely than anything else that we measure, the total parameter count of models. Makes a lot of sense intuitively, right? Because this is a knowledge eval. This is the pure knowledge metric. We're not looking at the index and the hallucination rate stuff that we think is much more about how the models are trained. This is just: what facts did they recall? And yeah, it tracks parameter count extremely closely. Okay.swyx [00:37:05]: What's the rumored size of Gemini 3 Pro? And to be clear, not confirmed by any official source, just rumors. But rumors do fly around. Rumors. I hear all sorts of numbers. I don't know what to trust.Micah [00:37:17]: So if you draw the line on Omniscience accuracy versus total parameters — we've got all the open weights models — you can squint and see that likely the leading frontier models right now are quite a lot bigger than the one trillion parameters that the open weights models we're looking at here cap out at. And there's an interesting extra data point that Elon Musk revealed recently about xAI: three trillion parameters for Grok 3 and 4, six trillion for Grok 5, but that's not out yet. Take those together, have a look. You might reasonably form a view that there's a pretty good chance that Gemini 3 Pro is bigger than that, that it could be in the 5 to 10 trillion parameter range. To be clear, I have absolutely no idea, but just based on this chart, that's where you would land if you have a look at it. Yeah.swyx [00:38:07]: And to some extent, I actually kind of discourage people from guessing too much, because what does it really matter? Like, as long as they can serve it at a sustainable cost, that's about it. Like, yeah, totally.George [00:38:17]: They've also got different incentives in play compared to, like, open weights models, who are thinking about supporting others in self-deployment. For the labs who are doing inference at scale, it's, I think, less about total parameters in many cases when thinking about inference costs, and more around the number of active parameters. And so there's a bit of an incentive towards larger, sparser models. Agreed.Micah [00:38:38]: Understood. Yeah. Great. I mean, obviously, if you're a developer or a company using these things, it's exactly as you say, it doesn't matter. You should be looking at all the different ways that we measure intelligence. You should be looking at the cost to run the index and the different ways of thinking about token efficiency and cost efficiency based on the list prices, because that's what matters.swyx [00:38:56]: It's not as good for the content creator rumor mill, where I can say, oh, GPT-4 is this small circle, look, GPT-5 is this big circle. That used to be a thing for a while.
Yeah.Micah [00:39:07]: But that is, like, on its own, actually a very interesting one, right? That is, it's just purely that chances are the last couple of years haven't seen a dramatic scaling up in the total size of these models. And so there's a lot of room to go up properly in total size of the models, especially with the upcoming hardware generations. Yes.swyx [00:39:29]: So, you know, taking off my shitposting hat for a minute. Yes. Yes. At the same time, I do feel like, you know, especially coming back from Europe, people do feel like Ilya is probably right that the paradigm doesn't have many more orders of magnitude to scale out. And therefore we need to start exploring at least a different path. GDPval, I think, is only like a month or so old. I was also very positive about it when it first came out. I actually talked to Tejal, who was the lead researcher on that. Oh, cool. And you have your own version.George [00:39:59]: It's a fantastic data set. Yeah.swyx [00:40:01]: And maybe we'll recap for people who are still out of it. It's like 44 tasks based on some kind of GDP cutoff that are meant to represent broad white collar work that is not just coding. Yeah.Micah [00:40:12]: Each of the tasks has a whole bunch of detailed instructions, and some input files for a lot of them. Within the 44, it's divided into, like, 220 to 225 subtasks, which are the level at which we run them through the agentic harness. And yeah, they're really interesting. I will say that it doesn't necessarily capture, like, all the stuff that people do at work. No eval is perfect; there are always going to be more things to look at, largely because in order to make the tasks well enough defined that you can run them, they need to only have a handful of input files and very specific instructions for that task. And so I think the easiest way to think about them is that they're like quite hard take-home exam tasks that you might do in an interview process.swyx [00:40:56]: Yeah, for listeners, it is no longer like a long prompt. It is like, well, here's a zip file with a spreadsheet or a PowerPoint deck or a PDF — go nuts and answer this question.George [00:41:06]: OpenAI released a great data set, and they released a good paper which looks at performance across the different web chatbots on the data set. It's a great paper; I encourage people to read it. What we've done is taken that data set and turned it into an eval that can be run on any model. So we created a reference agentic harness that can run the models on the data set, and then we developed an evaluator approach to compare outputs. That's kind of AI-enabled, so it uses Gemini 3 Pro Preview to compare results, which we tested pretty comprehensively to ensure that it's aligned with human preferences. One data point there is that even with Gemini 3 Pro as the evaluator, Gemini 3 Pro itself, interestingly, doesn't actually do that well. So that's kind of a good example of what we've done in GDPval AA.swyx [00:42:01]: Yeah, the thing that you have to watch out for with LLM-as-judge is self-preference — models usually prefer their own output — and in this case, it was not the case.
Totally.Micah [00:42:08]: I think the way that we're thinking about the places where it makes sense to use an LLM-as-judge approach now is quite different to some of the early LLM-as-judge stuff a couple of years ago, because some of that — and MT-Bench was a great project that was a good example of some of this a while ago — was about judging conversations and, like, a lot of style-type stuff. Here, the task that the grading model is doing is quite different to the task of taking the test. When you're taking the test, you've got all of the agentic tools you're working with — the code interpreter and web search, the file system — to go through many, many turns to try to create the documents. Then on the other side, when we're grading it, we're running it through a pipeline to extract visual and text versions of the files and be able to provide that to Gemini, and we're providing the criteria for the task and getting it to pick which of two potential outputs more effectively meets the criteria of the task. Yeah. It turns out that it's just very, very good at getting that right — it matched human preference a lot of the time — because I think it's got the raw intelligence, but that's combined with the correct representation of the outputs, the fact that the outputs were created with an agentic task that is quite different to the way the grading model works, and we're comparing against criteria, not just kind of zero-shot asking the model to pick which one is better.swyx [00:43:26]: Got it. Why is this an Elo and not a percentage, like GDPval?George [00:43:31]: So the outputs look like documents, and there are video outputs or audio outputs from some of the tasks. It has to make a video? Yeah, for some of the tasks. Some of the tasks.swyx [00:43:43]: What task is that?George [00:43:45]: I mean, it's in the data set. Like, be a YouTuber? It's a marketing video.Micah [00:43:49]: Oh, wow. What? Like, the model has to go find clips on the internet and try to put it together. The models are not that good at doing that one, for now, to be clear. It's pretty hard to do that with a code interpreter. I mean, the computer use stuff doesn't work quite well enough, and so on and so on, but yeah.George [00:44:02]: And so there's no kind of ground truth, necessarily, to compare against to work out percentage correct. It's hard to come up with correct or incorrect there. And so it's on a relative basis, and we use an Elo approach to compare outputs from each of the models across the tasks.swyx [00:44:23]: You know what you should do? You should pay a contractor, a human, to do the same task, and then give it an Elo, and then you have a human in there. It's just, I think what's helpful about GDPval, the OpenAI one, is that 50% is meant to be a normal human, and maybe a domain expert is higher than that, but 50% was the bar for, like, well, if you've crossed 50, you are superhuman. Yeah.Micah [00:44:47]: So we haven't grounded this score in that exactly. I agree that it can be helpful, but we wanted to generalize this to a very large number of models. It's one of the reasons that presenting it as an Elo is quite helpful and allows us to add models, and it'll stay relevant for quite a long time. I also think it can be tricky looking at these exact tasks compared to the human performance, because the way that you would go about it as a human is quite different to how the models would go about it.
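Because there is no ground truth to mark a generated document or video "correct," the GDPval AA ranking comes from pairwise judge preferences. A minimal sketch of turning such preferences into Elo-style ratings follows; the K-factor, starting rating, and update rule here are generic illustrations, not the actual aggregation Artificial Analysis uses.

```python
from collections import defaultdict

def update_elo(ratings: dict, winner: str, loser: str, k: float = 32.0) -> None:
    """Standard Elo update after the judge prefers `winner` over `loser`."""
    ra, rb = ratings[winner], ratings[loser]
    expected = 1.0 / (1.0 + 10 ** ((rb - ra) / 400.0))  # P(winner beats loser)
    ratings[winner] = ra + k * (1.0 - expected)
    ratings[loser] = rb - k * (1.0 - expected)

# Hypothetical judge verdicts: (preferred output, rejected output) per task.
verdicts = [("model_a", "model_b"), ("model_a", "model_c"), ("model_c", "model_b")]
ratings = defaultdict(lambda: 1000.0)
for preferred, rejected in verdicts:
    update_elo(ratings, preferred, rejected)
print(dict(ratings))
```

One property this buys, as discussed above, is that new models can be slotted into the existing comparison pool later without re-anchoring the whole scale.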
Yeah.swyx [00:45:15]: I also liked that you included Llama 4 Maverick in there. Is that like just one last, like...Micah [00:45:20]: Well, no, no, it is the best model released by Meta. And... so it makes it into the homepage default set, still, for now.George [00:45:31]: Another inclusion that's quite interesting is that we also ran it across the latest versions of the web chatbots. And so we have...swyx [00:45:39]: Oh, that's right.George [00:45:40]: Oh, sorry.swyx [00:45:41]: I, yeah, I completely missed that. Okay.George [00:45:43]: No, not at all. So that's the one which has a checkered pattern. So that is their harness, not yours, is what you're saying. Exactly. And what's really interesting is that if you compare, for instance, Claude 4.5 Opus using the Claude web chatbot, it performs worse than the model in our agentic harness. And so in every case, the model performs better in our agentic harness than its web chatbot counterpart, the harness that they created.swyx [00:46:13]: Oh, my backwards explanation for that would be that, well, it's meant for consumer use cases, and here you're pushing it for something else.Micah [00:46:19]: The constraints are different, and the amount of freedom that you can give the model is different. Also, they, like, have a cost goal. We let the models work as long as they want, basically. Yeah. Do you copy paste manually into the chatbot? Yeah. Yeah. That was how we got the chatbot reference. We're not going to be keeping those updated at quite the same scale as hundreds of models.swyx [00:46:38]: Well, so I don't know, talk to Browserbase. They'll automate it for you. You know, I have thought about, like, well, we should turn these chatbot versions into an API, because they are legitimately different agents in themselves. Yes. Right. Yeah.Micah [00:46:53]: And that's grown a huge amount over the last year, right? Like, the tools that are available have actually diverged, in my opinion, a fair bit across the major chatbot apps, and the number of data sources that you can connect them to has gone up a lot, meaning that your experience and the way you're using the model is more different than ever.swyx [00:47:10]: What tools and what data connections come to mind? What's interesting, what's notable work that people have done?Micah [00:47:15]: Oh, okay. So my favorite example on this is that until very recently, I would argue that it was basically impossible to get an LLM to draft an email for me in any useful way. Because most times that you're sending an email, you're not just writing something for the sake of writing it. Chances are the context required is a whole bunch of historical emails. Maybe it's notes that you've made, maybe it's meeting notes, maybe it's pulling something from wherever you store stuff at work. So for me, like Google Drive, OneDrive, our Supabase databases if we need to do some analysis on some data or something — preferably the model can be plugged into all of those things and can go do some useful work based on it. The things that I find most impressive currently, that I am somewhat surprised work really well in late 2025, are that I can have models use the Supabase MCP to query — read-only, of course — run a whole bunch of SQL queries to do pretty significant data analysis, and make charts and stuff, and they can read my Gmail and my Notion. And okay, you actually use that. That's good. That's good. Is that a Claude thing?
To various degrees, both ChatGPT and Claude right now. I would say that this stuff, like, barely works, in fairness, right now. Like.George [00:48:33]: Because people are actually going to try this after they hear it. If you get an email from Micah, odds are it wasn't written by a chatbot.Micah [00:48:38]: So, yeah, I think it is true that I have never actually sent anyone an email drafted by a chatbot. Yet.swyx [00:48:46]: And so you can feel it, right? And yeah, this time next year, we'll come back and see where it's going. Totally. Supabase shout-out, another famous Kiwi. I don't know if you've had any conversations with him about anything in particular on AI building and AI infra.George [00:49:03]: We have had Twitter DMs with him, because we're quite big Supabase users and power users. And we probably do some things more manually than we should in Supabase, and their support line has been super friendly about it. One extra point regarding GDPval AA is that, on the basis of the overperformance of the models compared to the chatbots, we realized that, oh, our reference harness that we built actually works quite well on, like, generalist agentic tasks. This proves it in a sense. And so the agent harness is very minimalist. I think it follows some of the ideas that are in Claude Code, and all that we give it is context management capabilities, a web search and web browsing tool, and a code execution environment. Anything else?Micah [00:50:02]: I mean, we can equip it with more tools, but by default, yeah, that's it. For GDPval, we give it a tool to view an image specifically, because the models, you know, can just use a terminal to pull stuff in text form into context, but to pull visual stuff into context, we had to give them a custom tool. But yeah, exactly. You can explain it better. No.George [00:50:21]: So it turned out that we created a good generalist agentic harness. And so we released that on GitHub yesterday. It's called Stirrup. So if people want to check it out, it's a great, you know, base for building a generalist agent for more specific tasks.Micah [00:50:39]: I'd say the best way to use it is git clone and then have your favorite coding agent make changes to it, to do whatever you want, because it's not that many lines of code and the coding agents can work with it super well.swyx [00:50:51]: Well, that's nice for the community to explore and share and hack on. I think maybe in other similar environments, the Terminal-Bench guys have done sort of the same with Harbor. And so it's a bundle of, well, we need our minimal harness, which for them is Terminus, and we also need the RL environments or Docker deployment thing to run independently. So I don't know if you've looked at Harbor at all — is that like a standard that people want to adopt?George [00:51:19]: Yeah, we've looked at it from an evals perspective, and we love Terminal-Bench and host benchmarks of Terminal-Bench on Artificial Analysis. We've looked at it from a coding agent perspective, but could see it being a great basis for any kind of agent. I think where we're getting to is that these models have gotten smart enough.
They've gotten better tools, and they can perform better when just given a minimalist set of tools and let run — let the model control the agentic workflow rather than using another framework that's a bit more built out and tries to dictate the flow. Awesome.swyx [00:51:56]: Let's cover the Openness Index, and then let's go into the report stuff. So that's the last of the proprietary AA numbers, I guess. I don't know how you sort of classify all these. Yeah.Micah [00:52:07]: Or, call it the last of, like, the three new things that we're talking about from the last few weeks. Because, I mean, we do a mix of stuff where we're using open source, where we open source what we do, and proprietary stuff that we don't always open source — like the long context reasoning data set last year, we did open source. And then all of the work on performance benchmarks across the site — some of them we're looking to open source, but some of them we're constantly iterating on, and so on and so on. So there's a huge mix, I would say, of stuff that is open source and not across the site. So that's AA-LCR, for people. Yeah, yeah.swyx [00:52:41]: But let's talk about open—Micah [00:52:42]: Let's talk about the Openness Index. This here is, call it, a new way to think about how open models are. We, for a long time, have tracked whether the models are open weights and what the licenses on them are. And that's pretty useful. That tells you what you're allowed to do with the weights of a model, but there is this whole other dimension to how open models are that is pretty important and that we haven't tracked until now. And that's how much is disclosed about how it was made. So transparency about data — pre-training data and post-training data — and whether you're allowed to use that data, and transparency about methodology and training code. So basically, those are the components. We bring them together to score an Openness Index for models, so that you can in one place get this full picture of how open models are.swyx [00:53:32]: I feel like I've seen a couple other people try to do this, but they're not maintained. I do think this does matter. I don't know what the numbers mean, apart from, is there a max number? Is this out of 20?George [00:53:44]: It's out of 18 currently, and so we've got an Openness Index page, but essentially these are points: you get points for being more open across these different categories, and the maximum you can achieve is 18. So Ai2, with their extremely open OLMo 3 32B Think model, is the leader, in a sense.swyx [00:54:04]: What about Hugging Face?George [00:54:05]: Oh, with their smaller model. It's coming soon. I think we need to run — we need to get the intelligence benchmarks right to get it on the site.swyx [00:54:12]: You can't have an openness index without— We can't not include Hugging Face. We love Hugging Face. We'll have that up very soon. I mean, you know, RefinedWeb and all that stuff. It's amazing. Or is it called FineWeb? FineWeb. FineWeb.Micah [00:54:23]: Yeah, yeah, no, totally. Yep. One of the reasons this is cool, right, is that if you're trying to understand the holistic picture of the models and what you can do with all the stuff the companies are contributing, this gives you that picture. And so we are going to keep it up to date alongside all the models that we do the intelligence index on, on the site.
And it's just an extra view to understand.swyx [00:54:43]: Can you scroll down to this? The trade-offs chart. Yeah, yeah. That one. Yeah. This really matters, right? Obviously, because you can b
Have you ever felt like your Linux server is running out of breath for lack of space? We often obsess over buying more storage, expanding our VPS quota, or adding disks to the NAS, but the solution is usually not to spend more, it's to clean better. In this episode, I tell you how I discovered that I had a "tenant" with digital Diogenes syndrome: Docker.Join me in this "new year's cleanup" where we take a chainsaw to more than 600 GB of junk that served no purpose and that Docker had been silently accumulating on my system.We cannot clean what we cannot see. The first step is a deep analysis of the storage. While tools like df -h or dust give us a general idea, Docker requires a specific look to understand its data structure.The star command: docker system df is the essential tool that will tell you exactly how many images, containers, and volumes you have, but above all, how much space is reclaimable (immediately recoverable).In my case, the big culprit was not the active images but the Build Cache, with hundreds of intermediate layers accumulated after weeks of programming and compiling Rust projects.This is not about deleting for the sake of deleting. If you wipe the entire build cache at once, the next time you need to compile a project the process will take forever, because Docker will have to download and rebuild everything from scratch.Surgical cleanup: The key is to use time filters like --filter "until=48h". This lets you keep what you are using in your daily development while wiping out last week's obsolete files.Orphaned volumes: We will learn to identify the data that stays "alive" after the container has been removed and only serves to take up unnecessary space.The problem with logs: If you don't configure rotation, text logs can grow without limit. I show you how to use fd to locate them quickly and how to configure the daemon.json file to cap their size once and for all.As a lover of efficiency and productivity, I don't want to have to run these commands by hand every week. I have designed a cleanup script called docker-clean that safely handles stopped containers, untagged images, and the monstrous build cache.To keep the system in perfect shape automatically, I explain how to schedule this script with a Systemd Timer. That way, every Sunday at 8:00 AM, your server will do its own maintenance without you lifting a finger.00:00:00 The obsession with storage00:01:18 Docker: a junk hoarder on your system00:03:40 Deep diagnosis with df and dust00:05:45 The star command: docker system df00:07:32 The Build Cache monster and intermediate layers00:10:24 Orphaned volumes: the data graveyard00:11:21 Surgical log management with fd00:14:36 My personal cleanup script: docker-clean00:16:18 Full automation with Systemd Timers00:18:07 Conclusion and New Year's resolutionYou can find all the commands, the complete cleanup script, and the timer and service configurations in the detailed episode notes at: https://atareao.esAnd you? Have you checked how much space Docker is stealing from you today?
Stop by the Telegram group t.me/atareao_con_linux and share with the community how many gigabytes you have managed to recover.#Linux #Docker #SelfHosted #DevOps #Atareao #Productividad #SysAdmin #Rust #OpenSourceMore information and links in the episode notes
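The episode's actual docker-clean script and systemd units are in the show notes at atareao.es; as a rough sketch of the surgical cleanup described above (the 48-hour build-cache filter, stopped containers, dangling images and orphaned volumes), here is a hypothetical minimal version that drives the Docker CLI from Python, assuming Docker is installed and the user has permission to run it.

```python
#!/usr/bin/env python3
"""Hypothetical docker-clean sketch (not the episode's script): report usage,
then prune build cache older than 48h, stopped containers, dangling images
and orphaned volumes."""
import subprocess

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    run(["docker", "system", "df"])                                     # show what is reclaimable
    run(["docker", "builder", "prune", "-f", "--filter", "until=48h"])  # keep the last 2 days of cache
    run(["docker", "container", "prune", "-f"])                         # stopped containers
    run(["docker", "image", "prune", "-f"])                             # dangling (untagged) images
    run(["docker", "volume", "prune", "-f"])                            # orphaned volumes
```

A systemd timer or a cron entry can then run something like this weekly, which is the automation approach described in the episode.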
In this episode of the Ardan Labs Podcast, Bill Kennedy talks with Miles Spencer, founder of Reflekta, about using AI to preserve the stories and legacies of loved ones. Miles explains how Reflekta enables meaningful, ongoing conversations with elders through AI while prioritizing privacy, ethics, and emotional responsibility.They explore the technical foundations of the platform, the challenges of building trust around deeply personal data, and the business model behind Reflekta. Miles also shares his journey from high school to entrepreneurship and reflects on how technology can foster genuine human connection rather than replace it.00:00 Introduction 02:35 How Reflekta Uses AI05:10 Media Processing and Conversations08:26 Privacy, Security, and Ethics13:25 Miles' Background and Journey27:57 Early Entrepreneurship and Family Challenges36:49 Finance, Venture Capital, and Media44:57 New Ventures and Career Shifts56:54 COVID-19 and Business Impact01:00:04 The Birth of Reflekta01:09:03 Ethical Challenges and the Road AheadConnect with Miles: LinkedIn: https://www.linkedin.com/in/milesspencer/Mentioned in this Episode:Reflekta: https://reflekta.ai/Want more from Ardan Labs? You can learn Go, Kubernetes, Docker & more through our video training, live events, or through our blog!Online Courses : https://ardanlabs.com/education/ Live Events : https://www.ardanlabs.com/live-training-events/ Blog : https://www.ardanlabs.com/blog Github : https://github.com/ardanlabs
Topics covered in this episode: ty: An extremely fast Python type checker and LSP Python Supply Chain Security Made Easy typing_extensions MI6 chief: We'll be as fluent in Python as we are in Russian Extras Joke Watch on YouTube About the show Connect with the hosts Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky) Brian: @brianokken@fosstodon.org / @brianokken.bsky.social Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky) Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too. Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? Add your name and email to our friends of the show list, we'll never share it. Brian #1: ty: An extremely fast Python type checker and LSP Charlie Marsh announced the Beta release of ty on Dec 16 “designed as an alternative to tools like mypy, Pyright, and Pylance.” Extremely fast even from first run Successive runs are incremental, only rerunning necessary computations as a user edits a file or function. This allows live updates. Includes nice visual diagnostics much like color enhanced tracebacks Extensive configuration control Nice for if you want to gradually fix warnings from ty for a project Also released a nice VSCode (or Cursor) extension Check the docs. There are lots of features. Also a note about disabling the default language server (or disabling ty's language server) so you don't have 2 running Michael #2: Python Supply Chain Security Made Easy We know about supply chain security issues, but what can you do? Typosquatting (not great) Github/PyPI account take-overs (very bad) Enter pip-audit. Run it in two ways: Against your installed dependencies in current venv As a proper unit test (so when running pytest or CI/CD). Let others find out first, wait a week on all dependency updates: uv pip compile requirements.piptools --upgrade --output-file requirements.txt --exclude-newer "1 week" Follow-up article: DevOps Python Supply Chain Security Create a dedicated Docker image for testing dependencies with pip-audit in isolation before installing them into your venv. Run pip-compile / uv lock --upgrade to generate the new lock file Test in an ephemeral pip-audit optimized Docker container Only then if things pass, uv pip install / uv sync Add a dedicated Docker image build step that fails the docker build step if a vulnerable package is found. Brian #3: typing_extensions Kind of a follow-up on the deprecation warning topic we were talking about in December. prioinv on Mastodon notified us that the project typing-extensions includes it as part of the backport set. The warnings.deprecated decorator is new to Python 3.13, but with typing-extensions, you can use it in previous versions. But typing_extensions is way cooler than just that. The module serves 2 purposes: Enable use of new type system features on older Python versions. Enable experimentation with type system features proposed in new PEPs before they are accepted and added to the typing module. So cool. There's a lot of features here. I'm hoping it allows someone to use the latest typing syntax across multiple Python versions. I'm “tentatively” excited. But I'm bracing for someone to tell me why it's not a silver bullet.
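As a quick illustration of the backport mentioned above, here is roughly what using the deprecated decorator through typing_extensions looks like on Python versions older than 3.13; the function names are made up for the example.

```python
from typing_extensions import deprecated  # backport of the PEP 702 decorator

@deprecated("Use parse_config_v2() instead")
def parse_config(path: str) -> dict:
    """Old API kept around for compatibility."""
    return {"path": path}

# Type checkers flag calls to parse_config, and a DeprecationWarning is
# emitted at runtime when it is called.
parse_config("settings.toml")
```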
Michael #4: MI6 chief: We'll be as fluent in Python as we are in Russian. "Advances in artificial intelligence, biotechnology and quantum computing are not only revolutionizing economies but rewriting the reality of conflict, as they 'converge' to create science fiction-like tools," said new MI6 chief Blaise Metreweli. She focused mainly on threats from Russia, which she said is "testing us in the grey zone with tactics that are just below the threshold of war." This demands what she called "mastery of technology" across the service, with officers required to become "as comfortable with lines of code as we are with human sources, as fluent in Python as we are in multiple other languages." Recruitment will target linguists, data scientists, engineers, and technologists alike.
Extras
Brian: The next chapter of Lean TDD, Finding Waste in TDD, is being released today. Still going to attempt a Jan 31 deadline for the first draft of the book; that really doesn't seem like enough time, but I'm optimistic. The Steam Deck is not helping me find time to write, but I very much appreciate the gift from my fam. Send me game suggestions on Mastodon or Bluesky; I'd love to hear what you all are playing.
Michael: Astral has announced the Beta release of ty, which they say they are "ready to recommend to motivated users for production use." Blog post. Release page. Reuven Lerner has a video series on Pandas 3.
Joke: Error Handling in the age of AI, a play on the inversion of JavaScript: The Good Parts.
In this episode I talk with Lio Lunesu, CTO of Defang, about infrastructure as code, Docker, and Docker Compose. Defang compiles Docker Compose files into cloud infrastructure code.
Links:
Defang
Lio Lunesu on LinkedIn
SaturnCI
Nonsense Monthly
This week, Brian Gracely joins to dissect strategic choices made by Broadcom, Docker, Netflix and Intel. Plus: The AI Bifurcation—are models commodities or product pillars?
Rundown
Licensing in VMware Cloud Foundation 9.0 (https://blogs.vmware.com/cloud-foundation/2025/06/24/licensing-in-vmware-cloud-foundation-9-0/)
Hardened Images for Everyone (https://www.docker.com/blog/docker-hardened-images-for-every-developer/)
Introducing Chainguard EmeritOSS (https://www.chainguard.dev/unchained/introducing-chainguard-emeritoss)
Netflix to Acquire Warner Bros. (https://about.netflix.com/en/news/netflix-to-acquire-warner-bros)
Anthropic reportedly preparing for one of the largest IPOs (https://www.cnbc.com/2025/12/03/anthropic-claude-reportedly-preparing-ipo-race-openai-chatgpt-ft-wilson-sonsini-goodrich-rosati.html)
Conferences
cfgmgmtcamp 2026 (https://cfgmgmtcamp.org/ghent2026/), February 2nd to 4th, Ghent, BE. Coté speaking and doing live SDI (https://www.softwaredefinedinterviews.com) with John Willis.
DevOpsDayLA at SCALE23x (https://www.socallinuxexpo.org/scale/23x), March 6th, Pasadena, CA. Use code DEVOP for 50% off.
Devnexus 2026 (https://devnexus.com), March 4th to 6th, Atlanta, GA.
Whole bunch of VMUGs, mostly in the US. The CFPs are open (https://app.sessionboard.com/submit/vmug-call-for-content-2026/ae1c7013-8b85-427c-9c21-7d35f8701bbe?utm_campaign=5766542-VMUG%20Voice&utm_medium=email&_hsenc=p2ANqtz-_YREN7dr6p3KSQPYkFSN5K85A-pIVYZ03ZhKZOV0O3t3h0XHdDHethhx5O8gBFguyT5mZ3n3q-ZnPKvjllFXYfWV3thg&_hsmi=393690000&utm_content=393685389&utm_source=hs_email), go speak at them! Coté speaking in Amsterdam: Amsterdam (March 17-19, 2026), Minneapolis (April 7-9, 2026), Toronto (May 12-14, 2026), Dallas (June 9-11, 2026), Orlando (October 20-22, 2026)
SDT News & Community
Join our Slack community (https://softwaredefinedtalk.slack.com/join/shared_invite/zt-1hn55iv5d-UTfN7mVX1D9D5ExRt3ZJYQ#/shared-invite/email)
Email the show: questions@softwaredefinedtalk.com
Free stickers: Email your address to stickers@softwaredefinedtalk.com
Follow us on social media: Twitter (https://twitter.com/softwaredeftalk), Threads (https://www.threads.net/@softwaredefinedtalk), Mastodon (https://hachyderm.io/@softwaredefinedtalk), LinkedIn (https://www.linkedin.com/company/software-defined-talk/), BlueSky (https://bsky.app/profile/softwaredefinedtalk.com)
Watch us on: Twitch (https://www.twitch.tv/sdtpodcast), YouTube (https://www.youtube.com/channel/UCi3OJPV6h9tp-hbsGBLGsDQ/featured), Instagram (https://www.instagram.com/softwaredefinedtalk/), TikTok (https://www.tiktok.com/@softwaredefinedtalk)
Book offer: Use code SDT for $20 off "Digital WTF" by Coté (https://leanpub.com/digitalwtf/c/sdt)
Sponsor the show (https://www.softwaredefinedtalk.com/ads): ads@softwaredefinedtalk.com
Photo Credits: Header (https://unsplash.com/photos/university-of-alabama-football-game-tuscaloosa-alabama-YcVe7gL9A0s)
Special Guest: Brian Gracely.
Welcome to a new episode of Atareao con Linux! We are closing out 2025, and it's time to take stock of the tools that have really made a difference on my server. In this episode I bring you a personal selection of the 6 Docker containers that have become indispensable in my day-to-day work. Not just for their functionality, but for how they have simplified my workflow, improved security, and optimized the performance of my systems.
What you'll find in this episode:
1. Dockge: The lightweight alternative to Portainer that manages your YAML files reactively and makes it easy to convert docker run commands to compose.
2. Pocket ID: "Passwordless" security taken to the next level. Centralize your access with Passkeys and biometrics, with OIDC support.
3. Doc Pick: The perfect replacement for managing updates. Keep your images up to date with an intuitive visual panel.
4. Beszel: Resource monitoring (CPU, RAM, disk) for both the host and each individual container.
5. Quantum: A fork of File Browser enhanced with fuzzy search and OIDC integration.
6. Memos: Your private and extremely versatile micro-notes hub, thanks to its API.
Timestamps:
00:00:00 Introduction and 2025 year in review
00:01:05 The six must-have containers of 2025
00:02:14 Considerations on workflow and maintenance
00:03:56 Dockge: The modern, lightweight replacement for Portainer
00:06:07 Pocket ID: Passwordless authentication and OIDC
00:08:01 Doc Pick: Container image information and updates
00:10:40 Beszel: CPU, RAM, and Docker resource monitor
00:12:42 Quantum: File Browser fork with OIDC integration
00:14:39 Memos: Managing micro-notes and links via its API
00:18:28 Final summary and farewell
Resources and links: More information and links in the episode notes
We all know that no matter how good the hardware is, if the software doesn't keep up, it's worth nothing. This time I explain why I chose Proxmox as my base system, with TrueNAS virtualized and Docker running inside it.
Links: James on LinkedIn (https://www.linkedin.com/in/jahuang/), Mike on LinkedIn (https://www.linkedin.com/in/dominucco/), Mike's Blog (https://dominickm.com), Show on Discord (https://discord.com/invite/k8e7gKUpEp), Alice Promo (https://go.alice.dev/data-migration-offer-hands-on)
AI on Red Hat Enterprise Linux (RHEL)
Trust and Stability: RHEL provides the mission-critical foundation needed for workloads where security and reliability cannot be compromised.
Predictive vs. Generative: Acknowledging the hype of GenAI while maintaining support for traditional machine learning algorithms.
Determinism: The challenge of bringing consistency and security to emerging AI technologies in production environments.
RamaLama & Containerization
Developer Simplicity: RamaLama helps developers run local LLMs easily without being "locked in" to specific engines; it supports Podman, Docker, and various inference engines like Llama.cpp and Whisper.cpp.
Production Path: The tool is designed to "fade away" after helping package the model and stack into a container that can be deployed directly to Kubernetes.
Behind the Firewall: Addressing the needs of industries (like aircraft maintenance) that require AI to stay strictly on-premises.
Enterprise AI Infrastructure
Red Hat AI: A commercial product offering tools for model customization, including pre-training, fine-tuning, and RAG (Retrieval-Augmented Generation).
Inference Engines: James highlights the difference between Llama.cpp (for smaller/edge hardware) and vLLM, which has become the enterprise standard for multi-GPU data center inferencing.
This week I'm the guest and my friends at Whiskey Web and Whatnot are the hosts. And they're great hosts, because they send their guests a bottle of whiskey before talking web and whatnot... As we head into the holidays I hope you'll raise a glass with us and enjoy this very laid-back episode... Chuck and Robbie hosted me a year ago and I love that they got me on tape when they did, because it was just as I was starting to consider making some big changes to my show... Changes that I will announce in late January... so get excited for that! And please subscribe to this here podcast in your favorite apps, and get the newsletter at crafted.fm
Here's how they described the episode: Robbie and Chuck talk with Dan Blumberg about his journey from radio producer to product manager and podcaster. They explore the art of building great software, podcasting essentials, and the changing landscape of podcast platforms. Plus, Dan shares his kayaking adventures and insights on balancing authenticity and growth.
And if you please…
Subscribe to the CRAFTED. newsletter at crafted.fm
Share with a friend! Word of mouth is by far the most powerful way for podcasts to grow
Sponsor the show? I'm actively speaking to potential sponsors for 2026 episodes. Drop me a line and let's talk.
Get psyched!… There are some big updates to this show coming soon
For more on Whiskey Web and Whatnot, check out: https://whiskey.fm
Connect with Robbie Wagner: https://x.com/RobbieTheWagner
Connect with Chuck Carpenter: https://x.com/CharlesWthe3rd
In this episode:
- (00:00) - Intro
- (03:26) - Whiskey review and rating: Woodinville Straight Bourbon
- (09:23) - Apple Podcasts vs Spotify
- (11:20) - Spotify video vs YouTube
- (13:02) - Podcasting audio vs video
- (15:24) - Advice on starting a podcast
- (19:24) - Equipment requirements for guests on podcasts
- (22:15) - Having a pre-interview interview
- (26:06) - Social media and podcasting challenges
- (27:37) - How to grow your audience
- (33:18) - How to make money as a podcaster
- (37:28) - Being yourself vs having a persona
- (38:42) - Monetizing your podcast
- (42:11) - What's missing from RSS
- (43:38) - Dan's non-tech career ideas
- (45:40) - Podcast recommendations
- (49:12) - Dan's plugs
Links
- Woodinville Straight Bourbon: https://woodinvillewhiskeyco.com/
- Crafted: https://crafted.fm
- WNYC: https://www.wnyc.org/
- NYT: https://www.nytimes.com/
- Apple Podcasts: https://podcasts.apple.com/
- Spotify: https://www.spotify.com/
- Pocket Casts: https://pocketcasts.com/
- IAB: https://www.iab.com/
- National Geographic: https://www.nationalgeographic.com/
- Shure SM7B: https://www.shure.com/en-US/products/microphones/sm7b
- Focusrite: https://focusrite.com/
- Shure MV7: https://www.shure.com/en-US/products/microphones/mv7
- Elgato: https://www.elgato.com/
- AirPods: https://www.apple.com/airpods/
- Audio Technica: https://www.audio-technica.com/en-us/
- Morning Edition: https://www.wnyc.org/shows/me
- Chicago Public Radio: https://www.wbez.org/
- Riverside: https://riverside.fm/
- TikTok: https://www.tiktok.com/
- Mr. Beast: https://youtube.com/@mrbeast
- Docker: https://www.docker.com/
- Artium: https://www.thisisartium.com/
- Jay Clouse: https://creatorscience.com/
- Hark: https://harkaudio.com/
- Syntax: https://syntax.fm/
- Hard Fork: https://www.nytimes.com/column/hard-fork
- Big Technology with Alex Kantrowitz: https://www.bigtechnology.com/
- Decoder with Nilay Patel: https://www.theverge.com/decoder
- How I Built This: https://www.npr.org/series/490248027/how-i-built-this
- Acquired: https://www.acquired.fm/
- Smartless: https://smartless.com/
- Wondery: https://wondery.com/
- Sacha Baron Cohen: https://en.wikipedia.org/wiki/Sacha_Baron_Cohen
- Tim Burton: https://en.wikipedia.org/wiki/Tim_Burton
- Beetlejuice: https://www.warnerbros.com/movies/beetlejuice
- Darknet Diaries: https://darknetdiaries.com/
In this episode of the Ardan Labs Podcast, Bill Kennedy talks with Peter Swimm, founder of Toilville, about the future of AI and workplace automation. Peter shares insights from his long career in technology—from Microsoft to building large-scale conversational AI systems—and discusses the growing need for strong governance around AI usage. They explore privacy and IP concerns, the pitfalls of “vibe coding,” and why documentation and trust logs are essential for safe and effective AI adoption. The conversation also dives into generational shifts in education, the impact of COVID-19 on the job market, and how AI can enhance productivity without replacing authenticity.
00:00 Introduction
03:09 AI and Workplace Automation
05:51 Privacy, IP, and Safety Concerns
08:50 Vibe Coding and Development Challenges
12:04 Governance and Compliance
14:49 Evolution of AI Tools
20:53 Early Tech Experiences
31:28 Cultural Perspectives on AI
36:31 AI and the Future of Education
42:44 Career Paths and Experience
50:54 Early Conversational AI
56:15 COVID-19 and Contact Centers
59:00 AI's Future in Business
01:02:42 From Microsoft to Entrepreneurship
01:13:44 Navigating the AI Landscape
01:28:02 Empowering Businesses with AI Solutions
Connect with Peter: LinkedIn: https://www.linkedin.com/in/peterswimm/
Mentioned in this Episode: Toilville: https://www.toilville.com/
Want more from Ardan Labs? You can learn Go, Kubernetes, Docker & more through our video training, live events, or through our blog!
Online Courses: https://ardanlabs.com/education/
Live Events: https://www.ardanlabs.com/live-training-events/
Blog: https://www.ardanlabs.com/blog
Github: https://github.com/ardanlabs
In this episode of Hands-On IT, Landon Miles explores the history of servers and enterprise IT infrastructure, from early mainframe computers to cloud computing, Linux servers, virtualization, containers, and AI-driven data centers. This episode connects decades of server evolution into a clear, accessible story, focusing on the people, technologies, and ideas that shaped modern computing. From IBM's System/360 and minicomputers, to Unix and Linux, virtualization, cloud platforms like AWS and Azure, and container orchestration with Docker and Kubernetes, this episode explains how servers became the foundation of today's digital world.
Topics covered include:
• Server history and early computing systems
• IBM mainframes and enterprise computing
• Minicomputers and distributed computing
• Unix, Linux, and open-source software
• Virtualization and data center efficiency
• Cloud computing and hyperscale infrastructure
• Docker, Kubernetes, and cloud-native architecture
• AI workloads, GPUs, and modern server hardware
Landon also highlights key figures in computing history, including Grace Hopper, Ken Olsen, Linus Torvalds, Dave Cutler, Diane Greene, and Jeff Bezos, and explains how their work still influences IT operations today.
This episode is part of our December Best Of series, featuring some of our favorite moments and episodes from the past year. Originally aired March 20, 2025.
Alex Kretzschmar joins Adam for a trip down the Linux rabbit hole -- Docker vs Podman, building a Kubernetes cluster, ZFS backups with zfs.rent, bootc, favorite Linux distros, new homelab tools built with AI, self-hosting Immich, content creation, Plex and Jellyfin, the future of piracy and more.
Disney Gone Wild, Docker, AIs, Passkeys, Gogs, React2Shell, Notepad++, Josh Marpet, and More Visit https://www.securityweekly.com/swn for all the latest episodes! Show Notes: https://securityweekly.com/swn-537
Wes and Scott talk about their evolving home-server setups—Synology rigs, Mac minis, Docker vs. VMs, media servers, backups, Cloudflare Tunnels, and the real-world pros and cons of running your own hardware.
Show Notes
00:00 Welcome to Syntax!
01:35 Why use a home server?
07:29 Apps for home servers
16:23 Home server hardware
18:27 Brought to you by Sentry.io
20:45 VMs vs containers and choosing the right software
25:53 How to expose services to the internet safely
30:38 Securing access to your server
Hit us up on Socials!
Syntax: X Instagram TikTok LinkedIn Threads
Wes: X Instagram TikTok LinkedIn Threads
Scott: X Instagram TikTok LinkedIn Threads
Randy: X Instagram YouTube Threads