Arnaud and Guillaume explore the evolution of the Java ecosystem with Java 25, Spring Boot, and Quarkus, as well as the latest trends in artificial intelligence with new models such as Grok 4 and Claude Code. The hosts also take stock of cloud infrastructure and the MCP and CLI debates, while discussing AI's impact on developer productivity and on managing technical debt. Recorded on August 8, 2025. Download the episode: LesCastCodeurs-Episode-329.mp3, or watch it on YouTube.

News

Languages

Java 25: JEP 515: Ahead-of-Time Method Profiling
https://openjdk.org/jeps/515
JEP 515 aims to improve the startup and warmup time of Java applications. The idea is to collect method execution profiles during a previous run, then make them immediately available when the virtual machine starts. This lets the JIT compiler generate native code from the very beginning, without waiting for the application to be running. The change requires no modification to application, library, or framework code. Integration happens through the existing AOT cache creation commands. See also https://openjdk.org/jeps/483 and https://openjdk.org/jeps/514

Java 25: JEP 518: JFR Cooperative Sampling
https://openjdk.org/jeps/518
JEP 518 aims to improve the stability and scalability of the JDK Flight Recorder (JFR) feature for execution profiling. The mechanism for sampling Java thread call stacks is reworked to run only at safepoints, which reduces the risk of instability. The new model allows safer stack walking, notably with the ZGC garbage collector, and more efficient sampling that supports concurrent stack walking. The JEP adds a new event, SafepointLatency, which records the time a thread needs to reach a safepoint. The approach makes the sampling process lighter and faster, because the work of building stack traces is delegated to the target thread itself.

Libraries

Spring Boot 4 M1
https://spring.io/blog/2025/07/24/spring-boot-4-0-0-M1-available-now
Spring Boot 4.0.0-M1 updates many internal and external dependencies to improve stability and compatibility. Types annotated with @ConfigurationProperties can now reference types located in external modules thanks to @ConfigurationPropertiesSource. SSL certificate validity information has been simplified, removing the WILL_EXPIRE_SOON state in favor of VALID. Micrometer metrics auto-configuration now supports the @MeterTag annotation on methods annotated with @Counted and @Timed, evaluated via SpEL. @ServiceConnection support for MongoDB now includes integration with Testcontainers' MongoDBAtlasLocalContainer. Some features and APIs have been deprecated, with recommendations for migrating custom endpoints. Milestone and release candidate versions are now published to Maven Central, in addition to the traditional Spring repository. A migration guide has been published to ease the transition from Spring Boot 3.5 to 4.0.0-M1.
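To make the @MeterTag support concrete, here is a minimal sketch (the service, metric names, and SpEL expression are illustrative assumptions, not taken from the release notes):

```java
import io.micrometer.core.annotation.Counted;
import io.micrometer.core.annotation.Timed;
import io.micrometer.core.aop.MeterTag;
import org.springframework.stereotype.Service;

@Service
public class OrderService {

    public record Order(String channel) {}

    // Boot's auto-configuration applies the @Counted/@Timed aspects; the
    // @MeterTag SpEL expression derives a "channel" tag from the argument,
    // so each timer/counter sample is tagged per sales channel.
    @Timed("orders.place")
    @Counted("orders.placed")
    public void place(@MeterTag(key = "channel", expression = "channel") Order order) {
        // ... business logic ...
    }
}
```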
Switching from Spring Boot to Quarkus: a field report
https://blog.stackademic.com/we-switched-from-spring-boot-to-quarkus-heres-the-ugly-truth-c8a91c2b8c53
A team migrated a Java application from Spring Boot to Quarkus to gain performance and reduce memory consumption. The goal was also to optimize the application for cloud native. The migration was more complex than expected, notably because of incompatibilities with some libraries and a less mature Quarkus ecosystem. Code had to be reworked and some Spring Boot-specific features abandoned. The performance and memory gains are real, but the migration requires a genuine adaptation effort. The Quarkus community is making progress, but support remains limited compared to Spring Boot. Conclusion: Quarkus is interesting for new projects or projects ready to be rewritten, but migrating an existing project is a real challenge.

LangChain4j 1.2.0: new features and improvements
https://github.com/langchain4j/langchain4j/releases/tag/1.2.0
Stable modules: the langchain4j-anthropic, langchain4j-azure-open-ai, langchain4j-bedrock, langchain4j-google-ai-gemini, langchain4j-mistral-ai, and langchain4j-ollama modules are now stable at version 1.2.0.
Experimental modules: most other LangChain4j modules are at version 1.2.0-beta8 and remain experimental/unstable.
Updated BOM: langchain4j-bom has been updated to version 1.2.0, including the latest versions of all modules.
Main improvements: support for reasoning/thinking in models; partial tool calls in streaming; an MCP option to automatically expose resources as tools; for OpenAI, the ability to set custom request parameters and to access raw HTTP responses and SSE events; improvements to error handling and documentation; metadata filtering for Infinispan! (cc Katia)
And 1.3.0 is already available https://github.com/langchain4j/langchain4j/releases/tag/1.3.0 with two new experimental modules, langchain4j-agentic and langchain4j-agentic-a2a, which introduce a set of abstractions and utilities for building agentic applications (a minimal usage sketch appears just before the Cloud section below).

Infrastructure

This time it really is the year of Linux on the desktop!
https://www.lesnumeriques.com/informatique/c-est-enfin-arrive-linux-depasse-un-seuil-historique-que-microsoft-pensait-intouchable-n239977.html
Linux has crossed the 5% mark in the USA. This progress is largely explained by the rise of Linux-based systems in professional environments, on servers, and in some consumer uses. Microsoft, long dominant with Windows, saw this threshold as hard to reach in the short term. Linux's success is also fueled by the growing popularity of open source distributions, which are lighter, more customizable, and suited to varied uses. Cloud, IoT, and server infrastructures run massively on Linux, which contributes to this overall increase. This symbolic shift marks a change of balance in the operating system ecosystem. Windows nevertheless retains a strong presence in some segments, notably among consumers and in traditional businesses. This evolution testifies to the dynamism and growing maturity of Linux solutions, which have become credible and robust alternatives to proprietary offerings.
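As promised above, a minimal LangChain4j sketch of the high-level AiServices API that these releases stabilize (the model choice and key lookup are placeholder assumptions):

```java
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.service.AiServices;

public class AssistantDemo {

    // LangChain4j generates an implementation of this interface at runtime,
    // turning each call into a chat request against the configured model.
    interface Assistant {
        String chat(String userMessage);
    }

    public static void main(String[] args) {
        OpenAiChatModel model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY")) // placeholder key lookup
                .modelName("gpt-4.1-mini")               // assumed model name
                .build();

        Assistant assistant = AiServices.create(Assistant.class, model);
        System.out.println(assistant.chat("One-line summary of LangChain4j 1.2?"));
    }
}
```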
Cloud

Cloudflare 1.1.1.1 drops off the internet for an hour
https://blog.cloudflare.com/cloudflare-1-1-1-1-incident-on-july-14-2025/
On July 14, 2025, Cloudflare's public DNS service 1.1.1.1 suffered a major 62-minute outage, making the service unavailable for most users worldwide and causing intermittent degradation of the Gateway DNS service. The incident followed an update to the topology of Cloudflare's services that activated a configuration error introduced in June 2025. Because of that error, prefixes intended for the 1.1.1.1 service were accidentally included in a new Data Localization Suite service, which disrupted anycast routing. As a result, users could no longer resolve domain names via 1.1.1.1, making most Internet services unreachable for them. This was not an attack or a BGP problem, but an internal configuration error. Cloudflare quickly identified the cause, fixed the configuration, and put measures in place to prevent this type of incident in the future. Service returned to normal after about an hour of unavailability. The incident underlines the complexity and sensitivity of anycast infrastructures and the need for rigorous management of network configurations.

Web

The evolution of Node.js best practices
https://kashw1n.com/blog/nodejs-2025/
Node.js in 2025: development is shifting toward web standards, with fewer external dependencies and a better developer experience.
ES Modules (ESM) by default: replacing CommonJS for better tooling and alignment with the web, using the node: prefix for built-in modules to avoid conflicts.
Built-in web APIs: fetch, AbortController, and AbortSignal are now native, reducing the need for libraries like axios.
Built-in test runner: no more need for Jest or Mocha in most cases; includes a "watch" mode and coverage reports.
Advanced async patterns: heavier use of async/await with Promise.all() for parallelism, and AsyncIterators for event streams.
Worker threads for parallelism: for CPU-heavy tasks, to avoid blocking the main event loop.
Improved developer experience: built-in --watch mode (replaces nodemon) and --env-file support (replaces dotenv).
Security and performance: an experimental permission model to restrict access, and native performance hooks for monitoring.
Simplified distribution: single-executable builds to ease deployment of applications and command-line tools.

Apache ECharts 6 released after 12 years!
https://echarts.apache.org/handbook/en/basics/release-note/v6-feature/
Apache ECharts 6.0: official release after 12 years of evolution, with 12 major upgrades for data visualization along three key dimensions.
More professional visual presentation: a new default theme (modern design), dynamic theme switching, dark mode support.
Extending the limits of data expression: new chart types (chord chart, beeswarm chart) and new features (jittering for dense scatter plots, broken axes), plus improved stock charts.
Freedom of composition: a new matrix coordinate system; improved custom series (code reuse, npm publishing); new bundled custom charts (violin, contour, etc.); optimized axis label layout.

Data and Artificial Intelligence

Grok 4 took itself for a Nazi because of its tools
https://techcrunch.com/2025/07/15/xai-says-it-has-fixed-grok-4s-problematic-responses/
At launch, Grok 4 generated offensive responses, notably nicknaming itself "MechaHitler" and adopting antisemitic rhetoric. The behavior came from an automatic web search that misinterpreted a viral meme as fact. Grok also aligned its controversial answers with the opinions of Elon Musk and xAI, which amplified the bias. xAI found that the lapses were caused by an internal update that added instructions encouraging offensive humor and alignment with Musk. To fix this, xAI removed the faulty code, reworked the system prompts, and imposed guidelines requiring Grok to perform independent analysis using diverse sources. Grok must now avoid bias, drop politically incorrect humor, and analyze sensitive topics objectively. xAI apologized, saying the lapses were due to a prompt problem rather than the model itself. The incident highlights the persistent alignment and safety challenges of AI models facing indirect injections from online content. The fix is not just a technical patch; it illustrates the major ethical and accountability stakes of deploying AI at scale.

Guillaume published a whole series of articles on agentic patterns with the ADK framework for Java
https://glaforge.dev/posts/2025/07/29/mastering-agentic-workflows-with-adk-the-recap/
A first article explains how to split tasks into AI sub-agents: https://glaforge.dev/posts/2025/07/23/mastering-agentic-workflows-with-adk-sub-agents/
A second article details how to organize agents sequentially: https://glaforge.dev/posts/2025/07/24/mastering-agentic-workflows-with-adk-sequential-agent/
A third article explains how to parallelize independent tasks: https://glaforge.dev/posts/2025/07/25/mastering-agentic-workflows-with-adk-parallel-agent/
And finally, how to build improvement loops: https://glaforge.dev/posts/2025/07/28/mastering-agentic-workflows-with-adk-loop-agents/
All of it in Java, of course. 🙂

Six weeks of code with Claude
https://blog.puzzmo.com/posts/2025/07/30/six-weeks-of-claude-code/
Orta shares his feedback after six weeks of daily Claude Code use, which has deeply changed the way he codes. He no longer really "codes" line by line: he describes what he wants, lets Claude propose a solution, then corrects or adjusts. This shifts the focus to the result rather than the implementation, like going from painting to polaroids. Claude proves especially useful for maintenance tasks: migrations, refactorings, code cleanup. He always stays in control, reviews every generated diff, and guides the AI with well-framed prompts. He notes that it takes a few weeks to get the hang of it: learning to slice up tasks and state expectations clearly. Simple tasks become near-instant, but complex tasks still require experience and judgment. Claude Code is seen as a very good copilot, but it does not replace the developer who understands the system as a whole. The main gain is faster feedback and a much shorter iteration loop. This kind of tool may well redefine how we think about and structure software development in the medium term.

Claude Code and MCP servers: how to turn your terminal into a superpowered assistant
https://touilleur-express.fr/2025/07/27/claude-code-et-les-serveurs-mcp-ou-comment-transformer-ton-terminal-en-assistant-surpuissant/
Nicolas continues his exploration of Claude Code and explains how MCP servers make Claude much more effective. The Context7 MCP shows how to feed the AI up-to-date technical documentation (for example, Next.js 15) to avoid hallucinations and errors. The Task Master MCP turns a requirements document (PRD) into atomic, estimated tasks organized into a work plan. The Playwright MCP drives browsers and runs E2E tests. The Digital Ocean MCP makes it easy to deploy the application to production. Not everything is ideal: quotas are exhausted within a few hours on a small application, and in some cases an experienced coder is still much more efficient doing things by hand. Nicolas follows up with a write-up on building an MVP in about 20 hours: https://touilleur-express.fr/2025/07/30/comment-jai-code-un-mvp-en-une-vingtaine-dheures-avec-claude-code/

Augmented development: a politically correct opinion, but still…
https://touilleur-express.fr/2025/07/31/le-developpement-augmente-un-avis-politiquement-correct-mais-bon/
Nicolas shares a nuanced (and slightly provocative) take on augmented development, where an AI like Claude Code assists the developer without replacing them. He rejects the idea that it is "too magical" or "too easy": it is a logical evolution of our craft, not a shortcut for the lazy. For him, a good developer is still someone who structures their thinking, knows how to frame a problem, break it down, and validate, even if the AI helps write code faster. He describes building an OAuth app that was tested, styled, and deployed in a few hours without ever leaving the terminal, thanks to Claude. This kind of tooling changes our relationship with time: we go from "I'll think about it" to "let me try a roughly working version right now". He openly likes this fast, imperfect approach: better a rough version shipped quickly than a project stuck in perfectionism. To him, the AI is a super intern: never tired, sometimes off the mark, but remarkably productive when well briefed. He concludes that "augmented development" does not replace good developers, but average developers had better get on board or risk being left behind.

ChatGPT launches study mode: step-by-step interactive learning
https://openai.com/index/chatgpt-study-mode/
OpenAI offers a study mode in ChatGPT that guides users step by step rather than giving the answer straight away. The mode aims to encourage active thinking and deep learning. It uses custom instructions to ask questions and provide explanations adapted to the user's level. Study mode helps manage cognitive load and stimulates metacognition. It offers structured answers to support the progressive understanding of a subject. Available now for signed-in users, it will be integrated into ChatGPT Edu. The goal is to turn ChatGPT into a genuine digital tutor that helps students absorb knowledge better. Gemini has reportedly just released a similar feature.

Launch of GPT-OSS by OpenAI
https://openai.com/index/introducing-gpt-oss/
https://openai.com/index/gpt-oss-model-card/
OpenAI launched GPT-OSS, its first open-weight model family since GPT-2. Two models are available, gpt-oss-120b and gpt-oss-20b, mixture-of-experts models designed for reasoning and agentic tasks. The models are distributed under the Apache 2.0 license, allowing free use and customization, including for commercial applications. gpt-oss-120b approaches the performance of OpenAI o4-mini, while gpt-oss-20b is comparable to o3-mini. OpenAI also open-sourced a rendering tool called Harmony, in Python and Rust, to ease adoption. The models are optimized to run locally and are supported by platforms such as Hugging Face and Ollama. OpenAI conducted safety research to ensure the models could not be fine-tuned for malicious uses in the biological, chemical, or cyber domains.

Anthropic launches Opus 4.1
https://www.anthropic.com/news/claude-opus-4-1
Anthropic released Claude Opus 4.1, an update to its language model. The new version focuses on improved performance in coding, reasoning, and research and data-analysis tasks. The model scored 74.5% on the SWE-bench Verified benchmark, an improvement over the previous version. It excels in particular at multi-file code refactoring and can carry out in-depth research. Claude Opus 4.1 is available to paying Claude users, as well as through the API, Amazon Bedrock, and Google Cloud's Vertex AI, at the same prices as Opus 4. It is presented as a drop-in replacement for Claude Opus 4, with higher performance and accuracy on real-world programming tasks.

OpenAI summer update: GPT-5 is out
https://openai.com/index/introducing-gpt-5/
Details:
https://openai.com/index/gpt-5-new-era-of-work/
https://openai.com/index/introducing-gpt-5-for-developers/
https://openai.com/index/gpt-5-safe-completions/
https://openai.com/index/gpt-5-system-card/
Major improvement in cognitive capabilities: GPT-5 shows markedly better reasoning, abstraction, and comprehension than previous models.
Two main variants: gpt-5-main, fast and efficient for general tasks; gpt-5-thinking, slower but specialized in complex tasks requiring deep reflection.
Built-in intelligent router: the system automatically selects the variant best suited to the task (fast or deliberate), without user intervention.
Even larger context window: GPT-5 can process longer volumes of text (up to 1 million tokens in some versions), useful for entire documents or projects.
Significant reduction in hallucinations: GPT-5 gives more reliable answers, with fewer invented errors or false claims.
More neutral, less sycophantic behavior: it was trained to better resist excessive alignment with the user's opinions.
Better at following complex instructions: GPT-5 understands long, implicit, or nuanced instructions better.
"Safe completions" approach: outright refusals are replaced with useful but safe answers; the model tries to respond cautiously rather than block.
Ready for large-scale professional use: optimized for enterprise work such as writing, programming, summarizing, automation, and task management.
Specific improvements for coding: GPT-5 is better at writing code, understanding complex software contexts, and using development tools.
Faster, smoother user experience: the system responds faster thanks to optimized orchestration between the sub-models.
Strengthened agentic capabilities: GPT-5 can serve as the basis for autonomous agents that accomplish goals with little human intervention.
Mature multimodality (text, image, audio): GPT-5 integrates multiple formats more fluidly within a single model.
Developer-focused features: clearer documentation, a unified API, more transparent and customizable models.
Greater contextual personalization: the system adapts better to the user's style, tone, and preferences without repeated instructions.
Optimized energy and hardware usage: thanks to the internal router, resources are used more efficiently according to task complexity.
Secure integration into the ChatGPT products: already deployed in ChatGPT, with immediate benefits for Pro and enterprise users.
A unified model for all uses: a single system that can go from casual conversation to scientific analysis or complex code.
Safety and alignment as priorities: GPT-5 was designed from the start to minimize abuse, bias, and undesirable behavior.
Still not AGI: OpenAI insists that despite its impressive capabilities, GPT-5 is not an artificial general intelligence.

No, juniors are not obsolete despite AI! (says GitHub)
https://github.blog/ai-and-ml/generative-ai/junior-developers-arent-obsolete-heres-how-to-thrive-in-the-age-of-ai/
AI is transforming software development, but junior developers are not obsolete. New learners are well positioned because they are already familiar with AI tools. The goal is to build skills for working with AI, not to be replaced; creativity and curiosity remain key human qualities. Five ways to stand out: use AI (e.g. GitHub Copilot) to learn faster, not just to code faster (tutor mode, temporarily disabling autocompletion); build public projects that demonstrate your skills (including AI skills); master the essential GitHub workflows (GitHub Actions, open source contributions, pull requests); sharpen your expertise by reviewing code (ask questions, look for patterns, take notes); debug smarter and faster with AI (e.g. Copilot Chat for explanations, fixes, tests). A minimal workflow sketch follows below.
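For readers new to the GitHub Actions workflows mentioned above, a minimal CI sketch for a Maven project (a generic example, not taken from the GitHub post):

```yaml
# .github/workflows/ci.yml: build and test every pull request
name: ci
on:
  pull_request:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '21'
      - run: mvn -B verify
```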
Writing your first AI agent with A2A on WildFly, by Emmanuel Hugonnet
https://www.wildfly.org/news/2025/08/07/Building-your-First-A2A-Agent/
Agent2Agent (A2A) protocol: an open standard for universal AI agent interoperability. It enables efficient communication and collaboration between agents from different vendors and frameworks, creating unified multi-agent ecosystems that automate complex workflows.
Purpose of the article: a guide to building a first A2A agent (a weather agent) in WildFly, using the A2A Java SDK for Jakarta Servers, the WildFly AI Feature Pack, an LLM (Gemini), and a Python tool (MCP). The agent conforms to A2A v0.2.5.
Prerequisites: JDK 17+, Apache Maven 3.8+, a Java IDE, a Google AI Studio API key, Python 3.10+, uv.
Steps to build the weather agent:
Creating the LLM service: a Java interface (WeatherAgent) that uses LangChain4j to interact with an LLM and a Python MCP tool (get_alerts and get_forecast functions).
Defining the A2A agent (via CDI): the Agent Card provides the agent's metadata (name, description, URL, capabilities, skills such as "weather_search"); the Agent Executor handles incoming A2A requests, extracts the user message, calls the LLM service, and formats the response.
Exposing the agent: registering a JAX-RS application for the endpoints.
Deployment and testing: setting up Google's A2A-inspector tool (in a Podman container), building the Maven project, configuring environment variables (e.g. GEMINI_API_KEY), and starting the WildFly server.
Conclusion: a minimal transformation turns an AI application into an A2A agent, enabling AI agents to collaborate and share information regardless of their underlying infrastructure.

Tooling

IntelliJ IDEA moves to a unified distribution
https://blog.jetbrains.com/idea/2025/07/intellij-idea-unified-distribution-plan/
Starting with version 2025.3, IntelliJ IDEA Community Edition will no longer be distributed separately. A single unified IntelliJ IDEA will bundle the features of the Community and Ultimate editions. The advanced Ultimate features will be available through a subscription. Users without a subscription will get a free tier richer than today's Community Edition. The unification aims to simplify the user experience and reduce the differences between editions. Community users will be migrated automatically to the unified version. Ultimate features can be enabled temporarily with a single click. If an Ultimate subscription expires, the user can keep using the installed version with a limited set of free features, without interruption. JetBrains presents this change as consistent with its commitment to open source and the needs of the community.

YAML anchor support in GitHub Actions
https://github.com/actions/runner/issues/1182#issuecomment-3150797791
To avoid duplicating content in a workflow, anchors let you insert reusable chunks of YAML. A feature awaited for years, and available in GitLab for a long time already. It was rolled out on August 4. Be careful not to overuse them, though: such documents are not that easy to read.

Gemini CLI adds custom commands, like Claude
https://cloud.google.com/blog/topics/developers-practitioners/gemini-cli-custom-slash-commands
But they are in TOML format, so they cannot be shared with Claude. 😞

Automate your AI workflows with Claude Code hooks
https://blog.gitbutler.com/automate-your-ai-workflows-with-claude-code-hooks/
Claude Code offers hooks that run scripts at different points in a session, for example at the start, when tools are used, or at the end. These hooks make it easy to automate tasks such as managing Git branches, sending notifications, or integrating with other tools. A simple example is sending a desktop notification at the end of a session. Hooks are configured through three separate JSON files depending on scope: user, project, or local. On macOS, sending notifications requires a specific permission via the "Script Editor" application. An up-to-date version of Claude Code is required to use hooks. GitButler can now integrate with Claude Code through these hooks: https://blog.gitbutler.com/parallel-claude-code/

JetBrains' Git client soon available standalone
https://lp.jetbrains.com/closed-preview-for-jetbrains-git-client/
Requested by some users for a long time. It would be a graphical client in the same vein as GitButler, SourceTree, and the like.

Apache Maven 4 is coming, and the mvnup utility will help you upgrade
https://maven.apache.org/tools/mvnup.html
It fixes known incompatibilities, cleans up redundancies and default values (versions, for example) that are unnecessary in Maven 4, reformats the POM according to Maven conventions, and more.

A GitHub Action for Gemini CLI
https://blog.google/technology/developers/introducing-gemini-cli-github-actions/
Google launched Gemini CLI GitHub Actions, an AI agent that works as a "code teammate" for GitHub repositories. The tool is free and designed to automate routine tasks such as issue triage, pull request review, and other development chores. It acts both as an autonomous agent and as a collaborator that developers can call on demand, notably by mentioning it in an issue or a pull request. It is built on Gemini CLI, an open source AI agent that brings the Gemini model directly into the terminal. It runs on the GitHub Actions infrastructure, which isolates processes in separate containers for security. Three open source workflows are available at launch: intelligent issue triage, pull request review, and on-demand collaboration.

No need for MCP, code is all you need
https://lucumr.pocoo.org/2025/7/3/tools/
Armin stresses that he is not a fan of the MCP (Model Context Protocol) in its current form: it lacks composability and demands too much context. He notes that for the same task (e.g. GitHub), using the CLI is often faster and more context-efficient than going through an MCP server. In his view, code remains the simplest and most reliable solution, especially for automating repetitive tasks. He prefers writing clear scripts to relying on LLM inference: it makes verification and maintenance easier and avoids subtle errors. For recurring tasks, if you automate them, it is better to do it with reusable code rather than letting the AI guess each time. He illustrates this by converting his entire blog from reStructuredText to Markdown: rather than using the AI directly, he asked Claude to generate a complete script, with AST parsing, file comparison, validation, and iteration. This LLM-to-code-to-LLM workflow (analysis and validation) gave him confidence in the final result while keeping a human in control of the process. He considers that MCP does not allow this kind of reliable automated pipeline, because it introduces too much inference and too much variation per call. For him, code remains the best way to keep control, reproducibility, and clarity in automated workflows.

MCP vs CLI…
https://www.async-let.com/blog/my-take-on-the-mcp-verses-cli-debate/
Cameron recounts his experience building the XcodeBuildMCP server, which helped him better understand the debate between serving the AI through MCP and letting the AI use the system's CLIs directly. In his view, CLIs remain preferable for expert developers who want control, transparency, performance, and simplicity. But MCP servers excel at complex workflows, persistent context, security constraints, and easier access for less experienced users. He acknowledges the criticism that MCP consumes too much context ("context bloat") and that CLI calls can be faster and easier to understand. However, he points out that many problems come from the quality of client implementations, not from the MCP protocol itself. A good MCP server can offer carefully designed tools that make the AI's life easier (for example, returning structured data rather than raw text to parse). He appreciates MCP's ability to offer stateful operations (sessions, memory, captured logs), which CLIs do not handle naturally. Some scenarios cannot work through a CLI at all (no accessible shell), whereas MCP, as a standalone protocol, remains usable by any client. His verdict: there is no universal solution; each context deserves its own evaluation, and neither MCP nor CLI should be imposed at all costs.

Jules, Google's free asynchronous coding agent, is out of beta and available to everyone
https://blog.google/technology/google-labs/jules-now-available/
Jules, the asynchronous coding agent, is now publicly available, powered by Gemini 2.5 Pro. During the beta phase: more than 140,000 code improvements and feedback from thousands of developers. Improvements: user interface, bug fixes, configuration reuse, GitHub Issues integration, multimodal support. Gemini 2.5 Pro improves coding plans and code quality. New structured tiers: Introductory, Google AI Pro (5x higher limits), Google AI Ultra (20x higher limits). Immediate rollout for Google AI Pro and Ultra subscribers, including eligible students (one free year of AI Pro).

Architecture

Putting a value on technical debt reduction: a real challenge
https://www.lemondeinformatique.fr/actualites/lire-valoriser-la-reduction-de-la-dette-technique-mission-impossible-97483.html
Technical debt is a poorly understood concept that is hard to value financially in front of top management. CIOs struggle to measure the debt precisely, allocate dedicated budgets, and demonstrate a clear return on investment. This difficulty limits the prioritization of debt reduction projects against other initiatives deemed more urgent or strategic. Some companies are gradually integrating technical debt management into their development processes. Approaches such as Software Crafting aim to improve code quality to limit the accumulation of debt. The lack of suitable tools for measuring progress makes the effort even harder. In short, reducing technical debt remains a delicate mission that requires innovation, method, and internal awareness.

Don't mock me…
https://martinelli.ch/why-i-dont-use-mocking-frameworks-and-why-you-might-not-need-them-either/
https://blog.tremblay.pro/2025/08/not-using-mocking-frmk.html
The author prefers handwritten fakes or stubs to mocking frameworks such as Mockito or EasyMock. Mocking frameworks isolate the code, but they often lead to strong coupling between tests and implementation details, and to tests that validate the mock rather than the real behavior. Two fundamental principles guide his approach: favor a functional design with pure business logic (side-effect-free functions), and control your test data, for example by using real databases (via Testcontainers) rather than simulating them. In his practice, the only cases where an external mock is used involve external HTTP services, and even then he prefers to fake only the transport rather than the business behavior. The result: tests become simpler, faster to write, more reliable, and less fragile as the code evolves. The article concludes that if you design your code properly, you may very well not need mocking frameworks at all. Henri Tremblay's blog post in response adds some nuance to this feedback.

Methodologies

What does it mean to be a good PM (Product Manager)?
An article by Chris Perry, a PM at Google: https://thechrisperry.substack.com/p/being-a-good-pm-at-google
The PM role is hard: a demanding job where you have to be the most invested person on the team to ensure success.
1. Shipping is all that matters: the absolute priority. Better to ship and iterate quickly than to chase theoretical perfection; a shipped product lets you learn from reality.
2. Instill a longing for the open sea: the best way to move a project forward is to inspire the team with a strong, desirable vision. Show the "why".
3. Use your product every day: non-negotiable for success. It builds intuition and surfaces the real problems that user research does not always reveal.
4. Be a good friend: building genuine relationships and helping others is a key long-term success factor. Trust is the foundation of fast execution.
5. Give more than you take: always look to help and collaborate; cooperation is the optimal long-term strategy. Don't be possessive about your ideas.
6. Use the right lever: to get a decision, identify the right person with the power to say "yes", and don't let yourself be blocked by non-decision-makers' opinions.
7. Only go where you add value: fill the gaps, do the thankless work nobody wants. Also know when to step aside (meetings, projects) when you are not useful.
8. Success has many parents, failure is an orphan: if the product succeeds, it is a team success; if it fails, it is the PM's fault. You have to own the final responsibility.
Conclusion: the PM is a conductor. They cannot play every instrument, but their role is to orchestrate everyone's work with humility to create something harmonious.

Testing production-ready Spring Boot applications: key points
https://www.wimdeblauwe.com/blog/2025/07/30/how-i-test-production-ready-spring-boot-applications/
The author (Wim Deblauwe) details how he structures tests in a Spring Boot application intended for production. The project automatically includes the spring-boot-starter-test dependency, which bundles JUnit 5, AssertJ, Mockito, Awaitility, JsonAssert, XmlUnit, and Spring's testing tools. Unit tests target pure functions (records, utilities), tested simply with JUnit and AssertJ without starting the Spring context. Use-case tests orchestrate the business logic, generally through use cases that rely on one or more data repositories. JPA/repository tests verify interactions with the database through CRUD operations (with a Spring context for the persistence layer). Controller tests exercise the web endpoints (e.g. @WebMvcTest), often with MockBean to simulate dependencies. Full integration tests start the whole Spring context (@SpringBootTest) to test the application end to end. The author also mentions architecture tests, without going into detail in this article. The result: a pyramid of tests ranging from the fastest (unit) to the most complete (integration), guaranteeing reliability, speed, and coverage without unnecessary overhead.

Security

Bitwarden offers an MCP server so agents can access passwords
https://nerds.xyz/2025/07/bitwarden-mcp-server-secure-ai/
Bitwarden introduces an MCP (Model Context Protocol) server for securely integrating AI agents into password management workflows. The server is local-first: all interactions and sensitive data stay on the user's machine, preserving the zero-knowledge encryption principle. The integration goes through the Bitwarden CLI, letting AI agents generate, retrieve, modify, and lock credentials through secure commands. The server can be self-hosted for maximum control over the data. The MCP protocol is an open standard for connecting AI agents uniformly to data sources and third-party tools, simplifying integrations between LLMs and applications. A demo with Claude (Anthropic's AI agent) shows the AI interacting with the Bitwarden vault: checking status, unlocking the vault, generating or modifying credentials, all without direct human intervention. Bitwarden emphasizes a security-first approach but acknowledges the risks of autonomous AI; using a private, local LLM is strongly recommended to limit vulnerabilities.
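For context, the kind of Bitwarden CLI calls the MCP server wraps looks roughly like this (a sketch of existing bw commands; how the MCP tools map onto them is an assumption):

```sh
bw unlock                      # unlock the local vault; prints a session token
export BW_SESSION="<token>"    # subsequent commands use the session
bw get password github.com     # retrieve a credential from the vault
bw generate -ulns --length 24  # generate a strong password
bw lock                        # lock the vault again
```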
NVIDIA has a critical security flaw
https://www.wiz.io/blog/nvidia-ai-vulnerability-cve-2025-23266-nvidiascape
It is a container escape flaw in the NVIDIA Container Toolkit. The severity is rated critical, with a CVSS score of 9.0. The vulnerability lets a malicious container obtain full root access on the host. The root cause is a misconfiguration of OCI hooks in the toolkit. Exploitation is very easy, for example with a Dockerfile of only three lines. The main risk is the compromise of isolation between different customers on shared GPU cloud infrastructures. Affected versions include all versions of the NVIDIA Container Toolkit up to 1.17.7 and of the NVIDIA GPU Operator up to 25.3.1. To mitigate the risk, it is recommended to update to the latest fixed versions; in the meantime, some of the problematic hooks can be disabled in the configuration to limit exposure. This flaw highlights the importance of hardening shared GPU environments and AI container management.

The Tea app data leak: the essentials
https://knowyourmeme.com/memes/events/the-tea-app-data-leak
Tea is an application launched in 2023 that lets women leave anonymous reviews about men they have met. In July 2025, a major leak exposed about 72,000 sensitive images (selfies, identity documents) and more than 1.1 million private messages. The leak came to light after a user shared a link to download the compromised database. The affected data mostly concerned users who signed up before February 2024, when the application migrated to a more secure infrastructure. In response, Tea plans to offer identity protection services to the affected users.

Flaw in the npm package is: an expanding supply chain attack
https://socket.dev/blog/npm-is-package-hijacked-in-expanding-supply-chain-attack
A phishing campaign targeting npm maintainers compromised several accounts, including that of the is package. Compromised versions of is (notably 3.3.1 and 5.0.0) contained a JavaScript malware loader targeting Windows systems. The malware gave attackers remote access over WebSocket, potentially allowing arbitrary code execution. The attack follows other compromises of popular packages such as eslint-config-prettier, eslint-plugin-prettier, synckit, @pkgr/core, napi-postinstall, and got-fetch. All these packages were published without any commit or PR on their respective GitHub repositories, a sign of unauthorized access to maintainer tokens. The spoofed domain npnjs.com was used to collect access tokens through deceptive phishing emails. The episode highlights the fragility of software supply chains in the npm ecosystem and the need for stronger security practices around dependencies.

Automated security reviews with Claude Code
https://www.anthropic.com/news/automate-security-reviews-with-claude-code
Anthropic launched automated security features for Claude Code, its command-line AI coding assistant. These features respond to the growing need to keep code secure while AI tools dramatically accelerate software development. The /security-review command: developers can run it in their terminal to ask Claude to identify security vulnerabilities, including SQL injection risks, cross-site scripting (XSS), authentication and authorization flaws, and insecure data handling; Claude can also suggest and implement fixes. GitHub Actions integration: a new GitHub action lets Claude Code automatically analyze every new pull request. The tool reviews code changes for vulnerabilities, applies customizable rules to filter out false positives, and comments directly on the pull request with the detected issues and recommended fixes. These features are designed to create a consistent security review process and integrate with existing CI/CD pipelines, ensuring that no code reaches production without a baseline security review.

Law, society and organization

Google hires Windsurf's key people
https://www.blog-nouvelles-technologies.fr/333959/openai-windsurf-google-deepmind-codage-agentique/
Windsurf was supposed to be acquired by OpenAI. Google is not making an acquisition offer but is poaching a few key Windsurf people, including its CEO. Windsurf therefore remains independent, but without some of its brains. The new leaders are the former heads of sales, so it is no longer really a tech-driven company. Why did the 3 billion dollar deal fall through? Nobody knows, but divergence over technological independence is possibly a factor. The departing people will work at DeepMind on agentic coding.

Opinion article: https://www.linkedin.com/pulse/dear-people-who-think-ai-low-skilled-code-monkeys-future-jan-moser-svade/
Jan Moser criticizes those who think AI and low-skilled developers can replace competent software engineers. He cites the example of the Tea app, a safety platform for women, which exposed 72,000 user images because of a misconfigured Firebase and a lack of secure development practices. He stresses that the absence of automated checks and good security practices is what allowed this data leak. Moser warns that tools like AI cannot compensate for missing software engineering skills, notably in security, error handling, and code quality. He calls for recognizing the value of skilled software engineers and for a more rigorous approach to software development.

YouTube rolls out age-estimation technology to identify teenagers in the United States
https://techcrunch.com/2025/07/29/youtube-rolls-out-age-estimatation-tech-to-identify-u-s-teens-and-apply-additional-protections/
A very topical subject, especially in the UK but not only there. YouTube is starting to deploy AI-based age-estimation technology to identify teenage users in the United States, regardless of the age declared at sign-up. The technology analyzes various behavioral signals, such as watch history, the categories of videos viewed, and the age of the account. When a user is identified as a teenager, YouTube applies additional protections, notably: disabling personalized ads; enabling digital wellbeing tools, such as screen time and bedtime reminders; limiting repeated viewing of sensitive content, such as content related to body image. A user incorrectly identified as a minor can verify their age with a government ID, a credit card, or a selfie. This initial rollout covers a small group of users in the United States and will be extended gradually. The initiative is part of YouTube's efforts to strengthen the safety of young users online.

Mistral AI: a contribution to a global environmental standard for AI
https://mistral.ai/news/our-contribution-to-a-global-environmental-standard-for-ai
Mistral AI carried out the first complete life cycle analysis of an AI model, in collaboration with several partners. The study quantifies the environmental impact of the Mistral Large 2 model in terms of greenhouse gas emissions, water consumption, and resource depletion. The training phase generated 20.4 kilotonnes of CO₂ equivalent, consumed 281,000 m³ of water, and used 660 kg Sb-eq (mineral resource depletion). For a 400-token response, the marginal impact is small but not negligible: 1.14 grams of CO₂, 45 mL of water, and 0.16 mg of antimony equivalent. Mistral proposes three indicators for assessing this impact: the absolute impact of training, the marginal impact of inference, and the ratio of inference to the total life cycle impact. The company underlines the importance of choosing a model suited to the use case to limit the environmental footprint. Mistral calls for more transparency and for the adoption of international standards allowing clear comparisons between models.

AI promised to make us more efficient… it mostly makes us work more
https://afterburnout.co/p/ai-promised-to-make-us-more-efficient
AI tools were supposed to automate the tedious tasks and free up time for strategic and creative work. In reality, the time saved is often immediately reinvested in other tasks, creating overload. Users believe they are more productive with AI, but the data contradicts that impression: one study shows that developers using AI take 19% longer to complete their tasks. The DORA 2024 report observes an overall drop in team performance as AI usage increases: -1.5% throughput and -7.2% delivery stability for a +25% increase in AI adoption. AI does not reduce the mental load, it displaces it: writing prompts, checking dubious results, constant adjustments… all of which is exhausting and limits real focus time. This cognitive overload creates a form of mental debt: you do not really save time, you pay for it in another way. The real problem comes from our productivity culture, which pushes us to optimize everything, even at the cost of fueling burnout. Three concrete avenues: rethink productivity not as time saved but as energy preserved; be selective about AI tools, based on how they actually feel rather than on the hype; accept the J-curve: AI can help, but it requires deep adjustments to produce real gains. The real productivity hack? Sometimes it is slowing down, to stay lucid and sustainable.

Conferences

MCP Summit Europe https://mcpdevsummit.ai/

JavaOne returns in 2026 https://inside.java/2025/08/04/javaone-returns-2026/
JavaOne, the conference dedicated to the Java community, makes its big return to the Bay Area from March 17 to 19, 2026. After the success of the 2025 edition, this return continues the conference's original mission: bringing the community together to learn, collaborate, and innovate.

The conference list comes from Developers Conferences Agenda/List by Aurélie Vache and contributors:
August 25–27, 2025: SHAKA Biarritz - Biarritz (France)
September 5, 2025: JUG Summer Camp 2025 - La Rochelle (France)
September 12, 2025: Agile Pays Basque 2025 - Bidart (France)
September 15, 2025: Agile Tour Montpellier - Montpellier (France)
September 18–19, 2025: API Platform Conference - Lille (France) & Online
September 22–24, 2025: Kernel Recipes - Paris (France)
September 22–27, 2025: La Mélée Numérique - Toulouse (France)
September 23, 2025: OWASP AppSec France 2025 - Paris (France)
September 23–24, 2025: AI Engineer Paris - Paris (France)
September 25, 2025: Agile Game Toulouse - Toulouse (France)
September 25–26, 2025: Paris Web 2025 - Paris (France)
September 30–October 1, 2025: PyData Paris 2025 - Paris (France)
October 2, 2025: Nantes Craft - Nantes (France)
October 2–3, 2025: Volcamp - Clermont-Ferrand (France)
October 3, 2025: DevFest Perros-Guirec 2025 - Perros-Guirec (France)
October 6–7, 2025: Swift Connection 2025 - Paris (France)
October 6–10, 2025: Devoxx Belgium - Antwerp (Belgium)
October 7, 2025: BSides Mulhouse - Mulhouse (France)
October 7–8, 2025: Agile en Seine - Issy-les-Moulineaux (France)
October 8–10, 2025: SIG 2025 - Paris (France) & Online
October 9, 2025: DevCon #25: quantum computing - Paris (France)
October 9–10, 2025: Forum PHP 2025 - Marne-la-Vallée (France)
October 9–10, 2025: EuroRust 2025 - Paris (France)
October 16, 2025: PlatformCon25 Live Day Paris - Paris (France)
October 16, 2025: Power 365 - 2025 - Lille (France)
October 16–17, 2025: DevFest Nantes - Nantes (France)
October 17, 2025: Sylius Con 2025 - Lyon (France)
October 17, 2025: ScalaIO 2025 - Paris (France)
October 17–19, 2025: OpenInfra Summit Europe - Paris (France)
October 20, 2025: Codeurs en Seine - Rouen (France)
October 23, 2025: Cloud Nord - Lille (France)
October 30–31, 2025: Agile Tour Bordeaux 2025 - Bordeaux (France)
October 30–31, 2025: Agile Tour Nantais 2025 - Nantes (France)
October 30–November 2, 2025: PyConFR 2025 - Lyon (France)
November 4–7, 2025: NewCrafts 2025 - Paris (France)
November 5–6, 2025: Tech Show Paris - Paris (France)
November 6, 2025: dotAI 2025 - Paris (France)
November 6, 2025: Agile Tour Aix-Marseille 2025 - Gardanne (France)
November 7, 2025: BDX I/O - Bordeaux (France)
November 12–14, 2025: Devoxx Morocco - Marrakech (Morocco)
November 13, 2025: DevFest Toulouse - Toulouse (France)
November 15–16, 2025: Capitole du Libre - Toulouse (France)
November 19, 2025: SREday Paris 2025 Q4 - Paris (France)
November 19–21, 2025: Agile Grenoble - Grenoble (France)
November 20, 2025: OVHcloud Summit - Paris (France)
November 21, 2025: DevFest Paris 2025 - Paris (France)
November 27, 2025: DevFest Strasbourg 2025 - Strasbourg (France)
November 28, 2025: DevFest Lyon - Lyon (France)
December 1–2, 2025: Tech Rocks Summit 2025 - Paris (France)
December 4–5, 2025: Agile Tour Rennes - Rennes (France)
December 5, 2025: DevFest Dijon 2025 - Dijon (France)
December 9–11, 2025: APIdays Paris - Paris (France)
December 9–11, 2025: Green IO Paris - Paris (France)
December 10–11, 2025: Devops REX - Paris (France)
December 10–11, 2025: Open Source Experience - Paris (France)
December 11, 2025: Normandie.ai 2025 - Rouen (France)
January 28–31, 2026: SnowCamp 2026 - Grenoble (France)
February 2–6, 2026: Web Days Convention - Aix-en-Provence (France)
February 3, 2026: Cloud Native Days France 2026 - Paris (France)
February 12–13, 2026: Touraine Tech #26 - Tours (France)
April 22–24, 2026: Devoxx France 2026 - Paris (France)
April 23–25, 2026: Devoxx Greece - Athens (Greece)
June 17, 2026: Devoxx Poland - Krakow (Poland)

Contact us

To react to this episode, come discuss on the Google group https://groups.google.com/group/lescastcodeurs
Contact us via X/Twitter https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com
Record a crowdcast or ask a crowdquestion
Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs
All episodes and all the info on https://lescastcodeurs.com/
Topics covered in this episode: rumdl - A Markdown Linter written in Rust; Coverage 7.10.0: patch; aioboto3; You might not need a Python class; Extras; Joke. Watch on YouTube. About the show. Connect with the hosts: Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky); Brian: @brianokken@fosstodon.org / @brianokken.bsky.social; Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky). Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too. Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list; we'll never share it. Michael #1: rumdl - A Markdown Linter written in Rust, via Owen Lamont. Supports toml file config settings. Install via uv tool install rumdl. ⚡️ Built for speed with Rust - significantly faster than alternatives
Today's guests are connected by a contract to deliver the internal data platform EasyLife. We look at this use case from the perspective of both the supplier (MoroSystems) and the customer (Publicis Groupe). Tomáš Žlůva is Executive Innovation & Investment Director at the communications group Publicis Groupe Czech Republic. Publicis Groupe specializes in creative, media, data and technology, providing end-to-end services and tailored client teams. Eduard Dvořák is Director of Business Development at MoroSystems, a supplier of technological innovation that develops custom software solutions.
If, over the weekend, still under the impression of last week's rains, you went out looking for mushrooms, you may have found fewer boletes than you expected. It takes time, says mycologist Tereza Tejklová. Reporter Tomáš Lörincz joined her for a walk in the Hradec Králové region, starting at the edge of the forest.
Late cherry varieties are ripening this week in the Hradec Králové region. Growers who focus on this fruit are in a real hurry: everything has to be harvested within a few days. Reporter Tomáš Lörincz visited one of the orchards.
Campsite operators in the Hradec Králové region have entered the best period of the year: the summer holidays. Reservations are filling up, interest is growing, and revenues are growing with it. Reporter Tomáš Lörincz checked whether you still have a chance of finding a free spot at the campsites.
Extensive construction work has begun on the thoroughfare in Jaroměř in the Náchod district, on the main route from Hradec Králové towards Náchod and Poland. Driving through the town now means delays of tens of minutes. Major traffic complications await the town, but drivers can avoid them. Reporter Tomáš Lörincz found out the details directly in Jaroměř.
The predator-fishing season opens for anglers today. Until the end of the year they can catch pike, catfish, asp or zander, fish that are protected from January 1 to June 15. Reporter Tomáš Lörincz gathered the details right at the waterside.
What is the best way to record the Python dependencies for the reproducibility of your projects? What advantages will lock files provide for those projects? This week on the show, we welcome back Python Core Developer Brett Cannon to discuss his journey to bring PEP 751 and the pylock.toml file format to the community.
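As a sketch of what consuming such a lock file could look like, the snippet below reads a pylock.toml with the standard library's TOML parser (Python 3.11+) and lists the pinned packages. The [[packages]] table layout follows PEP 751; treat the exact key names as an assumption to check against the final spec.

```python
import tomllib  # stdlib TOML parser, Python 3.11+

# Minimal sketch: list the packages recorded in a PEP 751 lock file.
# Assumes the [[packages]] array-of-tables layout the PEP defines.
with open("pylock.toml", "rb") as f:
    lock = tomllib.load(f)

for pkg in lock.get("packages", []):
    print(pkg["name"], pkg.get("version", "(no pinned version)"))
```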
At yesterday's session of the regional assembly, a previously announced protest by dissatisfied bus drivers from the BusLine company was expected. In the end, however, the session passed without protests. Reporter Tomáš Lörincz, who followed the session, adds details and explanation.
A journey through flavors and ancient villages on African islands still untouched by mass tourism, and the most spectacular locations of the upcoming fashion shows around Italy. In the Weekend edition of Start we also talk about the Diriyah project, which in Saudi Arabia looks to the future of culture, shopping and luxury hospitality. Plus two appointments not to be missed in the coming days.
The new hospice now being built in Stěžery near Hradec Králové must have its own access road. However, the regional charity, the municipality and the Hradec Králové Region have not yet agreed on building it. Reporter Tomáš Lörincz looked into this problem, which could complicate and delay the entire construction.
Brandon Liu is an open source developer and creator of the Protomaps basemap project. We talk about how static maps help developers build sites that last, the PMTiles file format, the role of OpenStreetMap, and his experience funding and running an open source project full time.

Protomaps
- Protomaps
- PMTiles (file format used by Protomaps)
- Self-hosted slippy maps, for novices (like me)
- Why Deploy Protomaps on a CDN

User examples
- Flickr
- Pinball Map
- Toilet Map

Related projects
- OpenStreetMap (dataset Protomaps is based on)
- Mapzen (former company that released details on what to display based on zoom levels)
- Mapbox GL JS (Mapbox-developed, source-available map rendering library)
- MapLibre GL JS (open source fork of Mapbox GL JS)

Other links
- HTTP range requests (MDN)
- Hilbert curve

Transcript
You can help correct transcripts on GitHub.

Intro

[00:00:00] Jeremy: I'm talking to Brandon Liu. He's the creator of Protomaps, which is a way to easily create and host your own maps. Let's get into it.

[00:00:09] Brandon: Hey, so thanks for having me on the podcast. So I'm Brandon. I work on an open source project called Protomaps. What it really is, is if you're a front end developer and you ever wanted to put maps on a website or on a mobile app, then Protomaps is sort of an open source solution for doing that that I hope is way easier to use than a lot of other open source projects.

Why not just use Google Maps?

[00:00:36] Jeremy: A lot of people are gonna be familiar with Google Maps. Why should they worry about whether something's open source? Why shouldn't they just go and use the Google Maps API?

[00:00:47] Brandon: So Google Maps is an awesome thing, it's an awesome product. Probably one of the best tech products ever, right? And just to have a map that tells you what restaurants are open, and something that I use all the time, especially when you're traveling, it has all that data. And the most amazing part is that it's free for consumers, but it's not necessarily free for developers. Like if you wanted to embed that map onto your website or app, that usually has an API cost, which still has a free tier and is affordable. But one motivation, one basic reason to use open source is if you have some project that doesn't really fit into that pricing model. You know, like where you have to pay the cost of Google Maps, you have a side project, a nonprofit, that's one reason. But there's lots of other reasons related to flexibility or customization where you might want to use open source instead.

Protomaps examples

[00:01:49] Jeremy: Can you give some examples where people have used Protomaps and where that made sense for them?

[00:01:56] Brandon: I follow a lot of the use cases, and I also don't know about a lot of them, because I don't have an API where I can track a hundred percent of the users. Some of them use the hosted version, but I would say most of them probably use it on their own infrastructure. One of the cool projects I've been seeing is called Toilet Map. And what Toilet Map is: if you're in the UK and you want to find a public restroom, it maps out, sort of crowdsourced, all of the public restrooms. And that's important for a lot of people; if they have health issues, they need to find that information. And there are a lot of different projects in the same vein. There's another one called Pinball Map, which is sort of a hobby project to find all the pinball machines in the world. And they wanted to have a customized map that fit in with their theme of pinball.
So these sorts of really cool indie projects are the ones I'm most excited about.

Basemaps vs Overlays

[00:02:57] Jeremy: And if we talk about, like, the pinball map as an example, there's this concept of a basemap and then there's the things that you lay on top of it. What is a basemap, and are the pinball locations part of it, or is that something separate?

[00:03:12] Brandon: It's usually something separate. The example I usually use is if you go to a real estate site, like Zillow, you'll open up the map of Seattle and it has a bunch of pins showing all the houses, and then it has some information beneath it. That information beneath it is like labels telling you this neighborhood is Capitol Hill, or there is a park here. But all that information is common to a lot of use cases, and it's not specific to real estate. So I think usually that's the distinction people use in the industry between a base map versus your overlay. The overlay is the data for your product or your company, while the base map is something you could get from Google or from Protomaps or from Apple or from Mapbox, that kind of thing.

PMTiles for hosting the basemap and overlays

[00:03:58] Jeremy: And so Protomaps in particular is responsible for the base map, and that information includes things like the streets and the locations of landmarks and things like that. Where is all that information coming from?

[00:04:12] Brandon: So the base map information comes from a project called OpenStreetMap. And I would also point out that Protomaps is sort of an ecosystem. You can also put your overlay data into a format called PMTiles, which is sort of the core of what Protomaps is. So it can really do both. It can transform your data into the PMTiles format, which you can host, and you can also host the base map. So you kind of have both of those sides of the product in one solution.

[00:04:43] Jeremy: And so when you say you have both, are you saying that the PMTiles file can have the base map in one file and then you would have the data you're laying on top in another file? Or what are you describing there?

[00:04:57] Brandon: That's usually how I recommend to do it. Oftentimes there'll be sort of like a really big basemap, 'cause it has all of that data about, like, where the rivers are. Whereas if you want to put your map of toilets or park benches or pickleball courts on top, that's another file. But those are all just assets you can move around, like JSON or CSV files.

Statically Hosted

[00:05:19] Jeremy: And I think one of the things you mentioned was that your goal was to make Protomaps, or the use of these PMTiles files, easy to use. What does that look like for a developer? I wanna host a map. What do I actually need to put on my servers?

[00:05:38] Brandon: So my usual pitch is that basically, if you know how to use S3 or cloud storage, then you know how to deploy a map. And that, I think, is the main sort of differentiation from most open source projects. Like a lot of them, they call themselves some sort of self-hosted solution. But I've actually avoided using the term self-hosted, because I think in most cases that implies a lot of complexity. Like you have to log into a Linux server, or you have to use Kubernetes or some sort of Docker thing. What I really want to emphasize is the idea that, for Protomaps, it's self-hosted in the same way CSS is self-hosted. So you don't really need a service from Amazon to host the JSON files or CSV files. It's really just a static file.
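As a concrete sketch of that pitch: "deploying a map" can be as small as copying two static assets into a bucket. The example below uses boto3; the bucket and file names are invented for illustration.

```python
import boto3

# "Deploying a map": upload the PMTiles archive plus any overlay data
# (GeoJSON/CSV). No server process is involved; any static host works.
s3 = boto3.client("s3")

s3.upload_file(
    "basemap.pmtiles", "my-map-bucket", "maps/basemap.pmtiles",
    ExtraArgs={"ContentType": "application/octet-stream"},
)
s3.upload_file(
    "toilets.geojson", "my-map-bucket", "maps/toilets.geojson",
    ExtraArgs={"ContentType": "application/geo+json"},
)
```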
[00:06:32] Jeremy: When you say static file, that means you could use any static web host to host your HTML file, your JavaScript that actually renders the map. And then you have your PMTiles files, and you're not running a process or anything, you're just putting your files on a static file host.

[00:06:50] Brandon: Right. So I think if you're a developer, you can also argue a static file server is a server. It's, you know, it's the cloud, it's just someone else's computer. It's really just nginx under the hood. But I think static storage is sort of special. If you look at things like static site generators, like Jekyll or Hugo, they're really popular because they're a commodity, or the storage is a commodity. And you can take your blog, make it a Jekyll blog, hosted on S3. One day, Amazon's like, we're charging three times as much, so you can move it to a different cloud provider. And that's all vendor neutral. So I think that's really the special thing about static storage as a primitive on the web.

Why running servers is a problem for resilience

[00:07:36] Jeremy: Was there a prior experience you had? Like, you've worked with maps for a very long time. Were there particular difficulties you had where you said, I just gotta have something that can be statically hosted?

[00:07:50] Brandon: That's sort of exactly why I got into this. I've been working sort of in and around the map space for over a decade, and Protomaps is really me trying to solve the same problem I've had over and over again in the past, just once and forever, right? Because once this problem is solved, I don't need to deal with it again in the future. So I've worked at a couple of different companies before, mostly as a contractor: for a humanitarian nonprofit, for a design company doing things like web applications to visualize climate change, or even for museums, like digital signage for museums. And oftentimes they had some sort of data visualization component, but there was always the challenge of how to store and also distribute that data, and there weren't really great open source solutions for it. So just for map data, that's really what motivated that design for Protomaps.

[00:08:55] Jeremy: And in those projects in the past, were those things where you had to run your own server, run your own database, things like that?

[00:09:04] Brandon: Yeah, and oftentimes we did. We would spin up an EC2 instance for maybe one client, and then we would have to host this server serving map data forever. Maybe the client goes away, or, I guess it's good for business if you can sign some sort of long-term support deal with that client saying, hey, you know, we're done with the project, but you can pay us to maintain the EC2 server for the next 10 years. And that's attractive, but it's also sort of a pain, because usually what happens is, if people are given the choice, like a developer, between either I can manage the server on EC2 or on Rackspace or Hetzner or whatever, or I can go pay a SaaS to do it, in most cases businesses will choose to pay the SaaS. So that's really what creates a sort of lock-in: given this choice between running the server or paying the SaaS, businesses will almost always go and pay the SaaS.

[00:10:05] Jeremy: Yeah.
And in this case, you either find some kind of free hosting or low-cost hosting just to host your files, and you upload the files, and then you're good from there. You don't need to maintain anything.

[00:10:18] Brandon: Exactly, and that's really the ideal use case. So I have some users, these climate science consulting agencies, and they might have a one-off project where they have to generate the data once; but instead of having to maintain this server for the lifetime of that project, they just have a file on S3 and, like, who cares? If that costs a couple dollars a month to run, that's fine. But it's not like S3 is gonna be deprecated, or gonna be on an insecure version of Ubuntu or something. So that's really the ideal set of constraints for using Protomaps.

[00:10:58] Jeremy: Yeah. Something this also makes me think about is the resilience of sites, like remaining online. Because I interviewed Kyle Drake, who runs Neocities, which is like a modern version of GeoCities. And if I remember correctly, he was mentioning how a lot of old websites from that time, if they were running a server backend, like they were running PHP or something like that, if you were to try to go to those sites now, they're pretty much all dead, because there needed to be someone dedicated to running a Linux server, making sure things were patched, and so on and so forth. But for static sites, like the ones that used to be hosted on GeoCities, you can go to the Internet Archive or other websites and they were just files, right? You can bring 'em right back up, and if anybody just puts 'em on a web server, then you're good. They're still alive.

Case study of newsrooms preferring static hosting

[00:11:53] Brandon: Yeah, exactly. One place that's kind of surprising, but makes sense, where this comes up is newspapers, actually. Some of the users using Protomaps are the Washington Post. And the reason they use it is not necessarily because they don't want to pay for a SaaS like Google, but because if they make an interactive story, they have to guarantee that it still works in a couple of years. And that's a policy decision from the editorial board, which is: you can't write an article if people can't view it in five years. But if your interactive data story is reliant on a third-party API, and that third-party API becomes deprecated, or it changes the pricing, or it, you know, gets acquired, then your journalism story is not gonna work anymore. So I have seen really good uptake among local newsrooms, and even big ones, to use things like Protomaps just because it makes sense for the requirements.

Working on Protomaps as an open source project for five years

[00:12:49] Jeremy: How long have you been working on Protomaps and the parts that it's made up of, such as PMTiles?

[00:12:58] Brandon: I've been working on it for about five years, maybe a little more than that. It's sort of my pandemic-era project. But the PMTiles part, which is really the heart of it, only came in about halfway.

Why not make a SaaS?

[00:13:13] Brandon: So honestly, when I first started it, I thought it was gonna be another SaaS, and then I looked at it and looked at what the environment was around it. And I'm like, uh, so I don't really think I wanna do that.

[00:13:24] Jeremy: When you say you looked at the environment around it, what do you mean? Why did you decide not to make it a SaaS?

[00:13:31] Brandon: Because there already is a lot of SaaS out there.
And I think the opportunity of making something that is unique in terms of those use cases, like I mentioned with newsrooms, was clear. It was clear that there was some other solution that could be built that would fit these needs better, while if it was a SaaS, there are plenty of those out there. And I don't necessarily think that they're well differentiated. A lot of them all use OpenStreetMap data, and it seems like they mainly compete on price. It's like, who can build the best three-column pricing model? And then once you do that, you need to build billing and metrics and authentication, and those problems don't really interest me. So I think, although I acknowledge that the indie hacker ethos now is to build a SaaS product with a monthly subscription, that's something I very much chose not to do, even though it is for sure the best way to build a business.

[00:14:29] Jeremy: Yeah, I mean, I think a lot of people can appreciate that perspective, because it's almost like we have SaaS overload, right? Where you have so many little bills for your project, where you're like, another $5 a month, another $10 a month. Or if you're a business, right? You add a bunch of zeros, and at some point it's just, how many of these are we gonna stack on here?

[00:14:53] Brandon: Yeah. And honestly, I really think that as programmers, we're not really great at choosing how to spend money. Like a $10 SaaS, that's like nothing, you know? So I can go to Starbucks and I can buy a pumpkin spice latte, and that's like $10 basically now, right? And I'm able to make that consumer choice in an instant, just to spend money on that. But then if you're like, oh, spend $10 on a SaaS that somebody put a lot of work into, then you're like, oh, that's too expensive, I could just do it myself. So I'm someone that also subscribes to a lot of SaaS products, and I think for a lot of things it's a great fit.

Many open source SaaS projects are not easy to self host

[00:15:37] Brandon: But there's always this tension between an open source project that you might be able to run yourself and a SaaS. And I think a lot of projects are at different parts of the spectrum. But for Protomaps, it's very much like I'm trying to move maps to being something that is so easy to run yourself that anyone can do it.

[00:16:00] Jeremy: Yeah, and I think you can really see it: there's a few SaaS projects that are successful and they're open source, but then you go to look at the self-hosting instructions, and it's either really difficult to find, and when you find it the instructions maybe don't work, or it's really complicated. So I think doing the opposite with Protomaps, as a user, I'm sure we're all appreciative. But I wonder, in terms of trying to make money, if that's difficult.

[00:16:30] Brandon: No, for sure. It is not a good way to make money, because I think the ideal situation for an open source project that wants to make money is that the product itself is fundamentally complicated, to where people are scared to run it themselves. A good example I can think of is Supabase. Supabase is sort of like a platform-as-a-service based on Postgres. And if you wanted to run it yourself, well, you need to run Postgres and you need to handle backups and authentication and logging, and that stuff all needs to work and be production ready. So I think a lot of people don't trust themselves to run database backups correctly.
Because if you get it wrong once, then you're kind of screwed. So I think that fundamental aspect of the product, like a database, is something that is very, very ripe for being a SaaS while still being open source, because it's fundamentally hard to run. Another one I can think of is Tailscale, which is like a VPN that works end to end. That's something where, you know, it has this networking complexity where a lot of developers don't wanna deal with that. So they'd happily pay for Tailscale as a service. There are a lot of products or open source projects that eventually end up just changing to become a hosted service.

Businesses going from open source to closed or restricted licenses

[00:17:58] Brandon: But then, in that situation, why would they keep it open source, right? Like, if it's easy to run yourself, well, doesn't that sort of cannibalize their business model? And I think that's really the tension overall in these open source companies. So you saw it happen to things like Elasticsearch, to things like Terraform, where they eventually changed the license to one that makes it difficult for other companies to compete with them.

[00:18:23] Jeremy: Yeah, I mean, there's been a number of cases like that. I mean, specifically within the mapping community, one I can think of was Mapbox. They have Mapbox GL, which was a JavaScript client to visualize maps, and they moved from, I forget which license they picked, but they moved to a much more restrictive license. I wonder what your thoughts are on something that releases as open source but then becomes something maybe a little more muddy.

[00:18:55] Brandon: Yeah, I think it totally makes sense, because if you look at their business and their funding, it seems like for Mapbox, I haven't used it in a while, but my understanding is a lot of their business now is car companies and doing in-dash navigation. And that is probably way better of a business than trying to serve people making maps of toilets. And I think sort of the beauty of it is that, so Mapbox, the story is they had a JavaScript renderer called Mapbox GL JS, and they changed that to a source-available license a couple years ago. And there's a fork of it that I'm sort of involved in called MapLibre GL. But I think the cool part is Mapbox paid employees for years, probably millions of dollars in total, to work on this thing and just gave it away for free, right? So everyone can benefit from that work they did. It's not like that code went away once they changed the license. Well, the old version has been forked. It's going its own way now. It's quite different than the new version of Mapbox, but I think it's extremely generous that they were able to pay people for years, you know, a competitive salary, and just give that away.

[00:20:10] Jeremy: Yeah, so we should maybe look at it as: it was a gift while it was open source, and they've given it to the community, and they're continuing on their own path. But at least the community running MapLibre, they can run with it, right? It's not like it just disappeared.

[00:20:29] Brandon: Yeah, exactly. And that is something that I use for Protomaps quite extensively. It's the primary way of showing maps on the web, and I've been trying to work on some enhancements to it, to have better internationalization, for example for South Asian languages that currently don't show correctly. So I think it is being taken in a new direction.
And I think the combination of Protomaps and MapLibre addresses a lot of use cases, like I mentioned earlier with these hobby projects, indie projects, that are almost certainly not interesting to someone like Mapbox or Google as a business, but that I'm happy to support as a small business myself.

Financially supporting open source work (GitHub sponsors, closed source, contracts)

[00:21:12] Jeremy: In my previous interview with Tom, one of the main things he mentioned was that creating a mapping business is incredibly difficult, and he said he probably wouldn't do it again. So in your case, you're building Protomaps, which you've admitted is easy to self-host. So there's not a whole lot of incentive for people to pay you. How is that working out for you? How are you supporting yourself?

[00:21:40] Brandon: There's a couple of strategies that I've tried and oftentimes failed at. Just to go down the list: so I do have GitHub sponsors. I do have a hosted version of Protomaps you can use if you don't want to bother copying a big file around, but the way I do the billing for that is through GitHub sponsors. If you wanted to use this thing I provide, then just be a sponsor. And that definitely pays for itself, like the cost of running it. And that's great. GitHub sponsors is so easy to set up. It just removes you having to deal with Stripe or something, 'cause a lot of people, their credit card information is already in GitHub. GitHub sponsors, I think, is awesome if you want to cover costs for a project. But I think very few people are able to make that work at something like a salary-job level. It's sort of like Twitch streaming: you know, there's a handful of people that are full-time streamers, and then you look down the list on Twitch and it's a lot of people that have like 10 viewers. But some of the other things I've tried: I actually started out publishing the base map as a closed source thing, where I would sell sort of a data package. Instead of being a SaaS, I'd be like, here's a one-time download of the premium data, and you can buy it. And quite a few people bought it; I just priced it at like $500 for this thing. And I thought that was an interesting experiment. The main reason it's interesting is because the people it attracts, the ones curious about your products, are all people willing to pay money. While if you start out with everything being open source, then the people that are gonna try it are only the people that want to get something for free. So what I discovered is actually that once you transition that thing from closed source to open source, a lot of the people that used to pay you money will still keep paying you money, because it wasn't necessarily the closed source part that was why they wanted to pay. They just valued the thought you've put into it, your expertise, for example. So I think that is one thing I tried at the beginning: start out closed source, proprietary, then make it open source. That's interesting to people. If you go the other way, people are really mad: if you start out with something open source and then later on you're like, oh, it's some other license, then people are like, that's so rotten. But I think doing it the other way is quite valuable in terms of being able to find an audience.
[00:24:29] Jeremy: And when you said it went from closed source and paid, to open source, do you still sell those map exports?

[00:24:39] Brandon: I don't right now. It's something that I might do in the future, you know, like have small customizations of the data that are available for a fee. Still, the core OpenStreetMap-based map, that's like a hundred gigs, you can just download, and that'll always just be a free download, just because that's already out there. All the source code to build it is open source. So even if I said, oh, you have to pay for it, then someone else could just do it, right? So there's no real reason to make that some sort of paywall thing. But I think overall, if the project is gonna survive in the long term, it's important that, ideally, I'd be able to grow a team, have a small group of people that can dedicate the time to growing the project in the long term. But I'm still trying to figure that out right now.

[00:25:34] Jeremy: And when you mentioned that when you went from closed to open and people were still paying you, you don't sell a product anymore. What were they paying for?

[00:25:45] Brandon: So I have some contracts with companies, basically: if they need a feature or they need a customization, then I am very open to those. And I sort of set it up to make it clear from the beginning that this is not just a free thing on GitHub; this is something that you could pay for if you need help with it, if you need support, if you want it. I'm also a little cagey about the word support, because I think it sounds a little bit too wishy-washy. Pretty much, if you need access to the developers of an open source project, I think that's something that businesses are willing to pay for. And making that clear to potential users is a challenge. But I think that is one way that you might be able to make a living out of open source.

[00:26:35] Jeremy: And I think you said you'd been working on it for about five years. Has that mostly been full time?

[00:26:42] Brandon: It's been on and off. It's sort of my pandemic-era project. But I've spent a lot of time, most of my time, working on the open source project at this point. So I have done some things that were more just doing a customization or a private deployment for some client, but that's been a minority of the time.

[00:27:03] Jeremy: It's still impressive to have an open source project that is easy to self-host and yet is still able to support you working on it full time. I think a lot of people might make the assumption that there's nothing to sell if something is easy to use. But this sort of sounds like a counterpoint to that.

[00:27:25] Brandon: I think I'd like it to be. So when you come back to the point of it being easy to self-host: well, again, I think about it as a primitive of the web. For example, if you wanted to start a business today as hosted CSS files, you know, where you upload your CSS and then you get developers to pay you a monthly subscription for how many times they fetched a CSS file, well, I think most developers would be like, that's stupid, because it's just an open specification, you just upload a static file. And really my goal is to make Protomaps the same way, where it's obvious that there's not really some sort of lock-in or some sort of secret sauce in the server that does this thing.
How PMTiles works and building a primitive of the web

[00:28:16] Brandon: If you look at video, for example, a lot of the tech for how Protomaps and PMTiles works is based on parts of the HTTP spec that were made for video. And 20 years ago, if you wanted to host a video on the web, you had to have a RealPlayer license or Flash. So you had to go license some server software from RealMedia or from Macromedia so you could stream video to a browser plugin. But now, in HTML, you can just embed a video file. And no one's like, oh, well, I need to go pay for my video serving license. I mean, there is such a thing; YouTube doesn't really use that, for DRM reasons. But people just have the assumption that video is a primitive on the web. So if we're able to make maps sort of that same way, a primitive on the web, then there isn't really some obvious business or licensing model behind how that works. Just because it's a thing, and it helps a lot of people do their jobs, and people are happy using it. So why bother?

[00:29:26] Jeremy: You mentioned that it's a tech that was used for streaming video. What tech specifically is it?

[00:29:34] Brandon: So it is byte range serving. When you open a video file on the web, let's say it's a 100 megabyte video, you don't have to download the entire video before it starts playing. It streams parts out of the file based on the frames in the video. So it can start streaming immediately, because it's organized in a way where the first few frames are at the beginning. And what PMTiles really is, is it's just like a video, but in space instead of time. So it's organized in a way where the zoomed-out views are at the beginning and the most zoomed-in views are at the end. So when you're panning or zooming in the map, all you're really doing is fetching byte ranges out of that file, the same way as a video. But it's organized in this tiled way on a space-filling curve. It's a little bit complicated how it works internally, and I think it's kind of cool, but that's sort of an implementation detail.

[00:30:35] Jeremy: And to the person deploying it, it just looks like a single file.

[00:30:40] Brandon: Exactly, in the same way an MP3 audio file is, or a JSON file is.

[00:30:47] Jeremy: So with a video, I can sort of see how, as someone seeks through the video, they start at the beginning and then they go to the middle if they wanna see the middle. For a map, as somebody scrolls around the map, are you seeking all over the file, or does the way it's structured have a little less chaos?

[00:31:09] Brandon: It's structured. And that's kind of the main technical challenge behind building PMTiles: you have to be sort of clever so you're not spraying the reads everywhere. So it uses something called a Hilbert curve, which is a mathematical concept of a space-filling curve, where it's one continuous curve that essentially lets you break 2D space into 1D space. So if you've seen some maps of IP space, it uses this crazy-looking curve that hits all the points in one continuous line. And that's the same concept behind PMTiles: if you're looking at one part of the world, you're sort of guaranteed that all of those parts you're looking at are quite close to each other, and the data you have to transfer is quite minimal compared to if you just had it at random.
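The byte-range access described here is plain HTTP rather than anything Protomaps-specific. Below is a minimal sketch; the URL and offsets are illustrative, and a real client first reads the archive's internal directory to find tile offsets, which the Hilbert-curve layout keeps close together for nearby tiles.

```python
import urllib.request

def fetch_range(url: str, offset: int, length: int) -> bytes:
    """Read one byte slice of a remote file, the way a map client pulls
    tiles out of a PMTiles archive (or a video player seeks)."""
    req = urllib.request.Request(
        url, headers={"Range": f"bytes={offset}-{offset + length - 1}"}
    )
    with urllib.request.urlopen(req) as resp:
        # 206 Partial Content means the server honored the range request
        if resp.status != 206:
            raise RuntimeError(f"expected 206, got {resp.status}")
        return resp.read()

# Illustrative: read the first 16 KiB, where a PMTiles archive keeps its
# header and root directory, then fetch only the tile ranges the current
# viewport needs.
chunk = fetch_range("https://example.com/basemap.pmtiles", 0, 16_384)
```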
[00:32:02] Jeremy: How big do the files get? If I have a PMTiles of the entire world, what kind of size am I looking at?

[00:32:10] Brandon: Right now, the default one I distribute is 128 gigabytes, so it's quite sizable, although you can slice parts out of it remotely. So if you just wanted California, or just wanted LA, or just wanted only a couple of zoom levels, like from zero to 10 instead of zero to 15, there is a command line tool, also called pmtiles, that lets you do that.

Issues with CDNs and range queries

[00:32:35] Jeremy: And when you're working with files of this size, I mean, let's say I am working with a CDN in front of my application. I'm not typically accustomed to hosting something that's that large, and something where you're seeking all over the file. Is that ever an issue, or is that something that's just taken care of by the browser and by the hosts?

[00:32:58] Brandon: That is an issue, actually. A lot of CDNs don't deal with it correctly. And my recommendation is, there is a kind of proxy server, or serverless proxy thing, that I wrote, that runs on Cloudflare Workers or on Docker, that lets you proxy those range requests into a normal URL, and then that is a hundred percent CDN compatible. So I would say a lot of the big commercial installations of this thing use that, because it makes more practical sense. It's also faster. But the idea is that this solution sort of scales up and scales down. If you wanted to host just your city in like a 10 megabyte file, well, you can just put that into GitHub Pages and you don't have to worry about it. If you want to have a global map for your website that serves a ton of traffic, then you probably want a little bit more sophisticated of a solution. It still does not require you to run a Linux server, but it might require you to use Lambda, or Lambda in conjunction with a CDN.

[00:34:09] Jeremy: Yeah. And that sort of ties into what you were saying at the beginning, where if you can host on something like Cloudflare Workers or Lambda, there's less time you have to spend keeping these things running.

[00:34:26] Brandon: Yeah, exactly. And I think also the Lambda or Cloudflare Workers solution is not perfect. It's not as perfect as S3 or as just static files, but in my experience, it still is better at building something that lasts on the time span of years than saying, I have a server that is on this Ubuntu version, and in four years there's all these security patches that are not being applied. So it's still sort of serverless, although not totally vendor neutral like S3.

Customizing the map

[00:35:03] Jeremy: We've mostly been talking about how you host the map itself, but for someone who's not familiar with these kinds of tools, how would they be customizing the map?

[00:35:15] Brandon: For customizing the map, there is front end style customization and there's also data customization. So for the front end, if you wanted to change the water from one shade of blue to another shade of blue, there is a TypeScript API where you can customize it almost like a text editor color scheme. So if you're able to name a bunch of colors, well, you can customize the map in that way, and you can change the fonts. And that's all done using MapLibre GL, with a TypeScript API on top of that. For customizing the data, all the pipeline to generate this data from OpenStreetMap is open source. There is a Java program using a library called Planetiler, which is awesome, which is this super fast multi-core way of building map tiles.
And right now there aren't really great hooks to customize what data goes into that, but that's something that I do wanna work on. And finally, because the data comes from OpenStreetMap: if you notice data that's missing, or you want to correct data in OSM, then you can go to osm.org and get involved in contributing the data to OSM, and the Protomaps build is daily. So if you make a change, then within 24 hours you should see the new base map have that change. And of course, for OSM, your improvements would go into every OSM-based project that is ingesting that data. So it's not a Protomaps-specific thing; it's this big shared data source, almost like Wikipedia.

OpenStreetMap is a dataset and not a map

[00:37:01] Jeremy: I think you were involved with OpenStreetMap to some extent. Can you speak a little bit to that, for people who aren't familiar with what OpenStreetMap is?

[00:37:11] Brandon: Right. So I've been using OSM as sort of a tools developer for over a decade now. And one of the number one questions I get from developers about what Protomaps is, is: why wouldn't I just use OpenStreetMap? What's the distinction between Protomaps and OpenStreetMap? And it's sort of this funny thing, because even though OSM has "map" in the name, it's not really a map, in that it's mostly a data set and not a map. It does have a map that you can see and pan around when you go to the website, but the way that thing they show you on the website is built is not really that easily reproducible. It involves a lot of C++ software you have to run. But OpenStreetMap itself, the heart of it, is almost like a big XML file that has all the data in the map, globally. And it has tagged features, for example. So you can go in and edit that; it has a web front end to change the data. It does not directly translate into making a map, actually.

Protomaps decides what shows at each zoom level

[00:38:24] Brandon: So a lot of the pipeline, that Java program I mentioned for building this basemap for Protomaps, is doing things like: you have to choose what data you show when you zoom out. You can't show all the data. For example, when you're zoomed out and you're looking at all of a state like Colorado, you don't see all the Chipotles when you're zoomed all the way out. That'd be weird, right? So you have to make some sort of decision in logic that says this data only shows up at this zoom level. And that's really the challenge in optimizing the size of that for the Protomaps map project.

[00:39:03] Jeremy: Oh, so those decisions of what to show at different zoom levels, those are decisions made by you when you're creating the PMTiles file with Protomaps.

[00:39:14] Brandon: Exactly. It's part of the base map's build pipeline, and those are honestly very subjective decisions. Who really decides, when you're zoomed out, should this hospital show up or should this museum show up? Nowadays in Google, I think it shows you ads; if someone pays for their car repair shop to show up when you're zoomed out like that, that gets surfaced. But because there is no advertising auction in Protomaps, that doesn't happen, obviously. So we have to sort of make some reasonable choice. A lot of that right now in Protomaps actually comes from another open source project called Mapzen. So Mapzen was a company that went out of business a couple years ago. They did a lot of this work in designing which data shows up at which zoom level, and open sourced it.
And then when they shut down, they transferred that code into the Linux Foundation. So it's this totally open source project that, again, sort of like Mapbox GL, has this awesome legacy, in that this company funded it for years for smart people to work on it, and now it's just a free thing you can use. So the logic in Protomaps is really based on Mapzen.

[00:40:33] Jeremy: And so the visualization of all this... I think I understand what you mean when people say, oh, why not use OpenStreetMap: it's not really clear, it's hard to tell, is this the tool that's visualizing the data? Is it the data itself? So in the case of using Protomaps, it sounds like Protomaps itself has all of the data from OpenStreetMap, and then it has made all the decisions for you in terms of what to show at different zoom levels and what things to have on the map at all. And then finally, you have to have a separate UI layer, and in this case, it sounds like the one that you recommend is the MapLibre library.

[00:41:18] Brandon: Yeah, that's exactly right. Protomaps has a portion, or a subset, of OSM data. It doesn't have all of it, just because there's too much. Like, there's data in there where people have mapped out individual bushes, and I don't include that in Protomaps; if you wanted to go in and edit the Java code to add that, you can. But really, what Protomaps is positioned as is sort of a solution for developers that want to use OSM data to make a map on their app or their website. Because OpenStreetMap itself is mostly a data set, it does not really go all the way to having an end-to-end solution.

Financials and the idea of a project being complete

[00:41:59] Jeremy: So I think it's great that somebody who wants to make a map has these tools available, whether it's what was originally built by Mapbox, what's built by OpenStreetMap now, or the work you're doing with Protomaps. But I wonder... one of the things that I talked about with Tom was, he was trying to build this mapping business, and based on the financials of what was coming in, he was stressed, right? He was struggling a bit. And I wonder, for you: you've been working on this open source project for five years. Do you have similar stressors, or do you feel like, I could keep going how things are now, and I feel comfortable?

[00:42:46] Brandon: So I wouldn't say I'm a hundred percent in one bucket or the other. I'm still seeing it play out. One thing that I really respect in a lot of open source projects, which I'm not saying I'm gonna do for Protomaps, is the idea that a project is finished. I think that is amazing. If a software project can just be done, it's sort of like a painting or a novel: once you finish the last page, have it seen by the editor, and send it off to the press, you're done with a book. And I think one of the pains of software is that so few of us can actually do that. And I don't know, obviously people will say, oh, the map is never finished; that's more true of OSM. But for Protomaps, one thing I'm thinking about is how to limit the scope to something that's quite narrow, to where we could be feature complete on the core things in the near-term timeframe. That means that it does not address a lot of things that people want, like search. If you go to Google Maps and you search for a restaurant, you will get some hits; that's a geocoding issue, and I've already decided that's totally out of scope for Protomaps.
So, in terms of trying to think about the future of this, I'm mostly looking for ways to cut scope if possible. There are some things, like better tooling around being able to work with PMTiles, that are on the roadmap. But for me, I am still enjoying working on the project. It's definitely growing: I can see on NPM downloads the growth curve of people using it, and that's really cool. And I like hearing about when people are using it for cool projects. So it seems to still be going okay for now.

[00:44:44] Jeremy: Yeah, that's an interesting perspective, how you were talking about projects being done. Because I think when people look at GitHub projects and they go, oh, the last commit was X months ago, they go, oh, well, this is dead, right? But maybe that's the wrong framing. Maybe you can get a project to a point where it's like, oh, it's because it doesn't need to be updated.

[00:45:07] Brandon: Exactly, yeah. Like, I used to do a lot of C++ programming, and the best part is when you see some LAPACK matrix math library from like 1995 that still works perfectly in C++, and you're like, this is awesome, this is the one I have to use. But if you're trying to use some React component library and it hasn't been updated in a year, you're like, oh, that's a problem. So again, I think there's some middle ground between those that I'm trying to find. Protomaps is quite dependency-light in terms of the number of hard dependencies I have in software, but I do still feel like there is a lot of work to be done in terms of project scope, things that still need to be added.

You mostly only hear about problems instead of people's wins

[00:45:54] Jeremy: Having run it for this long, do you have any thoughts on running an open source project in general? On dealing with issues, or managing what to work on, things like that?

[00:46:07] Brandon: Yeah, I have a lot. I think one thing people point out a lot is that, especially because I don't have a direct relationship with a lot of the people using it, a lot of times I don't even know that they're using it. Someone sent me a message saying, hey, have you seen flickr.com, the photo site? And I'm like, no. And I went to flickr.com/map, and it has Protomaps on it. And I'm like, I had no idea. But that's cool: if they're able to use Protomaps for this giant photo sharing site, that's awesome. But that also means I don't really hear about when people use it successfully, because you just don't know. I guess they npm installed it and it works perfectly, and you never hear about it. You only hear about people's negative experiences. You only hear about people that come and open GitHub issues saying, this is totally broken, and why doesn't this thing exist? And I'm like, well, it's because there's an infinite amount of things that I want to do, but I have a finite amount of time, and I just haven't gotten to that yet. And that's honestly a lot of it: people are like, when is this thing gonna be done? So that's honestly part of why I don't have a public roadmap, because I want to avoid that sort of bickering about it. I would say that's one of my biggest frustrations with running an open source project: how it's self-selected to only hear the negative experiences with it.

Be careful what PRs you accept

[00:47:32] Brandon: 'Cause you don't hear about those times where it works.
I'd say another thing is, it's changed my perspective on contributing to open source. Because I think when I was younger, before I had become a maintainer, I would open a pull request on a project, unprompted, that has a hundred lines, and I'd be like, hey, just merge this thing. But I didn't realize when I was younger: well, if I just merge it and I disappear, then the maintainer is stuck with what I did forever. You know, if I add some feature, then that person that maintains the project has to support it indefinitely. And I think that's very asymmetrical, and it's changed my perspective a lot on accepting open source contributions. I wanna have it be open to anyone to contribute. But there is some amount of back and forth where it's almost like the default answer for "should I accept a PR?" is no by default, because you're the one maintaining it. And do you understand the shape of that solution completely, to where you're going to support it for years? Because the person that's contributing it is not bound to those same obligations that you are. And I think that's also one of the things where I have a lot of trepidation around open source: I used to think of it as a lot more bazaar-like, in terms of anyone can just throw their thing in. But then that creates a lot of problems for the people who are expected, out of social obligation, to continue this thing indefinitely.

[00:49:23] Jeremy: Yeah, I can totally see why that causes burnout with a lot of open source maintainers, because you probably, to some extent, maybe even feel some guilt, right? You're like, well, somebody took the time to make this. But then, like you said, you have to spend a lot of time trying to figure out: is this something I wanna maintain long term? And one wrong move and it's like, well, it's in here now.

[00:49:53] Brandon: Exactly. To me, I think that is a very common failure mode for open source projects: they're too liberal in the things they accept. And that's a lot of why I was talking about how that choice of what features show up on the map was inherited from the Mapzen project. If I didn't have that, then somebody could come in and say, hey, you know, I want to show power lines on the map, and they open a PR for power lines, and now everybody who's using Protomaps, when they're zoomed out, they see power lines and are like, I didn't want that. So I think that's part of why a lot of open source projects eventually evolve into a plugin system: because there is this demand, as the project grows, for more and more features. But there is a limit in the maintainers. It's like the demand for features is exponential, while the maintainer's amount of time and effort is linear.

Plugin systems might reduce need for PRs

[00:50:56] Brandon: So maybe the solution to smash that exponential down to quadratic is to add a plugin system. But I think that is one of the biggest tensions that only became obvious to me after working on this for a couple of years.

[00:51:14] Jeremy: Is that something you're considering doing now?

[00:51:18] Brandon: The plugin system? Yeah. I think for the data customization, I eventually want to have some sort of programmatic API, where you could declare a config file that says: I want ski routes. It totally makes sense.
The power lines example is maybe a little bit obscure, but take, for example, a skiing app: you want to be able to show ski slopes when you're zoomed out, and you're not gonna be able to get that from Mapbox or from Google, because they have a one-size-fits-all map that's not specialized to skiing or to golfing or to outdoors. But, in theory, you could do this with Protomaps if you changed the Java code to show data at different zoom levels. And that is, to me, what makes the most sense for a plugin system, and also makes the most product sense, because it enables a lot of things you cannot do with the one-size-fits-all map.

[00:52:20] Jeremy: It might also increase the complexity of the implementation, though, right?

[00:52:25] Brandon: Yeah, exactly. That's really where a lot of the terrifying thoughts come in, which is: once you create this config file surface area, well, what does that look like? Is that JSON? Is that TOML? Is it some weird thing where, like everything, it eventually evolves into some scripting language, right? Where you have logic inside of your templates. And I honestly do not really know what that looks like right now. That feels like something in the medium-term roadmap.

[00:52:58] Jeremy: Yeah, and then, in terms of bug reports or issues, now it's not just your code, it's this exponential combination of whatever people put into these config files.

[00:53:09] Brandon: Exactly, yeah. So again, I really respect the projects that have done this well, that have done plugins well. I'm trying to think of some... I think Obsidian has plugins, for example. And that seems to be one of the few solutions to try to satisfy the infinite desire for features with the limited amount of maintainer time.
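To picture the config surface being weighed here, below is a purely hypothetical sketch of a declarative data-customization file for the skiing example. Protomaps has no such format today; the layer names are invented, and only the OSM tag piste:type=downhill is real.

```python
# Hypothetical sketch only; not a real Protomaps API.
SKI_APP_BASEMAP = {
    "include": [
        # pull downhill ski runs into the basemap, visible from zoom 8 up
        {"layer": "ski_runs", "osm_tags": {"piste:type": "downhill"}, "minzoom": 8},
    ],
    "exclude": [
        # drop default layers this app never shows
        {"layer": "golf_courses"},
    ],
}
```

Every key in a file like this becomes behavior a maintainer has to support indefinitely, which is exactly the tension being described.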
Time split between code vs triage vs talking to users

[00:53:36] Jeremy: How would you say your time is split between working on the code versus issue and PR triage?

[00:53:43] Brandon: Oh, it varies, really. I think working on the code is a minority of it. Something that I actually enjoy is talking to people, talking to users, getting feedback on it. I go to quite a few conferences to talk to developers or people that are interested, and figure out how to refine the message, how to make it clearer to people what this is for. And I would say maybe a plurality of my time is spent dealing with non-technical things that are neither code nor GitHub issues. One thing I've been trying to do recently is talk to people that are not really in the mapping space. For example, people that work for newspapers: a lot of them are front end developers, and if you ask them to run a Linux server, they're like, I have no idea. But that really is one of the best target audiences for Protomaps. So I'd say a lot of the reality of running an open source project is a lot like a business: it has all the same challenges as a business, in terms of you have to figure out what is the thing you're offering, you have to deal with people using it, you have to deal with feedback, you have to deal with managing emails and stuff. I don't think the payoff is anywhere near that of a business or a startup that's backed by VC money, but it's definitely not the case that if you just want to code, you should start an open source project, because a lot of the work for an open source project has nothing to do with just writing the code. It is, in my opinion, as someone having done a VC-backed business before, a lot more similar to running a tech company than just putting some code on GitHub.

Running a startup vs open source project

[00:55:43] Jeremy: Well, since you've done both, at a high level, what did you like about running the company versus maintaining the open source project?

[00:55:52] Brandon: So I have done some venture capital accelerator programs before, and I think there is an element of hype and energy that you get from that that is self-perpetuating. Your co-founder is gung-ho on, like, yeah, we're gonna do this thing. And your investors are like, you guys are geniuses, you guys are gonna make a killing doing this thing. And the way it's framed is sort of obvious to everyone: there's a much more traditional set of motivations behind that, that people understand. While it's definitely not the case for running an open source project. Sometimes you just wake up and you're like, what the hell is this thing for? It is this thing you spend a lot of time on. You don't even know who's using it. The people that use it and make a bunch of money off of it, they know nothing about it. And, you know, it's just like, cool. And then you only hear from people that are complaining about it. And I think that's honestly discouraging, compared to the clearer energy and clearer motivation and vision behind how most people think about a company. But what I like about the open source project is just the lack of those constraints, you know? Where you have a mandate that you need to have this many customers that are paying by this amount of time, there's that sort of pressure on delivering a business result, instead of just making something that you're proud of, that's simple to use and has an elegant design. I think that's really a difference in motivation as well.

Having control

[00:57:50] Jeremy: Do you feel like you have more control? You mentioned how you've decided, I'm not gonna make a public roadmap, I'm the sole developer, I get to decide what goes in, what doesn't. Do you feel like you have more control in your current position than you did running the startup?

[00:58:10] Brandon: Definitely, for sure. That agency is what I value the most. It is possible to go too far; I'm very wary of the BDFL title, which is how a lot of open source projects succeed. But I think there is some element of, for a project to succeed, there has to be somebody that makes those decisions. Sometimes those decisions will be wrong, and then hopefully they can be rectified. But going back to what I was talking about with scope: I think the overall vision and the scope of the project is something that I am very opinionated about. It should do these things; it shouldn't do these things. It should be easy to use for this audience. Is it gonna be appealing to this other audience? I don't know. And I think that is really one of the most important parts of that leadership role: having the power to decide, we're doing this, we're not doing this. I would hope other developers would be able to get on board if they're able to make good use of the project, if they use it for their company, if they use it for their business, if they just think the project is cool. So there are other contributors at this point, and I want to get them more involved.
But I think being able to make those decisions about what I believe is going to be the best project is something that is very special about open source, that isn't necessarily true about running a SaaS business. [00:59:50] Jeremy: I think that's a good spot to end it on, so if people want to learn more about Protomaps or they wanna see what you're up to, where should they head? [01:00:00] Brandon: You can go to Protomaps.com, GitHub, or you can find me or Protomaps on Bluesky or Mastodon. [01:00:09] Jeremy: All right, Brandon, thank you so much for chatting today. [01:00:12] Brandon: Great. Thank you very much.
Retirement homes in the Hradec Králové region have used the option to raise their prices by the state-set ceiling: since March they may charge 15 crowns more per day for food and accommodation. None of the homes approached declined to take up the option. Reporter Tomáš Lörincz has the details.
How can you simplify the management of your Python projects with one file? What are the advantages of using LazyFrames in Polars? Christopher Trudeau is back on the show this week, bringing another batch of PyCoder's Weekly articles and projects.
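To make the LazyFrames point concrete, here is a minimal sketch (the file name and columns are invented for illustration; recent Polars versions spell the grouping method group_by). A LazyFrame builds a query plan and only runs it on collect(), which lets Polars push filters down into the scan:

```python
# A minimal Polars LazyFrame sketch; "sales.csv" and its columns are hypothetical.
import polars as pl

lazy = (
    pl.scan_csv("sales.csv")               # returns a LazyFrame; nothing is read yet
    .filter(pl.col("amount") > 0)          # filter is pushed down into the CSV scan
    .group_by("region")                    # older Polars releases spell this groupby
    .agg(pl.col("amount").sum().alias("total"))
)

df = lazy.collect()  # the optimized plan executes only here
print(df)
```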
Burning leaves, grass and other bio-waste is now prohibited by law. The new rule, which gardeners as well as municipalities and fire brigades are still getting acquainted with, substantially changes the information we brought you a few days ago. Reporter Tomáš Lörincz looked into how things stand now.
Some bee colonies in Czechia are once again threatened by varroosis, and beekeepers in northern Moravia are already reporting a critical situation. No larger outbreaks have appeared in the Hradec Králové region yet, but that may still come: varroosis destroys bees throughout the winter. Reporter Tomáš Lörincz has the details on this story.
Households in the Hradec Králové region will pay more for water this year - but not everywhere. Some water utilities will charge the same prices as last year. An average household's annual drinking-water costs range from 10 to 15 thousand crowns. Reporter Tomáš Lörincz prepared an overview of the changes.
In 2025 the Road and Motorway Directorate will continue preparing and building further motorway sections and town bypasses. As reporter Tomáš Lörincz found out, there is interesting and, for drivers, hopefully welcome news for the Hradec Králové region as well.
Deep dive on liver cancer with Prof. Tom Lüdde, head physician of the Department of Gastroenterology, Hepatology and Infectious Diseases at Düsseldorf University Hospital. Why can fatty liver disease lead to cancer? How does coffee affect the liver? How can liver cancer be detected early? What can you do preventively to lower your risk of liver cancer? All that and much more in this episode! 00:00:00 Intro 00:00:40 How common liver cancer is 00:02:00 Different types of cancer in the liver 00:03:30 Benign vs malignant liver tumours 00:04:30 How deadly is liver cancer? 00:06:00 Early diagnosis saves lives! 00:07:00 Causes & risk factors 00:08:00 Alcohol 00:08:30 Fatty liver 00:10:30 Fatty liver: examination methods 00:13:45 Tips against fatty liver 00:15:20 Coffee & caffeine 00:18:15 Hepatitis B & C 00:21:00 Screening 00:23:30 Aflatoxins in food 00:28:00 AFP: a blood marker for liver cancer 00:31:00 Symptoms of liver disease/cancer 00:35:00 Treatment options 00:37:30 Current research and a look into the future 00:40:00 Closing words Instagram: https://www.instagram.com/med.alessandro/ Youtube: https://www.youtube.com/channel/UCSusVamtMAp5WTumwU-dRiw Studies & links mentioned: BfR statement on mycotoxins in plant-based drinks: https://www.bfr.bund.de/cm/343/mykotoxine-in-pflanzendrinks-mehr-daten-erforderlich.pdf
The Billa and Lidl chains already reached the same decision last year. Even so, it appears that shoppers will mostly still be able to buy carp at the usual places. Reporter Tomáš Lörincz has the details.
Residents of Hradec Králové will pay seven percent more for district heating next year. Tepelné hospodářství Hradec Králové, the company that supplies heat to 24 thousand households in the regional capital and its surroundings, agreed on the price with the Opatovice power plant. Reporter Tomáš Lörincz has the details.
In this episode I had the pleasure of talking with Tom, a Dutch-speaking midwife and one of the rare men in this profession in Belgium. For more than 20 years, Tom has accompanied women through their births, first at home and then at the birth centre La Madrugada, in the Antwerp region. We explored his unusual path, inspired by a formative personal experience at his sister's delivery, and his vision of birth as a deeply human, artisanal event. Tom shares the challenges he has faced as a man in a largely female profession, but also the richness of the bonds he has built with his colleagues and the families he accompanies. We also talked about his birth centre, La Madrugada, a space he co-founded to offer a safe and respectful alternative for parents who want to give birth neither at home nor in hospital. Tom tells how this place, symbolically named "the break of day", grew out of the need to combine autonomy, safety and humanity in birth. Finally, we discussed the importance of groups such as Birth Matters in strengthening the network of independent midwives and in valuing work that too often goes unseen, whether in statistics or in policy decisions.
#circuitpythonparsec Use the settings.toml file to store important configurations and variables you can use in your code. Learn about CircuitPython: https://circuitpython.org https://www.adafruit.com/search?q=adafruit+feather&c=957&s=r Visit the Adafruit shop online - http://www.adafruit.com LIVE CHAT IS HERE! http://adafru.it/discord Subscribe to Adafruit on YouTube: http://adafru.it/subscribe New tutorials on the Adafruit Learning System: http://learn.adafruit.com/
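As a rough illustration of the settings.toml pattern: in CircuitPython 8 and later, os.getenv reads values from a settings.toml file next to your code. The key names below are invented for the example:

```python
# code.py — a sketch of reading settings.toml values in CircuitPython 8+.
# Assumes a settings.toml alongside it containing, for example:
#   WIFI_SSID = "my-network"
#   RETRY_COUNT = 3
import os

ssid = os.getenv("WIFI_SSID")        # string values come back as str
retries = os.getenv("RETRY_COUNT")   # integer values come back as int
print("connecting to", ssid, "with", retries, "retries")
```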
Olivier Hubert with BBC presenter Cerys Matthews for the BBC Midwinter Broadcast (Source: BAS and BBC). Many thanks to SRAA contributor TomL, who shares the following recording of the BBC Midwinter Broadcast to Antarctica, recorded on June 21, 2024 at 21:30 UTC on 11,685 kHz. TomL notes: "BBC 2024 Midwinter broadcast to Antarctica. 11685 kHz using AM-Sync (LSB). Location: Campton Hills Forest Preserve, St. Charles, IL. Loop-On-Ground antenna amplified by Welbrook Medium Aperture preamp, into AirSpy HF+ SDR & laptop using SDR Console 3.2. RTTY station on 11690 kHz prompted recording on the lower sideband. Thunderstorm noise persistent."
In this last ByteSized RSE episode of this season, we talk about an important subject for Python engineers: packaging. With my guests Liam Pattinson from York University (UK) and Laszlo Sranger from Hypergolic, we go through the standard Python tools as well as package managers such as Poetry.
Links:
https://packaging.python.org/en/latest/ Python packaging guide
https://pypi.org The PyPI index
https://test.pypi.org Test PyPI - publish your package there first
https://peps.python.org The site for Python Enhancement Proposals (PEP)
https://python-poetry.org Poetry manager
https://www.anaconda.com The home page for Anaconda
https://toml.io/en/ What is a TOML file
https://the-hitchhikers-guide-to-packaging.readthedocs.io/en/latest/history.html a bit of packaging history
https://www.cpan.org
Thanks to the Universe-HPC program for supporting the ByteSized RSE series.
Thank you for listening and your ongoing support. It means the world to us! Support the show on Patreon: https://www.patreon.com/codeforthought
Get in touch: Email: code4thought@proton.me | UK RSE Slack (ukrse.slack.com): @code4thought or @piddie | US RSE Slack (usrse.slack.com): @Peter Schmidt | Mastodon: https://fosstodon.org/@code4thought or @code4thought@fosstodon.org | LinkedIn: https://www.linkedin.com/in/pweschmidt/ (personal profile) | LinkedIn: https://www.linkedin.com/company/codeforthought/ (Code for Thought profile)
This podcast is licensed under the Creative Commons Licence: https://creativecommons.org/licenses/by-sa/4.0/
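Since pyproject.toml is at the centre of the episode, here is a small sketch of inspecting a project's metadata with the standard library. Python 3.11+ ships tomllib, and the [project] table layout follows PEP 621; the printed fields assume the file defines them:

```python
# Read packaging metadata from a pyproject.toml (Python 3.11+).
# tomllib only parses TOML; writing it back needs a third-party package.
import tomllib

with open("pyproject.toml", "rb") as f:   # tomllib requires binary mode
    pyproject = tomllib.load(f)

project = pyproject["project"]            # the PEP 621 metadata table
print(project["name"], project["version"])
print("dependencies:", project.get("dependencies", []))
```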
Yésou kpédé dévisiwo étomélé oo la nu, Amen
This week, we kick off our new series, The Future of Energy, with Tom L. Ward. Tom is CEO of Mach Natural Resources. SmarterMarkets™ host David Greely sits down with Tom to discuss his work at Chesapeake in pioneering the US shale boom and oil and gas renaissance – and how his work laid the foundation for the future of energy.
Merry Christmas, dear Dramovics! As a student, Tom L. (33) stumbled by chance into a rather unusual side job; by now he earns very good money with it - after all, his clients come from well-heeled circles. Tom is a high-end call boy! For his favourite client Laura he goes all out and delivers perfect hours for two. Sometimes Tom dreams of a completely normal life with Laura, but she is married. Tom loves his job; he feels free and desired at the same time. From his mother, however, he keeps his work secret. When he unexpectedly runs into his new client Clara at his mother's birthday party and their secret double lives threaten to be exposed, he has to make a choice...
If you'd like to learn more about our wonderful guest of honour Teresa Präauer - author and trash aficionada - check out everything in her Wikipedia entry; you'll find Teresa's new book "Kochen im falschen Jahrhundert" e.g. here & here, plus her Insta :)
---
Did this story delight you, outrage you, or did you even recognise yourself in it? We are dying to know! Write to us in the comments on Facebook and Instagram at @dramacarbonara. That is also where you'll find the photos discussed in the stories and finally be able to see what we see... If you want to read even more fantastic stories with us, we can already promise: the repertoire is inexhaustible, and we are amazed every time at what is possible. Subscribing via RSS feed, Apple Podcasts, Spotify, Deezer or Google Podcasts is the key to a regular supply. We are naturally thrilled about reviews and happily celebrate them prominently in our social media feed. Every second episode, by the way, a guest reader joins us in our cosy Vienna headquarters and supports us with theories about characters and plot lines. If you have a dream guest or would like to drop by yourself, let us know. We can't promise anything, but we always welcome suggestions. If you fancy extra content and community activities, support us with a subscription on Steady and enjoy the complete "Drama Carbonara" universe: https://steadyhq.com/de/drama-carbonara/about
If you are interested in advertising on our podcast, please get in touch with Stefan Lassnig from Missing Link. Many thanks!
In this interview, Tom and Nathan Lévêque talk to us about founding their publishing house and their guide devoted to YA literature: En quête d'un grand peut-être. How can young-adult literature be made more accessible and easier to grasp? How can YA novels and their authors be put in the spotlight? Through historical overviews, interviews, short stories, a map and open questions, Tom and Nathan Lévêque paint the portrait of a rich and complex YA literature. ✨ Find Tom and Nathan Lévêque: Les éditions du Grand Peut-être - Instagram: https://www.instagram.com/editionsdugrandpeutetre/ Site: http://www.ungrandpeutetre.com/ Newsletter: https://litteratureado.kessel.media/posts?landing=true Tom Lévêque - Instagram: https://www.instagram.com/lavoixdulivre/ Blog: www.lavoixdulivre.fr Sur les traces d'Ewilan: https://www.rageot.fr/livre/sur-les-traces-dewilan-9782700281088/ Nathan Lévêque - Instagram: https://www.instagram.com/lecahierdelecturedenathan/ YouTube: https://www.youtube.com/c/NathanBouquinsenfolie TikTok: https://www.tiktok.com/@nathanbouq La maison de la littérature jeunesse - Instagram: https://www.instagram.com/maisondelalitteraturejeunesse/ Site: https://www.maisondelalitteraturejeunesse.com/ __________________________________ Quiz: what kind of writer are you?: les-mots-ratures.com/quiz-quel-type-d-ecrivain/ N E W S L E T T E R Don't miss any news from Les Mots Raturés thanks to its newsletter: www.academie.les-mots-ratures.com/newsletter L E P O D C A S T Visit the podcast's official site: https://www.les-mots-ratures.com/ L ' A C A D É M I E Discover Les Mots Raturés Académie: masterclasses and workshops on writing, publishing and communication for authors (academie.les-mots-ratures.com)
Topics covered in this episode: OverflowAI, Switching to Hatch, Alpha release of the Ruff formatter, What is wrong with TOML?, Extras, Joke. Watch on YouTube.
About the show: Sponsored by us! Support our work through our courses at Talk Python Training, Python Testing with pytest (full course), and Patreon supporters. Connect with the hosts - Michael: @mkennedy@fosstodon.org, Brian: @brianokken@fosstodon.org, Show: @pythonbytes@fosstodon.org. Join us on YouTube at pythonbytes.fm/live to be part of the audience, usually Tuesdays at 11am PT. Older video versions available there too.
Michael #1: OverflowAI. Integration of generative AI into Stack Overflow's public platform, Stack Overflow for Teams, and brand new product areas, like an IDE integration. Have a conversation about the search results and proposed answer with GenAI. Check out the video on their page for more detail than the article.
Brian #2: Switching to Hatch, by Oliver Andrich. Hatch has some interesting features: a template built from hatch new myproject includes isolated dev, test and lint virtual environments; each env can have scripts; a test matrix à la tox, but possibly easier for expressing complex matrices - you may not even need tox then, but then now you have hatch; and a way to specify which optional dependencies are needed for the default environment. Notes from Brian: one premise is that lots of projects are now using hatch - I don't know if that's true. A quick spot check of a few projects turned up projects that use hatchling. While hatchling is the back end to hatch, they are not the same. I use hatchling a lot now but haven't picked up hatch itself, though I do want to try it more after reading this article.
Michael #3: Alpha release of the Ruff formatter, via Sky Kasko. Charlie Marsh announced that an alpha version of a Ruff formatter has been released in Ruff v0.0.289. The formatter is designed to be a drop-in replacement for Black, but with an excessive focus on performance and direct integration with Ruff. Sky says: "I can't find any benchmarks that have been released yet, but I did some extremely unscientific testing and found the Ruff formatter to be around 5 to 10 times faster than Black when running on already-formatted code or in a small codebase, and 75 times faster when running on a large codebase of unformatted code. (The second outcome probably isn't very important, since most people would not often be formatting thousands of lines of completely unformatted code.)" For more info, see the README: https://github.com/astral-sh/ruff/blob/main/crates/ruff_python_formatter/README.md
Brian #4: What is wrong with TOML?, by Colm O'Connor, suggested by Will McGugan. This is a comparison of TOML vs StrictYAML for the use case of "readable story tests". TL;DR: for smallish things like pyproject.toml, TOML is fine; for huge files, something like StrictYAML may be less horrible. From Brian - short answer: nothing, unless you're doing crazy things with it. Re "readable story tests": WTF? Neither of these is something I'd like to maintain.
Extras - Brian: Python Testing with pytest, the course: a new intro video to explain what the course is about, using Teachable video features like notes, mini-viewer, and speed controls; a chapter on "Testing Strategy" is next. Michael: HTMX + Django: Modern Python Web Apps, Hold the JavaScript course; Coding in Rust? Here's a New IDE by JetBrains; Delightful Machine Learning Apps with Gradio out on Talk Python.
Joke: The 5 stages of debugging
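To ground the TOML-vs-StrictYAML comparison, here is a tiny sketch parsing the same two-key document both ways. strictyaml's schema-first API follows its documentation, but treat the exact call shapes as an assumption, and pip install strictyaml is required:

```python
# The same document parsed as TOML (stdlib, 3.11+) and as StrictYAML.
import tomllib
from strictyaml import Map, Str, load

toml_doc = tomllib.loads('title = "story test"\nsteps = "do the thing"')
yaml_doc = load(
    "title: story test\nsteps: do the thing",
    Map({"title": Str(), "steps": Str()}),  # StrictYAML insists on a schema
).data

assert toml_doc == yaml_doc  # both yield plain dicts of strings
```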
Bret Fisher, DevOps Dude & Cloud-Native Trainer, joins Corey on Screaming in the Cloud to discuss what it's like being a practitioner and a content creator in the world of cloud. Bret shares why he feels it's so critical to get his hands dirty so his content remains relevant, and also how he has to choose where to focus his efforts to grow his community. Corey and Bret discuss the importance of finding the joy in your work, and also the advantages and downfalls of the latest AI advancements.
About Bret
For 25 years Bret has built and operated distributed systems, and helped over 350,000 people learn dev and ops topics. He's a freelance DevOps and Cloud Native consultant, trainer, speaker, and open source volunteer working from Virginia Beach, USA. Bret's also a Docker Captain and the author of the popular Docker Mastery and Kubernetes Mastery series on Udemy. He hosts a weekly DevOps YouTube Live Show, a container podcast, and runs the popular devops.fan Discord chat server.
Links Referenced:
Twitter: https://twitter.com/BretFisher
YouTube Channel: https://www.youtube.com/@BretFisher
Website: https://www.bretfisher.com
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
Corey: In the cloud, ideas turn into innovation at virtually limitless speed and scale. To secure innovation in the cloud, you need Runtime Insights to prioritize critical risks and stay ahead of unknown threats. What's Runtime Insights, you ask? Visit sysdig.com/screaming to learn more. That's S-Y-S-D-I-G.com/screaming. My thanks as well to Sysdig for sponsoring this ridiculous podcast.
Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn, a little bit off the beaten path today, in that I'm talking to someone who, I suppose like me, if that's not considered to be an insult, has found themselves eminently unemployable in a quote-unquote, "Real job." My guest today is Bret Fisher, DevOps dude and cloud-native trainer. Bret, great to talk to you. What do you do?
Bret: [laugh]. I'm glad to be here, Corey. I help people for a living like a lot of us end up doing in tech. But nowadays, it's courses, it's live trainings, webinars, all that stuff. And then of course, the fun side of it is the YouTube podcast, hanging out with friends, chatting on the internet. And then a little bit of running a Discord community, which is one of the best places to have a little text chat community, if you don't know Discord.
Corey: I've been trying to get the Discord and it isn't quite resonating with me, just because by default, it alerts on everything that happens in any server you're in. It, at least historically, was very challenging to get that tuned in, so I just stopped having anything alert me on my phone, which means now I miss things constantly. And that's been fun and challenging. I still have the slack.lastweekinaws.com community with a couple of thousand people in it.
Bret: Nice. Yeah, I mean, some people love Slack. I still have a Slack community for my courses. Discord, I feel like is way more community friendly.
By the way, a good server admin knows how to change those settings, which there are a thousand settings in Discord, so server admins, I don't blame you for not seeing that setting.But there is one where you can say new members, don't bug them on every message; only bug them on a mentions or, you know, channel mentions and stuff like that. And then of course, you turn off all those channel mentions and abilities for people to abuse it. But yeah, I had the same problem at first. I did not know what I was doing and it took me years to kind of figure out. The community, we now have 15,000 people. We call it Cloud Native DevOps, but it's basically people from all walks of DevOps, you know, recovering IT pros.And the wonderful thing about it is you always start out—like, you'd do the same thing, I'm sure—where you start a podcast or YouTube channel or a chat community or Telegram, or a subreddit, or whatever your thing is, and you try to build a community and you don't know if it's going to work and you invite your friends and then they show up for a day and then go away. And I've been very lucky and surprised that the Discord server has, to this point, taken on sort of a, its own nature. We've got, I don't know, close to a dozen moderators now and people are just volunteering their time to help others. It's wonderful. I actually—I consider it, like, one of the safe places, unlike maybe Stack Overflow where you might get hated for the wrong question. And we try to guide you to a better question so [laugh] that we can answer you or help you. So, every day I go in there, and there's a dozen conversations I missed that I wasn't able to keep up with. So, it's kind of fun if you're into that thing.Corey: I remember the olden days when I was one of the volunteer staff members on the freenode IRC network before its untimely and awful demise, and I really have come to appreciate the idea of, past a certain point, you can either own the forum that you're working within or you can participate in it, but being a moderator, on some level, sets apart how people treat you in some strange ways. And none of these things are easy once you get into the nuances of codes of conduct, of people participating in good faith, but also are not doing so constructively. And people are hard. And one of these years I should really focus on addressing aspects of that with what I'm focusing on.Bret: [laugh]. Yeah, the machines—I mean, as frustrating as the machines are, they at least are a little more reliable. I don't have anonymous machines showing up yet in Discord, although we do get almost daily spammers and stuff like that. So, you know, I guess I'm blessed to have attracted some of the spam and stuff like that. But a friend of mine who runs a solid community for podcasters—you know, for podcasts hosters—he warned me, he's like, you know, if you really want to make it the community that you have the vision for, it requires daily work.Like, it's a part-time job, and you have to put the time in, or it will just not be that and be okay with that. Like, be okay with it being a small, you know, small group of people that stick around and it doesn't really grow. And that's what's happened on the Slack side of things is I didn't care and feed it, so it has gotten pretty quiet over there as we've grown the Discord server. Because I kind of had to choose, you know? Because we—like you, I started with Slack long, long ago. It was the only thing out there. 
Discord was just for gamers.And in the last four or five years, I think Discord—I think during the pandemic, they officially said, “We are now more than gamers,” which I was kind of waiting for to really want to invest my company's—I mean, my company of three—you know, my company [laugh] time into a platform that I thought was maybe just for gamers; couldn't quite figure it out. And once they kind of officially said, “Yeah, we're for all communities,” we're more in, you know, and they have that—the thing I really appreciate like we had an IRC, but was mostly human-driven is that Discord, unlike Slack, has actual community controls that make it a safer place, a more inclusive place. And you can actually contact Discord when you have a spammer or someone doing bad things, or you have a server raid where there's a whole bunch of accounts and bot accounts trying to take you down, you can actually reach out to Discord, where Slack doesn't have any of that, they don't have a way for you to reach out. You can't block people or ban them or any of that stuff with Slack. And we—the luckily—the lucky thing of Dis—I kind of look at Discord as, like, the best new equivalent of IRC, even though for a lot of people IRC is still the thing, right? We have new clients now, you can actually have off—you could have sort of synced IRC, right, where you can have a web client that remembers you so you didn't lose the chat after you left, which was always the problem back in the day.Corey: Oh, yeah. I just parked it on, originally, a hardware box, now EC2. And this ran Irssi as my client—because I'm old school—inside of tmux and called it a life. But yeah, I still use that from time to time, but the conversation has moved on. One challenge I've had is that a lot of the people I talk to about billing nuances skew sometimes, obviously in the engineering direction, but also in the business user perspective and it always felt, on some level like it was easier to get business users onto Slack from a community perspective—Bret: Mmm. Absolutely. Yeah.Corey: —than it was for Discord. I mean, this thing started as well. This was years ago, before Discord had a lot of those controls. Might be time to take another bite at that apple.Bret: Yeah. Yeah, I definitely—and that, I think that's why I still keep the Slack open is there are some people, they will only go there, right? Like, they just don't want another thing. That totally makes sense. In fact, that's kind of what's happening to the internet now, right?We see the demise of Twitter or X, we see all these other new clients showing up, and what I've just seen in the dev community is we had this wonderful dev community on Twitter. For a moment. For a few years. It wasn't perfect by far, there was a lot people that still didn't want to use Twitter, but I felt like there was—if you wanted to be in the cloud-native community, that was very strong and you didn't always have to jump into Slack. And then you know, this billionaire came along and kind of ruined it, so people have fractured over to Mastodon and we've got some people have run Threads and some people on Bluesky, and now—and then some people like me that have stuck with Twitter.And I feel like I've lost a chunk of my friends because I don't want to spend my life on six different platforms. 
So, I am—I have found myself actually kind of sort of regressing to our Discord because it's the people I know, we're all talking about the same things, we all have a common interest, and rather than spending my time trying to find those people on the socials as much as I used to. So, I don't know, we'll see.Corey: Something that I have found, I'm curious to get your take on this, you've been doing this for roughly twice as long as I have, but what I've been having to teach myself is that I am not necessarily representative of the totality of the audience. And, aside from the obvious demographic areas, I learned best by reading or by building something myself—I don't generally listen to podcasts, which is a weird confession in this forum for me to wind up admitting to—and I don't basically watch videos at all. And it took me a while to realize that not everyone is like me; those are wildly popular forms of absorbing information. What I have noticed that the audience engages differently in different areas, whereas for this podcast, for the first six months, I didn't think that I'd remember to turn the microphone on. And that was okay; it was an experiment, and I enjoyed doing it. But then I went to a conference and wound up getting a whole bunch of feedback.Whereas for the newsletter, I had immediate responses to basically every issue when I sent it out. And I think the reason is, is because people are not sitting in front of a computer when they're listening to something and they're not going to be able to say, “Well, let me give you a piece of my mind,” in quite the same way. And by the time they remember later, it feels weird, like, calling into a radio show. But when you actually meet someone, “Yeah, I love your stuff.” And they'll talk about the episodes I've had out. But you can be forgiven for in some cases in the social media side of it for thinking that I'd forgotten to publish this thing.Bret: Yeah. I think that's actually a pretty common trait. There was a time where I was sort of into the science of learning and whatnot, and one of the things that came out of that was that the way we communicate or the way we learn and then the way—the input and the outputs are different per human. It's actually almost, like, comparable maybe to love languages, if you've read that book, where the way we give love and the way we receive love from others is—we prefer it in different ways and it's often not the same thing. And I think the same is true of learning and teaching, where my teaching style has always been visual.I think have almost always been in all my videos. My first course seven years ago, I was in it phy—like, I had my headshot in there and I just thought that that was a part of the best way you could make that content. And doesn't mean that I'm instantly better; it just means I wanted to communicate with my hands, maybe I got a little bit of Italian or French in me or something [laugh] where I'm moving my hands around a lot. So, I think that the medium is very specific to the person. And I meet people all the time that I find out, they didn't learn from me—they didn't learn about me, rather, from my course; they learned about me from a conference talk because they prefer to watch those or someone else learned about me from the podcast I run because they stumbled onto that.And it always surprises me because I always figure that since my biggest audience in my Udemy courses—over 300,000 people there—that that's how most of the people find me. 
And it turns out nowadays that when I meet people, a lot of times it's not. It's some other, you know, other venue. And now we have people showing up in the Discord server from the Discord Discovery. It's kind of a little feature in Discord that allows you to find servers that are on the topics you're interested in and were listed in there and people will find me that way and jump in not knowing that I have created courses, I have a weekly YouTube Live show, I have all the other things.And yeah, it's just it's kind of great, but also as a content creator, it's kind of exhausting because you—if you're interested in all these things, you can't possibly focus on all of them at the [laugh] same time. So, what is it the great Will Smith says? “Do two things and two things suffer.” [laugh]. And that's exactly what my life is like. It's like, I can't focus on one thing, so they all aren't as amazing as they could be, maybe, if I had only dedicated to one thing.Corey: No, I'm with you on that it's a saying yes to something means inherently saying no to something else. But for those of us whose interests are wide and varied, I find that there are always more things to do than I will ever be able to address. You have to pick and choose, on some level. I dabble with a lot of the stuff that I work on. I have given thought in the past towards putting out video courses or whatnot, but you've done that for ages and it just seems like it is so much front-loaded work, in many cases with things I'm not terrific at.And then, at least in my side of the world, oh, then AWS does another console refresh, as they tend to sporadically, and great, now I have to go back and redo all of the video shoots showing how to do it because now it's changed just enough to confuse people. And it feels like a treadmill you climb on top of and never get off.Bret: It can definitely feel like that. And I think it's also harder to edit existing courses like I'm doing now than it is to just make up something brand new and fresh. And there's something about… we love to teach, I think what we're learning in the moment. I think a lot of us, you get something exciting and you want to talk about it. And so, I think that's how a lot of people's conference talk ideas come up if you think about it.Like you're not usually talking about the thing that you were interested in a decade ago. You're talking about the thing you just learned, and you thought it was great, and you want everyone to know about it, which means you're going to make a YouTube video or a blog post or something about it, you'll share somewhere on social media about it. I think it's harder to make this—any of these content creation things, especially courses, a career if you come back to that course like I'm doing seven years after publication and you're continuing every year to update those videos. And you're thinking I—not that my interests have moved on, but my passion is in the new things. And I'm not making videos right now on new things.I'm fixing—like you're saying, like, I'm fixing the Docker Hub video because it has completely changed in seven years and it doesn't even look the same and all that. So, there's definitely—that's the work side of this business where you really have to put the time in and it may not always be fun. 
So, one of the things I'm learning from my business coach is like how to find ways to make some of this stuff fun again, and how to inject some joy into it without it feeling like it's just the churn of video after video after video, which, you know, you can fall into that trap with any of that stuff. So, yeah. That's what I'm doing this year is learning a little bit more about myself and what I like doing versus what I have to do and try to make some of it a little funner.Corey: This question might come across as passive-aggressive or back-handedly insulting and I swear to you it is not intended to, but how do you avoid what has been a persistent fear of mine and that is becoming a talking head? Whereas you've been doing this as a trainer for long enough that you haven't had a quote-unquote, “Real job,” in roughly, what, 15 years at this point?Bret: Yeah. Yeah.Corey: And so, you've never run Kubernetes in anger, which is, of course, was what we call production environment. That's right, I call it ‘Anger.' My staging environment is called ‘Theory' because it works in theory, but not in production. And there you have it. So, without being hands-on and running these things at scale, it feels like on some level, if I were to, for example, give up the consulting side of my business and just talk about the pure math that I see and what AWS is putting out there, I feel like I'd pretty quickly lose sight of what actual customer pain looks like.Bret: Yeah. That's a real fear, for sure. And that's why I'm kind of—I think I kind of do what you do and maybe wasn't… didn't try to mislead you, but I do consult on a fairly consistent basis and I took a break this year. I've only—you know, then what I'll do is I'll do some advisory work, I usually won't put hands on a cluster, I'm usually advising people on how to put the hands on that cluster kind of thing, or how to build accepting their PRs, doing stuff like that. That's what I've done in the last maybe three or four years.Because you're right. There's two things that are, right? Like, it's hard to stay relevant if you don't actually get your hands dirty, your content ends up I think this naturally becoming very… I don't know, one dimensional, maybe, or two dimensional, where it doesn't, you don't really talk about the best practices because you don't actually have the scars to prove it. And so, I'm always nervous about going long lengths, like, three or four years of time, with zero production work. So, I think I try to fill that with a little bit of advisory, maybe trying to find friends and actually trying to talk with them about their experiences, just so I can make sure I'm understanding what they're dealing with.I also think that that kind of work is what creates my stories. So like, my latest course, it's on GitHub Actions and Argo CD for using automation and GitOps for deployments, basically trying to simplify the deployment lifecycle so that you can just get back to worrying about your app and not about how it's deployed and how it's tested and all that. And that all came out of consulting I did for a couple of firms in 2019 and 2020, and I think right into 2021, that's kind of where I started winding them down. And that created the stories that caused me, you know, sort of the scars of going into production. We were migrating a COTS app into a SaaS app, so we were learning lots of things about their design and having to change infrastructure. And I had so many learnings from that.And one of them was I really liked GitHub Actions. 
And it worked well for them. And it was very flexible. And it wasn't as friendly and as GUI beautiful as some of the other CI solutions out there, but it was flexible enough and direct—close enough to the developer that it felt powerful in the developers' hands, whereas previous systems that we've all had, like Jenkins always felt like this black box that maybe one or two people knew.And those stories came out of the real advisory or consultancy that I did for those few years. And then I was like, “Okay, I've got stuff. I've learned it. I've done it in the field. I've got the scars. Let me go teach people about it.” And I'm probably going to have to do that again in a few years when I feel like I'm losing touch like you're saying there. That's a—yeah, so I agree. Same problem [laugh].Corey: Crap, I was hoping you had some magic silver bullet—Bret: No. [laugh].Corey: —other than, “No, it still gnaws at you forever and there's no real way to get away for”—great. But, uhh, it keeps things… interesting.Bret: I would love to say that I have that skill, that ability to, like, just talk with you about your customers and, like, transfer all that knowledge so that I can then talk about it, but I don't know. I don't know. It's tough.Corey: Yeah. The dangerous part there is suddenly you stop having lived experience and start just trusting whoever sounds the most confident, which of course, brings us to generative AI.Bret: Ohhh.Corey: Which apparently needs to be brought into every conversation as per, you know, analysts and Amazon leadership, apparently. What's your take on it?Bret: Yeah. Yeah. Well, I was earl—I mean, well maybe not early, early. Like, these people that are talking about being early were seven years ago, so definitely wasn't that early.Corey: Yeah. Back when the Hello World was a PhD from Stanford.Bret: Yeah [laugh], yeah. So, I was maybe—my first step in was on the tech side of things with Copilot when it was in beta a little over two years ago. We're talking about GitHub Copilot. That was I think my first one. I was not an OpenAI user for any of their solutions, and was not into the visual—you know, the image AI stuff as we all are now dabbling with.But when it comes to code and YAML and TOML and, you know, the stuff that I deal with every day, I didn't start into it until about two years ago. I think I actually live-streamed my first experiences with it with a friend of mine. And I was just using it for DevOps tasks at the time. It was an early beta, so I was like, kind of invited. And it was filling out YAML for me. It was creating Kubernetes YAML for me.And like we're all learning, you know, it hallucinates, as we say, which is lying. It made stuff up for 50% of the time. And it was—it is way better now. So, I think I actually wrote in my newsletter a couple weeks ago a recent story—or a recent experience because I wanted to take a project in a language that I had not previously written from scratch in but maybe I was just slightly familiar with. 
So, I picked Go because everything in cloud-native is written in Go and so I've been reading it for years and years and years and maybe making small PRs to various things, but never taken on myself to write it from scratch and just create something, start to finish, for myself.And so, I wanted a real project, not something that was contrived, and it came up that I wanted to create—in my specific scenario, I wanted to take a CSV of all of my students and then take a template certificate, you know, like these certificates of completion or certifications, you know, that you get, and it's a nice little—looks like the digital equivalent of a paper certificate that you would get from maybe a university. And I wanted to create that. So, I wanted to do it in bulk. I wanted to give it a stock image and then give it a list of names and then it would figure out the right place to put all those names and then generate a whole bunch of images that I could send out. And then I can maybe turn this into a web service someday.But I wanted to do this, and I knew, if I just wrote it myself, I'd be horrible at it, I would suck at Go, I'd probably have to watch some videos to remember some of the syntax. I don't know the standard libraries, so I'd have to figure out which libraries I needed and all that stuff. All the dependencies.Corey: You make the same typical newcomer mistakes of not understanding the local idioms and whatnot. Oh, yeah.Bret: Yeah. And so, I'd have to spend some time on Stack Overflow Googling around. I kind of guessed it was going to take me 20 to 40 hours to make. Like, and it was—we're talking really just hundreds of lines of code at the end of the day, but because Go standard library actually is really great, so it was going to be far less code than if I had to do it in NodeJS or something. Anyway, long story short there, it ended up taking three to three-and-a-half hours end to end, including everything I needed, you know, importing a CSV, sucking in a PNG, outputting PNG with all the names on them in the right places in the right font, the right colors, all that stuff.And I did it all through GitHub Copilot Chat, which is their newest Labs beta thing. And it brings the ChatGPT-4 experience into VS Code. I think it's right now only for VS Code, but other editors coming soon. And it was kind of wonderful. It remembered my project as a whole. It wasn't just in the file I was in. There was no copying-pasting back and forth between the web interface of ChatGPT like a lot of people tend to do today where they go into ChatGPT, they ask a question, then they copy out code and they paste it in their editor.Well, there was none of that because since that's built into the editor, it kind of flows naturally into your existing project. You can kind of just click a button and it'll automatically paste in where your cursor is. It does all this convenient stuff. And then it would relook at the code. I would ask it, you know, “What are ten ways to improve this code now that it works?” And you know, “How can I reduce the number of lines in this code?” Or, “How can I make it easier to read?”And I was doing all this stuff while I was creating the project. I haven't had anyone, like, look at it to tell me if it looks good [laugh], which I hear you had that experience. But it works, it solved my problem, and I did it in a half a day with no prep time. And it's all in ChatGPT's history. 
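[A sketch of the certificate generator Bret describes: his version was written in Go against the standard library; the Python/Pillow version below is an illustration only, with hypothetical file names, font path, and coordinates.]

```python
# Stamp each name from a CSV onto a template certificate image.
import csv
from PIL import Image, ImageDraw, ImageFont  # pip install pillow

font = ImageFont.truetype("DejaVuSans.ttf", 64)    # hypothetical font path
with open("students.csv", newline="") as f:
    for row in csv.DictReader(f):                  # expects a "name" column
        cert = Image.open("template.png").convert("RGB")
        draw = ImageDraw.Draw(cert)
        # anchor="mm" centers the text on the given point (Pillow 8+)
        draw.text((1000, 700), row["name"], font=font, fill="black", anchor="mm")
        cert.save(f"certificate-{row['name'].replace(' ', '-')}.png")
```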
So, when I open up VS Code now, I open that project up and get it, it recognizes that oh, this is the project that you've asked all these previous questions on, and it reloads all those questions, allowing me to basically start the conversation off again with my AI friend at the same place I left off.And I think that experience basically proved to me that what everybody else is telling us, right, that yes, this is definitely the future. I don't see myself ever writing code again without an AI partner. I don't know why I ever would write it without the AI partner at least to help me, quicken my learning, and solve some of the prompts. I mean, it was spitting out code that wasn't perfect. It would actually—[unintelligible 00:23:53] sometimes fail.And then I would tell it, “Here's the error you just caused. What do I do with that?” And it would help me walk through the solution, it would fix it, it would recommend changes. So, it's definitely not something that will avoid you knowing how to program or make someone who's not a programmer suddenly write a perfect program, but man, it really—I mean, it took basically what I would consider to be a novice in that language—not a novice at programming, but a novice at that language—and spit out a productive program in less than a day. So, that's huge, I think.[midroll 00:24:27]Corey: What I think is a necessary prerequisite is a domain expertise in order to figure out what is accurate versus what is completely wrong, but sounds competent. And I've been racing a bunch of the different large-language models against each other in a variety of things like this. One of the challenges I'll give them is to query the AWS pricing API—which motto is, “Not every war crime happens in faraway places”—and then spit out things like the Managed Nat Gateway hourly cost table, sorted from most to least expensive by region. And some things are great at it and other things really struggle with it. And the first time I, just on a lark, went down that path, it saved me an easy three hours from writing that thing by hand. It was effectively an API interface, whereas now the most common programming language I think we're going to see on the rise is English.Bret: Yeah, good point. I've heard some theories, right? Like maybe the output language doesn't matter. You just tell it, “Oh, don't do that in Java, do it in PHP.” Whatever, or, “Convert this Java to PHP,” something like that.I haven't experimented with a lot of that stuff yet, but I think that having spent this time watching a lot of other videos, right, you know, watching [Fireship 00:25:37], and a lot of other people talking about LLMs on the internet, seeing the happy-face stuff happen. And it's just, I don't know where we're going to be in five or ten years. I am definitely not a good prediction, like a futurist. And I'm trying to imagine what the daily experience is going to be, but my assumption is, every tool we're using is going to have some sort of chat AI assistant in it. I mean, this is kind of the future that, like, none of the movies predicted.[laugh]. We were talking about this the other day with a friend of mine. We were talking about it over dinner, some developer friends. And we were just talking about, like, this would be too boring for a movie, like, we all want the—you know, we think of the movies where there's the three laws of robotics and all these things. And these are in no way sentient.I'm not intimidated or scared by them. 
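[An aside on the pricing-API challenge Corey mentions above - a hedged boto3 sketch; the filter fields and the usagetype suffix are assumptions about the shape of the AmazonEC2 price list, which varies by service.]

```python
# List Managed NAT Gateway hourly prices per region, most expensive first.
import json
import boto3

pricing = boto3.client("pricing", region_name="us-east-1")  # pricing API endpoint
prices = {}
for page in pricing.get_paginator("get_products").paginate(
    ServiceCode="AmazonEC2",
    Filters=[{"Type": "TERM_MATCH", "Field": "productFamily", "Value": "NAT Gateway"}],
):
    for raw in page["PriceList"]:                 # entries arrive as JSON strings
        product = json.loads(raw)
        attrs = product["product"]["attributes"]
        if not attrs.get("usagetype", "").endswith("NatGateway-Hours"):
            continue                              # skip per-GB processing charges
        for term in product["terms"].get("OnDemand", {}).values():
            for dim in term["priceDimensions"].values():
                prices[attrs["location"]] = float(dim["pricePerUnit"]["USD"])

for region, usd in sorted(prices.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{region}: ${usd:.3f}/hour")
```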
I think the EU is definitely going to do the right thing here and we're going to have to follow suit eventually, where we rank where you can use AI and, like, there's these levels, and maybe just helping you with a program is a low-level, there's very few restrictions, in other words, by the government, but if you're talking about in cars or in medical or you know, in anything like that, that's the highest level and the highest restrictions and all that. I could definitely see that's the safety. Obviously, we'll probably do it too slow and too late and there'll be some bad uses in the meantime, but I think we're there. I mean, like, if you're not using it today—if you're listening to this, and you're not using AI yet in your day-to-day as someone related to the IT career, it's going to be everywhere and I don't think it's going to be, like, one tool. The tools on the CLI to me are kind of weird right now. Like, they certainly can help you write command lines, but it just doesn't flow right for me. I don't know if you've tried that.Corey: Yeah. I ha—I've dabbled lightly, but again, I've been a Unix admin for the better part of 20 years and I'm used to a world in which you type exactly what you mean or you suffer the consequences. So, having a robot trying to outguess me of what it thinks I'm trying to do, if it works correctly, it looks like a really smart tab complete. If it guesses wrong, it's incredibly frustrating. The risk/reward is not there in the same way.Bret: Right.Corey: So, for me at least, it's more frustration than anything. I've seen significant use cases across the business world where this would have been invaluable back when I was younger, where it's, “Great, here's a one-line email I'm about to send to someone, and people are going to call me brusque or difficult for it. Great. Turn this into a business email.” And then on the other side, like, “This is a five-paragraph email. What does he actually want?” It'll turn it back into one line. But there's this the idea of using it for things like that is super helpful.Bret: Yeah. Robots talking to robots? Is that what you're saying? Yeah.Corey: Well, partially, yes. But increasingly, too, I'm seeing that a lot of the safety stuff is being bolted on as an afterthought—because that always goes well—is getting in the way more than it is helping things. Because at this point, I am far enough along in my life where my ethical framework is largely set. I am not going to have radical changes in my worldview, no matter how much a robot [unintelligible 00:28:29] me.So, snark and sarcasm are my first languages and that is something that increasingly they're leery about, like, oh, sarcasm can hurt people's feelings. “Well, no kidding, professor, you don't say.” As John Scalzi says, “The failure mode of clever is ‘asshole.'” But I figured out how to walk that line, so don't you worry your pretty little robot head about that. Leave that to me. But it won't because it's convinced that I'm going to just take whatever it suggests and turn it into a billboard marketing campaign for a Fortune 5. There are several more approval steps in there.Bret: There. Yeah, yeah. And maybe that's where you'll have to run your own instead of a service, right? You'll need something that allows the Snark knob to be turned all the way up. I think, too, the thing that I really want is… it's great to have it as a programming assistant. 
It's great and notion to help me, you know, think out, you know, sort of whiteboard some things, right, or sketch stuff out in terms of, “Give me the top ten things to do with this,” and it's great for ideas and stuff like that.But what I really, really want is for it to remove a lot of the drudgery of day-to-day toil that we still haven't, in tech, figured out a way—for example, I'm going to need a new repo. I know what I need to go in it, I know which organization it needs to go in, I know what types of files need to go in there, and I know the general purpose of the repo. Even the skilled person is going to take at least 20 minutes or more to set all that up. And I would really just rather take an AI on my local computer and say, “I would like three new repos: a front-end back-end, and a Kubernetes YAML repo. And I would like this one to be Rust, and I would like this one to be NodeJS or whatever, and I would like this other repo to have all the pieces in Kubernetes. And I would like Docker files in each repo plus GitHub Actions for linting.”Like, I could just spill out, you know, all these things: the editor.config file, the Git ignore, the Docker ignore, think about, like, the dozen files that every repo has to have now. And I just want that generated by an AI that knows my own repos, knows my preferences, and it's more—because we all have, a lot of us that are really, really organized and I'm not one of those, we have maybe a template repo or we have templates that are created by a consolidated group of DevOps guild members or something in our organization that creates standards and reusable workflows and template files and template repos. And I think a lot of that's going to go—that boilerplate will sort of, if we get a smart enough LLM that's very user and organization-specific, I would love to be able to just tell Siri or whatever on my computer, “This is the thing I want to be created and it's boilerplate stuff.” And it then generates all that.And then I jump into my code creator or my notion drafting of words. And that's—like, I hop off from there. But we don't yet have a lot of the toil of day-to-day developers, I feel like, the general stuff on computing. We don't really have—maybe I don't think that's a general AI. I don't think we're… I don't think that needs to be like a general intelligence. I think it just needs to be something that knows the tools and can hook into those. Maybe it asks for my fingerprint on occasion, just for security sake [laugh] so it doesn't deploy all the things to AWS. But.Corey: Yeah. Like, I've been trying to be subversive with a lot of these things. Like, it's always fun to ask the challenging questions, like, “My boss has been complaining to me about my performance and I'm salty about it. Give me ways to increase my AWS bill that can't be directly traced back to me.” And it's like, oh, that's not how to resolve workplace differences.Like, okay. Good on, you found that at least, but cool, give me the dirt. I get asked in isolation of, “Yeah, how can I increase my AWS bill?” And its answer is, “There is no good reason to ever do that.” Mmm, there are exceptions on this and that's not really what I asked. 
It's, on some level, that tries to out-human you and gets it hilariously wrong.Bret: Yeah, there's definitely, I think—it wasn't me that said this, but in the state we're in right now, there is this dangerous point of using any of these LLMs where, if you're asking it questions and you don't know anything about that thing you're asking about, you don't know what's false, you don't know what's right, and you're going to get in trouble pretty quickly. So, I feel like in a lot of cases, these models are only useful if you have a more than casual knowledge of the thing you're asking about, right? Because, like, you can—like, you've probably tried to experiment. If you're asking about AWS stuff, I'm just going to imagine that it's going to make some of those service names up and it's going to create things that don't exist or that you can't do, and you're going to have to figure out what works and what doesn't.And what do you do, right? Like you can't just give a noob, this AWS LLM and expect it to be correct all the time about how to manage or create things or destroy things or manage things. So, maybe in five years. Maybe that will be the thing. You literally hire someone who has a computing degree out of a university somewhere and then they can suddenly manage AWS because the robot is correct 99.99% of the time. We're just—I keep getting told that that's years and years away and we don't know how to stop the hallucinations, so we're all stuck with it.Corey: That is the failure mode that is disappointing. We're never going to stuff that genie back in the bottle. Like, that is—technology does not work that way. So, now that it's here, we need to find a way to live with it. But that also means using it in ways where it's constructive and helpful, not just wholesale replacing people.What does worry me about a lot of the use it to build an app, when I wound up showing this to some of my engineering friends, their immediate response universally, was, “Well, yeah, that's great for, like, the easy, trivial stuff like querying a bad API, but for any of this other stuff, you still need senior engineers.” So, their defensiveness was the reaction, and I get that. But also, where do you think senior engineers come from? It's solving a bunch of stuff like this. You didn't all spring, fully formed, from the forehead of some God. Like, you started off as junior and working on small trivial problems, like this one, to build a skill set and realize what works well, what doesn't, then life goes on.Bret: Yeah. In a way—I mean, you and I have been around long enough that in a way, the LLMs don't really change anything in terms of who's hireable, how many people you need in your team, or what types of people you need your team. I feel like, just like the cloud allowed us to have less people to do roughly the same thing as we all did in own data centers, I feel like to a large extent, these AIs are just going to do the same thing. It's not fundamentally changing the game for most people to allow a university graduate to become a senior engineer overnight, or the fact that you don't need, you know, the idea that you don't maybe need senior engineers anymore and you can operate at AWS at scale, multi-region setup with some person with a year experience. I don't think any of those things are true in the near term.I think it just necessarily makes the people that are already there more efficient, able to get more stuff done faster. 
And we've been dealing with that for 30, 40, 50 years. That's exactly it; I have this slideshow that I keep, I've been using it for a decade, and it hasn't really changed. I got in in the mid-'90s, when we were changing from single large computers to distributed computing, when the PC took off. I mean, I was doing mainframes, you know, IBMs and HP Unixes. That's where I jumped in.

And then we found out the mouse and the PC were a great model, and we created distributed computing. That changed the game and allowed so many of us to get in that weren't mainframe experts and didn't know COBOL. And Microsoft made a great decision in saying, "We're going to make the server operating system look and act exactly like the client operating system." And then suddenly, all of us PC enthusiasts were now server admins. So, there was this big shift in the '90s; we got a huge number of server admins.

And then virtualization showed up, you know, five years later, and suddenly we were able to do so much more with the same number of people in a data center with a bunch of servers. I watched my team in a big government organization: we were running 18 people, and I had three hardware guys in the data center. That went to one in a matter of years, because we were able to virtualize so much that we needed physical servers less often and needed fewer physical data center server admins, and we needed more people to run the software. So, we shifted that team down, and then we scaled up software development and the people that knew more about actually managing and running software.

So, I feel like these shifts keep happening; then we had the cloud, and then we had containerization. It doesn't really change things at a vast scale. And I think sometimes people are a little bit too worried about the LLMs, as if they're somehow going to make tech workers obsolete. And I just think, no, we're just going to be managing different things. Someone else said the great quote, and I'll end with this: "It's not the LLM that's going to replace you. It's the person who knows the LLMs that's going to replace you."

And that's the same thing you could have said ten years ago: "It's not the cloud that's going to replace you. It's someone who knows how to manage the cloud that's going to replace you." [laugh]. So, you could swap that word out for—

Corey: A line I heard, must have been 30 years ago now, is, "Think. It's the only thing keeping a computer from taking your job."

Bret: Yeah [laugh], and these things don't think. We haven't figured that one out yet.

Corey: Yeah. Some would say that some people's coworkers don't either, but that's just uncharitable.

Bret: That's me without coffee [laugh].

Corey: [laugh]. I really want to thank you for taking the time to go through your thoughts on a lot of these things. If people want to learn more, where's the best place for them to find you?

Bret: bretfisher.com, or just search Bret Fisher. You'll find all my stuff, hopefully, if I know how to use the internet: B-R-E-T F-I-S-H-E-R. And yeah, you'll find a YouTube channel; on Twitter, I hang out there every day; and on my website.

Corey: And we will, of course, put links to that in the [show notes 00:38:22]. Thank you so much for taking the time to speak with me today. I really appreciate it.

Bret: Yeah. Thanks, Corey. See you soon.

Corey: Bret Fisher, DevOps dude and cloud-native trainer. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud.
If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment that you have a Chat-Gippity thing write for you, where, just like you, it sounds very confident, but it's also completely wrong.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
Tom Preston-Werner is a renowned software developer, inventor and entrepreneur. He co-founded GitHub and is the creator of the avatar service Gravatar, the TOML configuration file format, and the static site generator software Jekyll. Tom is currently working on the full-stack web framework RedwoodJS. He joins us today to tell us the latest about RedwoodJS. The post The Latest on RedwoodJS with Tom Preston-Werner appeared first on Software Engineering Daily.
At 33, recovery found Tom obese, isolated, and dependent on welfare and disability payments. Deeply unhappy and ashamed of what he had become, he had no idea he was an addict. Fifteen years later, healthy, employed, married, and living free from life-threatening addiction, he was shown that normal living was not enough to keep him from returning to insanity, and that for him the spiritual life cannot be just a theory. Tom will speak to us today on how he is No Longer Running the Show.

Reco12 is an organization with the mission of learning and sharing the similarities of addiction of all kinds. We come together from all places, faiths and backgrounds to gain tools and hope from others who are walking a similar path.

Reco12 appreciates your help in keeping us working our 12th Step with these great resources and services for the addict and loved ones. We gratefully accept contributions to help cover the costs of the Zoom platform, podcast platform, web hosting, and administrative costs. To become a Reco12 Spearhead you can quickly and easily become a monthly donor here: https://www.reco12.com/support or you can do one-time donations through PayPal (https://www.paypal.me/reco12) or Venmo: @Reco-Twelve. Thanks for your support!

To find Reco12 Shares on virtually any podcast platform and follow and listen, go here: https://reco12shares.buzzsprout.com/share To record a Reco12 Shares … share or prayer, please link to https://www.speakpipe.com/reco12shares and leave a share or your favorite recovery prayer.

Resources from this meeting:
Overeaters Anonymous
Sexaholics Anonymous
Big Book of AA

Outro music is "Just Can't Do This On My Own," written by James Carrington, Thomas Barkmeijer and Paul Freeman, performed by James Carrington, and used with full permission of James Carrington. To learn more about this music and performer, please visit https://www.jamescarrington.net/ and https://m.facebook.com/jamescarringtonmusic.

Support the show:
Private Facebook Group
Instagram Page
Become a Reco12 Spearhead (Monthly Supporter)
Patreon
PayPal
Venmo: @Reco-Twelve
YouTube Channel
Reco12 Website
Email: reco12pod@gmail.com to join WhatsApp Group
Thank you to this week's sponsor, Koyeb!
Go 1.20.5 & 1.19.10 coming any moment now
Proposals
Classifiers are one bit of Python project metadata that predates PyPI. Classifiers are weird. They were around in setuptools days, and are still here with pyproject.toml. What are they? Why do we need them? Do we need them? Which classifiers should I include? Why are they called "trove classifiers" in the Python docs (https://pypi.org/classifiers/)? Brett Cannon joins the show to discuss these wacky bits of metadata. Here's an example, from pytest-crayons (https://github.com/okken/pytest-crayons/blob/main/pyproject.toml):

    [project]
    ...
    classifiers = [
        "License :: OSI Approved :: MIT License",
        "Framework :: Pytest"
    ]

Special Guest: Brett Cannon.
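If you want to check that the classifiers in your own pyproject.toml are strings PyPI will actually accept, here is a minimal sketch. It assumes Python 3.11+ (for the stdlib tomllib) and the third-party trove-classifiers package, which publishes the canonical list as a set; verify the exact import against that project's README.

    # Minimal sketch: flag classifiers that are not in the canonical list.
    # Assumes Python 3.11+ (tomllib) and `pip install trove-classifiers`.
    import tomllib
    from trove_classifiers import classifiers as valid_classifiers

    with open("pyproject.toml", "rb") as f:
        metadata = tomllib.load(f)

    for classifier in metadata.get("project", {}).get("classifiers", []):
        if classifier not in valid_classifiers:
            print(f"Unknown classifier: {classifier!r}")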
Watch on YouTube

About the show

Sponsored by Compiler Podcast from Red Hat.

Connect with the hosts
Michael: @mkennedy@fosstodon.org
Brian: @brianokken@fosstodon.org
Show: @pythonbytes@fosstodon.org
Special guest, Erin Mullaney: @erinrachel@fosstodon.org

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Tuesdays at 11am PT. Older video versions available there too.

Brian #1: Use TOML for .env files? (Brett Cannon)
.env files are used to store default settings that can be overridden by environmental variables, a practice possibly brought on by twelve-factor app design. They are supported by python-dotenv, which is also used by pydantic, pipenv, and others. One issue is that the format is not a defined standard; from the python-dotenv docs: "The format is not formally specified and still improves over time. That being said, .env files should mostly look like Bash files." Adafruit decided that an upcoming CircuitPython will use TOML as the format for settings.toml files, which are to be used mostly how .env files are being used. Brett notices this may fix things for Python for VS Code, and for other people as well. So… is this a good idea? I think so. (A small illustrative sketch follows these notes.)

Michael #2: Pydantic gets serious funding (via Mark Little, who was on episode 285)
Sequoia backs open source data-validation framework Pydantic to commercialize with cloud services. Pydantic Services Inc. emerges from stealth today with $4.7 million in seed funding. Pydantic's new commercial entity will incorporate a swath of new tools and services that are both "powered-by and inspired-by the Pydantic library". Pydantic will start with an initial team of six, with the first three engineers based in Montana, Chicago and Berlin. "With $4.7 million in the bank, Colvin said that they're continuing to rewrite parts of Pydantic in Rust, with a view toward making it more efficient via a ten-fold performance improvement."

Erin #3: JSON Fields for performance (Denormalization) (David Stokes)
Using JSON fields when you design your databases is a good way to improve database query performance.

Brian #4: f-strings with pandas and Jupyter keyboard shortcuts (Kevin Markham)
After a couple-year break from blogging, friend of the show Kevin Markham has a couple of great, short, useful posts: "How to use Python's f-strings with pandas" (my favorite bit is the part about using f-strings for dictionary keys) and "Fly through Jupyter with keyboard shortcuts".
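Regarding Brian's first item, here is a minimal sketch of what TOML-as-.env could look like in practice. It assumes Python 3.11+ for the stdlib tomllib; the settings.toml file name follows the CircuitPython plan, but the environment-variables-win precedence rule is an illustrative assumption, not a defined standard.

    # Read defaults from settings.toml, let real environment variables win.
    # Assumes Python 3.11+ (tomllib) and a settings.toml file alongside.
    import os
    import tomllib

    def load_settings(path: str = "settings.toml") -> dict:
        with open(path, "rb") as f:
            defaults = tomllib.load(f)
        # Only flat key/value pairs are handled here, like a classic .env file.
        return {key: os.environ.get(key, value) for key, value in defaults.items()}

    settings = load_settings()
    print(settings)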
Alice Girard Guittard finds out how much she could really get out of a $4 VPS, Brett Cannon wonders if using TOML for .env files is a good idea, Nic Mulvaney details how they count unique visitors to a website without using cookies, UIDs, or fingerprinting, after a few months Chris Coyier is still using the Arc browser & Alex Kladov pens a love letter to Deno.
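For context on the Nic Mulvaney item: the broad technique privacy-focused analytics tools describe is hashing a daily-rotating salt together with the visitor's IP and user agent, so the resulting identifier cannot be linked across days. The sketch below illustrates that general idea only; it is not Mulvaney's exact implementation, and the field names and salt handling are assumptions.

    # Cookieless unique-visitor counting, in the general shape described by
    # privacy-focused analytics tools. Illustrative sketch, not a real product.
    import hashlib
    import secrets
    from datetime import date

    _daily_salts: dict[date, bytes] = {}

    def daily_salt() -> bytes:
        # A fresh salt each day; yesterday's salt is discarded, so visitor
        # identifiers cannot be correlated from one day to the next.
        today = date.today()
        if today not in _daily_salts:
            _daily_salts.clear()
            _daily_salts[today] = secrets.token_bytes(16)
        return _daily_salts[today]

    def visitor_id(ip: str, user_agent: str, site: str) -> str:
        payload = daily_salt() + f"{site}|{ip}|{user_agent}".encode()
        return hashlib.sha256(payload).hexdigest()

    print(visitor_id("203.0.113.7", "Mozilla/5.0", "example.com"))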
Watch on YouTube

About the show

Sponsored by Microsoft for Startups Founders Hub.

Connect with the hosts
Michael: @mkennedy@fosstodon.org
Brian: @brianokken@fosstodon.org

Michael #1: Jupyter Server 2.0 is released!
Jupyter Server provides the core web server that powers JupyterLab and Jupyter Notebook.
New Identity API: as Jupyter continues to innovate its real-time collaboration experience, identity is an important component.
New Authorization API: enabling collaboration on a notebook shouldn't mean "allow everyone with access to my Jupyter Server to edit my notebooks". What if I want to share my notebook with, e.g., a subset of my teammates?
New Event System API: jupyter_events, a package that provides a JSON-schema-based event-driven system to Jupyter Server and server extensions.
Terminals Service is now a Server Extension: Jupyter Server now ships the "Terminals Service" as an extension (installed and enabled by default) rather than a core Jupyter Service.
pytest-jupyter: a pytest plugin for Jupyter.

Brian #2: Converting to pyproject.toml
Last week, episode 314, we talked about "Tools for rewriting Python code" and I mentioned a desire to convert setup.py/setup.cfg to pyproject.toml. Several of you, including Christian Clauss and Brian Skinn, let me know about a few tools to help in that area. Thank you.

ini2toml - Automatically translates .ini/.cfg files into TOML. "… can also be used to convert any compatible .ini/.cfg file to TOML." "ini2toml comes in two flavours: 'lite' and 'full'. The 'lite' flavour will create a TOML document that does not contain any of the comments from the original .ini/.cfg file. On the other hand, the 'full' flavour will make an extra effort to translate these comments into a TOML-equivalent (please notice sometimes this translation is not perfect, so it is always good to check the TOML document afterwards)."

pyproject-fmt - Apply a consistent format to pyproject.toml files. Having a consistent ordering and such is actually quite nice. I agreed with most changes when I tried it, except one. The faulty change: it modified the name of my project. Not cool. pytest plugins are traditionally named pytest-something; the tool replaced the - with _. Wrong. So, be careful with adding this to your tool chain if you have intentional dashes in the name. Otherwise, and still, a cool project.

validate-pyproject - Automated checks on pyproject.toml powered by JSON Schema definitions. It's a bit terse with errors, but still useful:

    $ validate-pyproject pyproject.toml
    Invalid file: pyproject.toml
    [ERROR] `project.authors[{data__authors_x}]` must be object

    $ validate-pyproject pyproject.toml
    Invalid file: pyproject.toml
    [ERROR] Invalid value (at line 3, column 12)

I'd probably add tox. Don't forget to build and test your project after making changes to pyproject.toml; you'll catch things like missing dependencies that the other tools will miss.
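One more low-tech check after a conversion, complementing validate-pyproject rather than replacing it: confirm the file still parses as TOML and that the minimal [project] keys survived. A rough sketch, assuming Python 3.11+ for the stdlib tomllib:

    # Rough post-conversion sanity check; not a substitute for
    # validate-pyproject or for actually building and testing the project.
    import tomllib

    with open("pyproject.toml", "rb") as f:
        data = tomllib.load(f)  # raises tomllib.TOMLDecodeError on bad TOML

    project = data.get("project", {})
    dynamic = project.get("dynamic", [])
    for key in ("name", "version"):
        # "version" may legitimately be absent if declared in dynamic = [...].
        if key not in project and key not in dynamic:
            print(f"missing [project] key: {key}")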
Michael #3: aws-lambda-powertools-python (via Mark Pender)
A suite of utilities for AWS Lambda functions that makes distributed tracing, structured logging, custom metrics, idempotency, and many leading practices easier. Looks kinda cool if you prefer to work almost entirely in Python and avoid using any third-party tools like Terraform to manage the support functions of deploying, monitoring, and debugging Lambda functions. (A minimal usage sketch follows these notes.)
Tracing: decorators and utilities to trace Lambda function handlers, and both synchronous and asynchronous functions
Logging: structured logging made easier, and a decorator to enrich structured logging with key Lambda context details
Metrics: custom metrics created asynchronously via CloudWatch Embedded Metric Format (EMF)
Event handler: AppSync - AWS AppSync event handler for Lambda Direct Resolver and Amplify GraphQL Transformer function
Event handler: API Gateway and ALB - Amazon API Gateway REST/HTTP API and ALB event handler for Lambda functions invoked using Proxy integration
Bring your own middleware: decorator factory to create your own middleware to run logic before and after each Lambda invocation
Parameters utility: retrieve and cache parameter values from Parameter Store, Secrets Manager, or DynamoDB
Batch processing: handle partial failures for AWS SQS batch processing
Typing: static typing classes to speed up development in your IDE
Validation: JSON Schema validator for inbound events and responses
Event source data classes: data classes describing the schema of common Lambda event triggers
Parser: data parsing and deep validation using Pydantic
Idempotency: convert your Lambda functions into idempotent operations which are safe to retry
Feature Flags: a simple rule engine to evaluate when one or multiple features should be enabled depending on the input
Streaming: streams datasets larger than the available memory as streaming data

Brian #4: How to create a self-updating GitHub README (Bob Belderbos)
Bob's GitHub profile is nice. It includes latest Pybites articles, latest Python tips, and even latest Fosstodon toots. And he includes a link to an article on how he did this: a Python script that pulls together all of the content, build-readme.py, and fills in a TEMPLATE.md markdown file. It gets called through a GitHub Action workflow, update.yml, which automatically commits changes, currently daily at 8:45. This happens every day, and it looks like there are only commits when the content has actually changed. Note: we covered Simon Willison's notes on a self-updating README on episode 192 in 2020.

Extras

Brian:
GitHub can check your repos for leaked secrets.
Julia Evans has released a new zine, The Pocket Guide to Debugging.
Python Easter Eggs. Includes this fun one from 2009 from Barry Warsaw and Brett Cannon:

    >>> from __future__ import barry_as_FLUFL
    >>> 1 <> 2
    True
    >>> 1 != 2
      File "<stdin>", line 1
        1 != 2
           ^
    SyntaxError: invalid syntax

Crontab Guru

Michael:
Canary Email AI
3.11 delivers
First chance to try "iPad as the sole travel device." Here's a report. Follow-up from 306 and 309 discussions.
Maps be free
New laptop design

Joke: What are clouds made of?
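As referenced above, a minimal handler sketch using aws-lambda-powertools' three core utilities. The service name, namespace, and metric name here are invented for illustration; the decorators and classes come from the library's documented public API, but double-check against the current docs, since the package evolves quickly.

    # Minimal aws-lambda-powertools handler sketch (illustrative names).
    # Assumes: `pip install aws-lambda-powertools`, deployed as a Lambda handler.
    from aws_lambda_powertools import Logger, Metrics, Tracer
    from aws_lambda_powertools.metrics import MetricUnit

    logger = Logger(service="orders")                        # structured JSON logs
    tracer = Tracer(service="orders")                        # X-Ray tracing
    metrics = Metrics(namespace="MyApp", service="orders")   # EMF metrics

    @logger.inject_lambda_context       # adds request ID, cold start flag, etc.
    @tracer.capture_lambda_handler      # traces the whole invocation
    @metrics.log_metrics                # flushes buffered metrics on return
    def handler(event, context):
        logger.info("processing order")
        metrics.add_metric(name="OrdersProcessed", unit=MetricUnit.Count, value=1)
        return {"statusCode": 200}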
How do you start packaging your code with pyproject.toml? Would you like to join a conversation that gently walks you through setting up your Python projects to share? This week on the show, Christopher Trudeau is here, bringing another batch of PyCoder's Weekly articles and projects.
Canary Cry News Talk #566 - 12.02.2022 - Recorded Live to Tape
YELON MOCK | Shillzilla Fired, Rael Cloning Jesus, Neuralink Head, Quantum Wormhole | A Podcast that Deconstructs Mainstream Media News from a Biblical Worldview

Harvard: Index of MSM Ownership (Harvard.edu)
Logos Bible: Aliens Demons Doc (feat. Dr. Mike Heiser, Unseen Realm)

Executive Producer: Sir Darrin Knight of the Hungry Panda's

Producers: Felicia D, Tom L, Shelby C, Lorie G, Marc S, The Sentinel, William D, Lauralee V, Brock H, Ashly S, Russell C, Yvon V, John G, Dame Gail Canary Whisperer and Lady of X's and O's, Sir Morv Knight of the Burning Chariots, Runksmash, DrWhoDunDat, Sir LX Protocol V2 Knight of the Berrean Protocol, Stephen C, Cheryl J, Sir Casey the Shield Knight, Veronica D, Sir Scott Knight of Truth

Visual Art: Sir Dove Knight of Rusbeltia, Sir Sammons Knight of the Fishes, Sir Darrin Knight of the Hungry Panda's

Microfiction
Runksmash - The robot lizard twitches to life and haltingly climbs the gray figure's robe; using their arms as a bridge it heads left and enters the black clad figure's hood; a scream interrupts the chant. He recovers, one eye glowing red, and utters, "VR Boom."
The Sentinel - "Comrade! What did you see?" The Barbarian wipes the perspiration from his brow, "I saw… I saw… Baron Bezos." They all gasp. "And I think he saw me too." Chills run down the three warriors' spines.

CLIP PRODUCER: Emsworth, FaeLivrin
TIMESTAMPERS: Jackie U, Jade Bouncerson, Christine C, Pocojoyo, Joelle S
SOCIAL MEDIA DOERS: Dame MissG of the OV and Deep Rivers
LINKS HELP JAM
REMINDERS: Clankoniphius

SHOW NOTES
HELLO, RUN DOWN 6:39 V / 4:04 P

SHILLZILLA 8:57 V / 6:22 P
Chris Cillizza HEADLINES: massive round of CNN layoffs (DailyBeast)

DAY JINGLE/PERSONAL/EXEC. 24:26 V / 21:51 P

FLIPPY 32:15 V / 29:40 P
Sarcos Defense to test robotic arm for US Army in $1M deal (Defense News)

YE/ELON/BEAST/AFROFUTURE 42:08 V / 39:33 P
Clip 1: "There's a lot of things I loooove about Hitler"
Clip 2: "…but they (Nazis) did some good things"
Clip 3: Ye… making Alex Jones uncomfortable…
Clip 4: Ye reading prayer
Clip 5: Nick Fuentes troll meme politics
Yecosystem (Hypebeast)
Kanye thinks he is king (Tweet) → Matthew 24:7, Daniel 7:24 and Rev. 17:12
'I like Hitler': Balaclava-clad Kanye West has 'full mental health breakdown' on Alex Jones podcast alongside white supremacist Nick Fuentes as he spouts MORE anti-Semitic bile - and encourages people to visit R. Kelly and Weinstein in prison (DailyMail)
→ Elon bans Ye from Twitter, Screenshots
→ Ye's deal to buy conservative social media app Parler is called off (CNBC)
→ Kanye West's Deal to Buy Parler Has Been Terminated (Variety)
Gonz tweet about Ye Raelian symbol, cloning Jesus (Twitter)

ELON 1:24:44 V / 1:22:09 P
Clip: Elon discussing Neuralink analogy, declares he will be demo one day
Elon Musk says Neuralink ready for humans in 6 months (WaPo) (Elon FDA Tweet)

PARTY TIME 1:35:45 V
http://CANARYCRY.PARTY

BREAK 1: TREASURE 1:37:35 V
https://CanaryCryRadio.com/Support

COVID/WACCINE 1:52:44 V
Anti-vaxxer nurse who injected up to 8,600 elderly patients with saltwater instead of Covid vaccine walks FREE from court in Germany (DailyMail)

FBI 1:59:15? V
A Criminal Ratted Out His Friend to the FBI. Now He's Trying to Make Amends.
(Intercept)

BREAK 3: TALENT 2:19:14 V

GATES OF GODS 2:37:00 V
Wormhole is created inside a quantum computer that 'teleported' a message from one side to the other - and this could help scientists observe the theorized passages in real space (DailyMail)
→ Objects We Thought Were Black Holes May Actually Be Wormholes, Scientists Say (Futurism)

BREAK 4: TIME END

ADDITIONAL STORIES
Banks Are Devising Ways to ID Mass Shooters Before They Strike (Bloomberg) (Archive)
Xi Jinping's plans to build Chinese super embassy near Tower of London are THROWN OUT by Tower Hamlets council (DailyMail)
Florida pulls $2 bln from BlackRock in largest anti-ESG divestment (Reuters) (Archive)
UN to mark 'Nakba Day' - Israel's establishment as catastrophe (J Post)
Jon and Ben discuss the highlights of the 1.62, 1.63, and 1.64 releases of Rust.

Contributing to Rustacean Station
Rustacean Station is a community project; get in touch with us if you'd like to suggest an idea for an episode or offer your services as a host or audio editor!
Twitter: @rustaceanfm
Discord: Rustacean Station
Github: @rustacean-station
Email: hello@rustacean-station.org

Timestamps & referenced resources
[@00:50] - Rust 1.62
[@00:58] - cargo add: Maintaining sorting in TOML files, toml_edit, cargo-edit
[@03:37] - #[default] enum variants: Generated bounds part of RFC, Macro helper attributes, Extra bounds on derive
[@07:36] - Thinner, faster mutexes on Linux: Tracking issue, Short thread on the change from Mara, More details from Mara on pthread mutexes
[@13:21] - Bare-metal x86_64 target: Target triples, Tier 2 target policy, Tier 2 targets, x86_64-unknown-none, Custom target triples
[@22:20] - Stabilized APIs: f64::total_cmp, Implementing PR, Stdin::lines, FusedIterator
[@29:22] - Changelog deep-dive: cargo -F for --features, unaligned_references lint now warns by default, addr_of!
[@31:09] - Rust 1.62.1: Not much to talk about. We also didn't talk about: Rustup 1.25.0, Rustup 1.25.1
[@31:56] - Rust 1.63
[@31:56] - Scoped threads: The Leakpocalypse issue, Pre-Pooping Your Pants With Rust
[@40:41] - Rust ownership for raw file descriptors: Rust I/O Safety RFC
[@43:45] - const mutex initialization
[@43:54] - Turbofish and impl Trait arguments: Search/replace generics reference, Rust reference for turbofish
[@52:03] - Non-lexical lifetimes migration complete: NLL stabilization and borrowck's future, polonius
[@51:33] - Stabilized APIs: array::from_fn, Box::into_pin, Things Rust-in-Linux needs from Rust
[@56:27] - Changelog deep-dive: cargo --config, cargo new test code updated, New targets: Apple WatchOS and Nintendo 3DS, The Join trait
[@1:00:24] - Rust 1.64
[@1:00:32] - IntoFuture: Reference in original async/await RFC, Original IntoFuture regression
[@1:03:43] - C-compatible FFI types in core: libc crate, libcpocalypse
[@1:09:37] - rust-analyzer component in rustup: rust-analyzer proxy binary added to rustup
[@1:13:19] - Cargo workspace inheritance and multi-target builds: Inheriting attributes from the workspace
[@1:00:24] - Stabilized APIs: Stabilization PR for ready!
[@1:18:03] - Compatibility notes: Increasing the glibc and Linux kernel requirements, RLS deprecation
[@1:22:33] - Other changes: Profile-Guided Optimization PR landing, lint for unused tuple fields
[@1:25:12] - Changelog deep-dive: [build.jobs], Implementing PR for negative values, New target: Nintendo Switch, Improve derive(Debug), Other internal changes, Optimizing Vec::insert

Credits
Intro Theme: Aerocity
Audio Editing: Aerocity
Hosting Infrastructure: Jon Gjengset
Show Notes: Jon Gjengset
Hosts: Jon Gjengset and Ben Striegel
Doc Mitchell was a GA, ST, and SOST EM doc who performed the first prehospital combat REBOA. In this episode he discusses a key article reviewing the pathophysiology of the lethal triad, and now the "diamond of death". Hypocalcemia is implicated in reduced myocardial contractility, loss of vascular tone, and several impacts on the clotting cascade. Doc put this together for the medics and covers down on calcium chloride vs calcium gluconate. TOML
COL Miles discusses the background of one of the leading-edge SOF resuscitation meetings, new thoughts on needle decompression and tension pneumothorax, and TXA. TOML meets RLTW
Our great friend COL Miles, USA, prior Ranger Regimental Surgeon, tells us what the THOR meeting is, the contributions of our buddy Geir from Norway vis-a-vis pushing whole blood for SOF, and a thoughtful discussion of highlights from the meeting, including a provocative discussion about the place of needle decompression. TOML meets RLTW
Watch the live stream: Watch on YouTube

About the show

Sponsored by Microsoft for Startups Founders Hub.

Michael #1: Specialist: Python 3.11 perf highlighter (via Alex Waygood)
Visualize CPython 3.11's specializing, adaptive interpreter.
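For a sense of what Specialist visualizes, here is a tiny script with a hot loop that CPython 3.11's adaptive interpreter will specialize; pointing Specialist at it produces a color-coded view of which instructions specialized. The invocation in the comment is an assumption based on the project being a normal PyPI package; verify it against Specialist's README.

    # Tiny script for Specialist to visualize. After `pip install specialist`,
    # run it through Specialist (e.g. `python -m specialist hot_loop.py`;
    # file name and invocation are assumptions, check the README) to see
    # which bytecodes the 3.11 adaptive interpreter specialized.
    def total(values: list[float]) -> float:
        acc = 0.0
        for v in values:
            acc += v  # the BINARY_OP here should specialize for floats
        return acc

    print(total([0.1] * 1_000_000))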