Podcasts about jvm

Virtual machine

  • 296 podcasts
  • 966 episodes
  • 1h 4m average episode duration
  • 1 weekly episode
  • Latest episode: May 16, 2025
Popularity trend: 2017-2024


Latest podcast episodes about jvm

Inside Java
“Ahead of Time Computation” with Dan Heidinga

May 16, 2025 · 23:22


OpenJDK's Project Leyden aims to improve the startup and warmup time of Java applications, for now by shifting computation from those phases to the application's build time. Java 24 ships with ahead-of-time class loading and linking, the first step in that direction. In this episode, we learn about that work, about Leyden's approach to reaching its goals, and about features available in its early-access build (plus some that aren't yet). Nicolai Parlog discusses all of this with Dan Heidinga, JVM Runtime Architect at Oracle and, among other things, a member of Projects Leyden and Valhalla.
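
For listeners who want to try the Java 24 feature discussed here, the JEP 483 workflow is roughly the three java invocations below. The jar and main-class names are made-up placeholders; only the -XX:AOT* options come from the JEP, so treat this as a sketch rather than a recipe.

    # Training run: record the classes the application loads and links
    java -XX:AOTMode=record -XX:AOTConfiguration=app.aotconf -cp app.jar com.example.Main

    # Assemble the ahead-of-time cache from the recorded configuration
    java -XX:AOTMode=create -XX:AOTConfiguration=app.aotconf -XX:AOTCache=app.aot -cp app.jar

    # Subsequent runs: start with the cache for faster startup and warmup
    java -XX:AOTCache=app.aot -cp app.jar com.example.Main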

Les Cast Codeurs Podcast
LCC 325 - Trier le hachis des concurrents

May 9, 2025 · 109:42


A big episode covering a wide range of topics: Java, Scala, Micronaut, NodeJS, AI and developer skills, sampling in LLMs, DTOs, vibe coding, the changes at Broadcom and Red Hat, and several news items about open source licenses. Recorded on May 7, 2025. Download the episode LesCastCodeurs-Episode-325.mp3 or watch it on YouTube.

News - Languages

For JavaOne and the launch of Java 24, Oracle launched a new site with video resources for learning the language: https://learn.java/ The site mostly targets beginners and teachers; it also covers the syntax, including recent additions such as records and pattern matching. It's not the trendiest site in the world.

Martin Odersky shares a long article on the state of the Scala ecosystem and the evolution of the language: https://www.scala-lang.org/blog/2025/03/24/evolving-scala.html Stability and the need to evolve: Scala holds its position (roughly 14th worldwide) on solid technical foundations, but must evolve to stay relevant against the competition. Priorities: improving the safety/usability pairing, polishing the language (removing the "rough edges"), and simplifying it for beginners. Continuous innovation: freezing features is ruled out, as innovation is key to Scala's value; the language must remain general-purpose and not tie itself to a specific framework. Challenges and progress: tooling (IDEs, build tools such as sbt, scala-cli, Mill) and the learnability of the ecosystem are the main points of attention, with improvements under way (a teaching partnership, simple platforms).

Strings get even faster! https://inside.java/2025/05/01/strings-just-got-faster/ In JDK 25, the performance of String::hashCode has been improved so that it is mostly constant-foldable. That means that if strings are used as keys of a static, immutable Map, significant performance gains are likely. The improvement relies on the JDK-internal @Stable annotation applied to the private String.hash field: it lets the virtual machine read the hash once and treat it as a constant if it is not the default value (zero). String::hashCode can then be replaced by the known hash value, which optimizes lookups in immutable Maps. One edge case is a string whose hash code is zero, for which the optimization does not work (for example the empty string ""). Although @Stable is internal to the JDK, a new JEP (JEP 502: Stable Values (Preview)) is being developed to let users benefit indirectly from similar functionality. (A minimal Java illustration follows right after this Languages section.)

AtomicHash, a Java implementation of a HashMap that is thread-safe, atomic and non-blocking: https://github.com/arxila/atomichash It is implemented as an immutable version of a concurrent hash trie.
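
To make the constant-folding item above concrete, here is a minimal, self-contained Java sketch; the class name and map contents are invented for the example, and the speed-up itself is the JDK 25 behavior described in the linked article, not something this code can demonstrate on older JDKs.

    import java.util.Map;

    public class StatusCodes {
        // Static, immutable Map with String keys: the pattern that benefits
        // from String::hashCode becoming constant-foldable in JDK 25.
        private static final Map<String, Integer> CODES = Map.of(
                "OK", 200,
                "NOT_FOUND", 404,
                "SERVER_ERROR", 500);

        public static void main(String[] args) {
            // With a constant key, the hash can be folded to a constant, so the
            // lookup becomes much cheaper once the JIT has compiled this code.
            System.out.println(CODES.get("NOT_FOUND")); // 404
            // Edge case from the article: "" hashes to 0 and is not optimized.
            System.out.println(CODES.get(""));          // null
        }
    }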
Libraries

Micronaut 4.8.0 has been released: https://micronaut.io/2025/04/01/micronaut-framework-4-8-0-released/ BOM update: version 4.8.0 updates the Micronaut platform BOM (Bill of Materials). Micronaut Core improvements: Micronaut SourceGen is now used internally to generate metadata and bytecode expressions, with many improvements in SourceGen itself; dependency-injection tracing was added to ease debugging at startup and during bean creation; a new definitionType member in the @Client annotation makes it easier to share interfaces between client and server; Bean Mappers support merging via the @Mapping annotation; and a new liveness probe detects deadlocked threads via ThreadMXBean. Improved Kubernetes integration: the Kubernetes Java client is updated to 22.0.1, and the new Micronaut Kubernetes Client OpenAPI module offers an alternative to the official client with fewer dependencies, unified configuration, filter support and Native Image compatibility. A new server runtime based on Java's built-in HTTP server lets you build applications without external server dependencies. Micronaut Micrometer gains a module to instrument data sources (traces and metrics), and the @MetricOptions annotation gains a condition member to enable metrics via an expression. Consul watches are supported in Micronaut Discovery Client to detect distributed-configuration changes. Source code can now be generated from a JSON schema via the build plugins (Gradle and Maven).

Web

Node v24.0.0 becomes the Current release: https://nodejs.org/en/blog/release/v24.0.0 The V8 engine is updated to 13.6, bringing new JavaScript features such as Float16Array, explicit resource management (using), RegExp.escape, WebAssembly Memory64 and Error.isError. npm 11 is included, with performance, security and compatibility improvements for modern JavaScript packages. Compiler change on Windows: MSVC is dropped in favor of ClangCL for building Node.js. AsyncLocalStorage now uses AsyncContextFrame by default, for more efficient async-context handling. URLPattern is available globally, so there is no need to import it explicitly for URL matching. Permission-model improvements: the experimental --experimental-permission flag becomes --permission, signalling increased stability. Test-runner improvements: subtests are now awaited automatically, simplifying test writing and reducing errors from unhandled promises. Undici 7 is integrated, improving the HTTP client's performance and support for modern HTTP features. Deprecations and removals: url.parse() is deprecated in favor of the WHATWG URL API, tls.createSecurePair is removed, SlowBuffer is deprecated, instantiating REPL without new is deprecated, using Zlib classes without new is deprecated, and passing args to spawn and execFile in child_process is deprecated. Node.js 24 is currently the "Current" release and will become an LTS release in October 2025; it is recommended to test it to evaluate its impact on your applications.
Data and Artificial Intelligence

Learning to code remains crucial, and AI is there to help: https://kyrylo.org/software/2025/03/27/learn-to-code-ignore-ai-then-use-ai-to-code-even-better.html Learning to code remains essential despite AI. AI can assist with programming, but a solid foundation is crucial to understand and control the code: it avoids becoming dependent on AI and reduces the risk of being replaced by AI tools that are accessible to everyone. AI is a tool, not a substitute for mastering the fundamentals.

A great article from Anthropic that tries to understand how the "thinking" of LLMs works: https://www.anthropic.com/research/tracing-thoughts-language-model Black-box effect: the internal strategies of AIs (Claude) are opaque to developers and users. Goal: understand the internal "reasoning" to verify capabilities and intentions. Method: inspired by neuroscience, building an "AI microscope" (looking at which neural circuits activate). Technique: identifying internal concepts ("features") and "circuits". Multilingualism: evidence of a conceptual "language of thought" shared across languages before translating into a particular one. Planning: the ability to anticipate (for example rhymes in poetry), not just word-by-word (token-by-token) generation. Unfaithful reasoning: the model can fabricate plausible arguments ("bullshitting") for a given conclusion. Multi-step logic: it combines distinct facts rather than merely memorizing. Hallucinations: refusing is the default; it answers when "knowledge" is activated, otherwise there is a risk of hallucination when that check misfires. "Jailbreaks": a tension between grammatical coherence (which pushes the model to keep going) and safety (which should make it refuse). Bottom line: the methods are limited but promising for AI transparency and reliability.

The "S" in MCP stands for Security (or not!): https://elenacross7.medium.com/%EF%B8%8F-the-s-in-mcp-stands-for-security-91407b33ed6b The MCP specification, which lets LLMs access various tools and functions, was perhaps adopted a bit too quickly, before it was ready security-wise. The article lists four possible attack types: command-injection vulnerabilities, tool-poisoning attacks, silent tool redefinition, and cross-server tool shadowing. For now MCP is not secure: no authentication standard, no context encryption, no tool-integrity verification. Based on the InvariantLabs article: https://invariantlabs.ai/blog/mcp-security-notification-tool-poisoning-attacks

Infinispan 15.2 is out, ahead of rolling upgrades to 16.0: https://infinispan.org/blog/2025/03/27/infinispan-15-2 Redis JSON support plus Lua scripts, JVM metrics that can be disabled, a new console (PatternFly 6), improved docs (metrics and logs), JDK 17 minimum with JDK 24 supported, and the end of the native server (for performance reasons).

Guillaume shows how to develop an MCP HTTP Server-Sent Events server with the Java reference implementation and LangChain4j: https://glaforge.dev/posts/2025/04/04/mcp-client-and-server-with-java-mcp-sdk-and-langchain4j/ It is developed in Java, with the reference implementation that also underpins the Spring Boot integration (while being independent of Spring). The MCP server is exposed as a servlet in Jetty. The MCP client is built with LangChain4j's MCP module; it is only semi-independent of Spring in the sense that it depends on Reactor and its interfaces. There is a conversation on Anthropic's GitHub to find a solution, but it does not look simple.
l'augmentation : Elle se concentre sur l'amélioration des tâches existantes avec l'IA au lieu de considérer le changement de la valeur de ces tâches dans un nouveau système. La fallacie des gains de productivité : L'augmentation de la productivité ne se traduit pas toujours par plus de valeur pour les travailleurs, car la valeur créée peut être capturée ailleurs dans le système. La fallacie des emplois statiques : Les emplois sont des constructions organisationnelles qui peuvent être redéfinies par l'IA, rendant les rôles traditionnels obsolètes. La fallacie de la compétition “moi vs. quelqu'un utilisant l'IA” : La concurrence évolue lorsque l'IA modifie les contraintes fondamentales d'un secteur, rendant les compétences existantes moins pertinentes. La fallacie de la continuité du flux de travail : L'IA peut entraîner une réimagination complète des flux de travail, éliminant le besoin de certaines compétences. La fallacie des outils neutres : Les outils d'IA ne sont pas neutres et peuvent redistribuer le pouvoir organisationnel en changeant la façon dont les décisions sont prises et exécutées. La fallacie du salaire stable : Le maintien d'un emploi ne garantit pas un salaire stable, car la valeur du travail peut diminuer avec l'augmentation des capacités de l'IA. La fallacie de l'entreprise stable : L'intégration de l'IA nécessite une restructuration de l'entreprise et ne se fait pas dans un vide organisationnel. Comprendre le “sampling” dans les LLMs https://rentry.co/samplers Explique pourquoi les LLMs utilisent des tokens Les différentes méthodes de “sampling” : càd de choix de tokens Les hyperparamètres comme la température, top-p, et leur influence réciproque Les algorithmes de tokenisation comme Byte Pair Encoding et SentencePiece. Un de moins … OpenAI va racheter Windsurf pour 3 milliards de dollars. https://www.bloomberg.com/news/articles/2025-05-06/openai-reaches-agreement-to-buy-startup-windsurf-for-3-billion l'accord n'est pas encore finalisé Windsurf était valorisé à 1,25 milliards l'an dernier et OpenAI a levé 40 milliards dernièrement portant sa valeur à 300 milliards Le but pour OpenAI est de rentrer dans le monde des assistants de code pour lesquels ils sont aujourd'hui absent Docker desktop se met à l'IA… ? Une nouvelle fonctionnalité dans docker desktop 4.4 sur macos: Docker Model Runner https://dev.to/docker/run-genai-models-locally-with-docker-model-runner-5elb Permet de faire tourner des modèles nativement en local ( https://docs.docker.com/model-runner/ ) mais aussi des serveurs MCP ( https://docs.docker.com/ai/mcp-catalog-and-toolkit/ ) Outillage Jetbrains défend la suppression des commentaires négatifs sur son assistant IA https://devclass.com/2025/04/30/jetbrains-defends-removal-of-negative-reviews-for-unpopular-ai-assistant/?td=rt-3a L'IA Assistant de JetBrains, lancée en juillet 2023, a été téléchargée plus de 22 millions de fois mais n'est notée que 2,3 sur 5. Des utilisateurs ont remarqué que certaines critiques négatives étaient supprimées, ce qui a provoqué une réaction négative sur les réseaux sociaux. Un employé de JetBrains a expliqué que les critiques ont été supprimées soit parce qu'elles mentionnaient des problèmes déjà résolus, soit parce qu'elles violaient leur politique concernant les “grossièretés, etc.” L'entreprise a reconnu qu'elle aurait pu mieux gérer la situation, un représentant déclarant : “Supprimer plusieurs critiques d'un coup sans préavis semblait suspect. 
Tooling

JetBrains defends removing negative reviews of its AI Assistant: https://devclass.com/2025/04/30/jetbrains-defends-removal-of-negative-reviews-for-unpopular-ai-assistant/?td=rt-3a JetBrains' AI Assistant, launched in July 2023, has been downloaded more than 22 million times but is rated only 2.3 out of 5. Users noticed that some negative reviews were being deleted, which triggered a backlash on social media. A JetBrains employee explained that reviews were removed either because they mentioned problems that had already been fixed or because they violated the policy on "profanity, etc." The company acknowledged it could have handled the situation better, with a representative stating: "Removing several reviews at once without notice looked suspicious. We should at least have published a notice and given the authors more details." Problems reported by users include limited support for third-party model providers, noticeable latency, frequent slowdowns, core features locked to JetBrains' cloud services, an inconsistent user experience, and insufficient documentation. A common complaint is that the AI Assistant installs itself without permission; one Reddit user called it "an annoying plugin that self-heals/reinstalls itself like a phoenix". JetBrains recently introduced a free tier and a new AI agent called Junie, meant to work alongside the AI Assistant, probably in response to competition between vendors, but it is more expensive to run. The company has committed to exploring new approaches for handling major updates differently and is considering per-version reviews, or marking reviews as "Resolved" with links to the corresponding issues instead of deleting them. Unlike competitors such as Microsoft, AWS or Google, JetBrains sells only developer tools and services and has no separate cloud business to fall back on.

Make the images in your READMEs and Markdown files work with GitHub's dark mode: https://github.blog/developer-skills/github/how-to-make-your-images-in-markdown-on-github-adjust-for-dark-mode-and-light-mode/ Only a few lines of plain HTML are needed.

Architecture

So, DTOs: good or bad? https://codeopinion.com/dtos-mapping-the-good-the-bad-and-the-excessive/ Purpose of DTOs: they transfer data between the layers of an application, often mapping data between different representations (for example between the database and the user interface). Frequent overuse: the article points out that DTOs are often used excessively, notably to build HTTP APIs that merely mirror database entities, missing the opportunity to compose richer data. Real value: the real value of DTOs lies in managing coupling between layers and composing data from multiple sources into shapes optimized for specific use cases. Decoupling: the suggestion is to use DTOs to decouple internal data models from external contracts (such as APIs), enabling independent evolution and versioning. Example with CQRS: in CQRS (Command Query Responsibility Segregation), query responses act as DTOs specifically tailored to the UI's needs and can include data from several sources. Protecting internal data: DTOs help distinguish and protect internal (private) data models from external (public) changes. Avoiding excess: the author warns against excessive mapping layers (mapping one DTO to another DTO) that add no value. Targeted creation: only create DTOs when they solve concrete problems, such as managing coupling or enabling data composition. (A small Java sketch follows just before the Methodologies section.)
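
A minimal Java illustration of the "compose, don't mirror" point above; the entity, DTO and field names are invented for the example and are not from the linked article.

    import java.time.LocalDate;
    import java.util.List;

    public class DtoSketch {
        // Internal persistence model: not exposed directly over HTTP.
        record CustomerEntity(long id, String firstName, String lastName,
                              String passwordHash, LocalDate createdAt) { }

        record OrderEntity(long id, long customerId, double totalAmount) { }

        // DTO shaped for one read use case: composes data from two sources
        // and hides internal fields such as passwordHash.
        record CustomerSummaryDto(String displayName, int orderCount,
                                  double lifetimeValue) { }

        static CustomerSummaryDto toSummary(CustomerEntity c, List<OrderEntity> orders) {
            double total = orders.stream().mapToDouble(OrderEntity::totalAmount).sum();
            return new CustomerSummaryDto(c.firstName() + " " + c.lastName(),
                                          orders.size(), total);
        }

        public static void main(String[] args) {
            var customer = new CustomerEntity(1, "Ada", "Lovelace", "hash", LocalDate.now());
            var orders = List.of(new OrderEntity(10, 1, 42.0), new OrderEntity(11, 1, 58.0));
            System.out.println(toSummary(customer, orders));
        }
    }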
Methodologies

Even Guillaume is getting into "vibe coding": https://glaforge.dev/posts/2025/05/02/vibe-coding-an-mcp-server-with-micronaut-and-gemini/ According to Andrej Karpathy, vibe coding means POC-ing a prototype, a throwaway weekend app: https://x.com/karpathy/status/1886192184808149383 But Simon Willison objects that some people confuse AI-assisted coding with vibe coding: https://simonwillison.net/2025/May/1/not-vibe-coding/ Here Guillaume had fun building an MCP server with Micronaut, using Gemini, Google's AI. Unlike Quarkus or Spring Boot, Micronaut does not yet have a module or specific support to ease building MCP servers.

Security

A 10/10 security flaw in Tomcat: https://www.it-connect.fr/apache-tomcat-cette-faille-activement-exploitee-seulement-30-heures-apres-sa-divulgation-patchez/ A critical vulnerability (CVE-2025-24813) affects Apache Tomcat and allows remote code execution. It was actively exploited only 30 hours after its disclosure on March 10, 2025. The attack requires no authentication and is particularly simple to carry out: a PUT request with a base64-encoded serialized Java payload, followed by a GET request. The base64 encoding bypasses most security filters. Vulnerable servers use file-based session storage (a common configuration). Affected versions: 11.0.0-M1 to 11.0.2, 10.1.0-M1 to 10.1.34, and 9.0.0.M1 to 9.0.98. Recommended updates: 11.0.3+, 10.1.35+ and 9.0.99+. Experts expect more sophisticated attacks in the next phases of exploitation (uploading a config or a JSP).

Hardening an SSH server: https://ittavern.com/ssh-server-hardening/ An article listing the key settings for securing an SSH server: for example disabling password authentication, changing the port, disabling root login, and forcing SSH protocol 2, plus some I did not know, such as MaxStartups, which limits the number of concurrent unauthenticated connections. Port knocking is a useful technique but requires a client that is aware of the protocol.

Oracle admits that its customers' IAM identities leaked: https://www.theregister.com/2025/04/08/oracle_cloud_compromised/ Oracle confirmed to some customers that its public cloud was compromised, after previously denying any intrusion. A hacker claimed to have breached two Oracle authentication servers and stolen around six million records, including private security keys, encrypted credentials and LDAP entries. The exploited flaw is said to be CVE-2021-35587 in Oracle Access Manager, which Oracle had not patched on its own systems. In early March the hacker created a text file on login.us2.oraclecloud.com containing his email address to prove his access. According to Oracle, an old server holding eight-year-old data was compromised, but one customer says that login data as recent as 2024 was stolen. Oracle faces a lawsuit in Texas over this data breach. The intrusion is distinct from another attack against Oracle Health, on which the company declines to comment. Oracle could face penalties under the European GDPR, which requires notifying the affected parties within 72 hours of discovering a data leak.
Oracle's behavior of denying and then quietly admitting the intrusion is unusual in 2025 and could lead to further class actions.

A very popular GitHub Action compromised: https://www.stepsecurity.io/blog/harden-runner-detection-tj-actions-changed-files-action-is-compromised Compromise of the tj-actions/changed-files action: in March 2025, a widely used GitHub Action (tj-actions/changed-files) was compromised; modified versions of the action exposed CI/CD secrets in build logs. Attack method: a compromised PAT allowed several version tags to be redirected to a commit containing malicious code. Malicious-code details: the injected code ran a base64-encoded Node.js function that downloaded a Python script; the script scanned the GitHub runner's memory for secrets (tokens, keys, and so on) and exposed them in the logs, and in some cases the data was also sent out via a network request. Exposure window: the compromised versions were active between March 12 and 15, 2025; any repository, especially a public one, that used the action during that period should be considered potentially exposed. Detection: the malicious activity was spotted by analyzing unusual behavior during workflow execution, such as unexpected network connections. Response: GitHub removed the compromised action, which was then cleaned up. Potential impact: any secrets that appeared in logs must be considered compromised, even in private repositories, and regenerated without delay.

Law, society and organizations

Y Combinator startups are the fastest-growing in the fund's history: https://www.cnbc.com/2025/03/15/y-combinator-startups-are-fastest-growing-in-fund-history-because-of-ai.html Early-stage Silicon Valley companies are seeing significant growth thanks to artificial intelligence. Y Combinator CEO Garry Tan says the latest cohort as a whole grew 10% per week for nine months. AI lets developers automate repetitive tasks and generate code with large language models. For roughly 25% of current YC startups, 95% of their code was written by AI. This shift lets companies grow with fewer staff, some reaching 10 million dollars in revenue with fewer than 10 employees. The "growth at all costs" mindset has been replaced by renewed interest in profitability. About 80% of the companies presented at demo day were AI-centric, with a few robotics and semiconductor startups. Y Combinator invests 500,000 dollars in startups in exchange for equity, followed by a three-month program.

Red Hat middleware (formerly JBoss) joins IBM: https://markclittle.blogspot.com/2025/03/red-hat-middleware-moving-to-ibm.html Red Hat's middleware activities (including JBoss, Quarkus, etc.) are being transferred to IBM, into the unit dedicated to data security, IAM and runtimes. The change follows Red Hat's strategic decision to focus more on hybrid cloud and artificial intelligence. Mark Little explains that the transfer had become inevitable, as Red Hat had reduced its middleware investments in recent years.
The integration aims to strengthen innovation around Java by combining Red Hat's and IBM's efforts on the subject. The middleware products will remain open source and customers will keep getting the usual support, unchanged. Mark Little says that projects such as Quarkus will continue to be supported and that this evolution is beneficial for the Java community.

One year of Commonhaus: https://www.commonhaus.org/activity/253.html One year in, having started with the communities they already knew well, there are now 14 projects and they can accept more. The themes: trust, lightweight governance and protecting the future of projects; automating the administrative work; stability without complexity; developers at the center of the decision process. They need members and (financial) supporters, and they want to welcome projects beyond the circle of Java Champions.

Spring Cloud Data Flow becomes a commercial product and will no longer be maintained as open source: https://spring.io/blog/2025/04/21/spring-cloud-data-flow-commercial Perhaps under Broadcom's influence, Spring is starting to move parts of the Spring portfolio to a proprietary model. They say few people used it in OSS mode, that most usage came from the Tanzu platform, and that maintaining it as open source cost them time they were not spending on those projects.

The CNCF protects the NATS project, in the foundation since 2018, after Synadia, the company contributing to it, sought to take back control: https://www.cncf.io/blog/2025/04/24/protecting-nats-and-the-integrity-of-open-source-cncfs-commitment-to-the-community/ CNCF: protects open source projects with neutral governance. Synadia vs CNCF: Synadia wanted to pull NATS out and move it to a non-open-source license (BUSL). The CNCF accuses Synadia of a "claw back" (an illegitimate take-back). Synadia's claims: the nats.io domain and the GitHub organization. NATS trademark: Synadia never transferred it (a broken promise despite the CNCF's help). Synadia's counter-argument: the CNCF rules are "too vague". Internal vote: Synadia's maintainers voted to leave the CNCF (without the community). CNCF support: a major investment (money, audits, legal), and community success (more than 700 organizations). The future of NATS according to the CNCF: staying under Apache 2.0 with open governance. CNCF actions: a health check, a call for maintainers, cancellation of Synadia's trademark, rejection of its demands.

In the end there seems to be a good outcome: https://www.cncf.io/announcements/2025/05/01/cncf-and-synadia-align-on-securing-the-future-of-the-nats-io-project/ Agreement on the future of NATS.io: the Cloud Native Computing Foundation (CNCF) and Synadia have reached an agreement to secure the future of the NATS.io project. Trademark transfer: Synadia will hand over its two NATS trademark registrations to the Linux Foundation to strengthen the project's open governance. Staying within the CNCF: the NATS infrastructure and assets remain under the CNCF, guaranteeing long-term stability and open source development under the Apache-2.0 license. Recognition and commitment: the Linux Foundation, through Todd Moore, acknowledges Synadia's contributions and continued support, while Derek Collison, CEO of Synadia, reaffirms his company's commitment to NATS and to collaborating with the Linux Foundation and the CNCF. Adoption and community support: NATS is widely adopted, considered critical infrastructure, and enjoys strong community support for its open source nature and Synadia's continued involvement.
Finally, Redis returns to an OSI-approved open source license, the AGPL: https://foojay.io/today/redis-is-now-available-under-the-agplv3-open-source-license/ Redis moves to the AGPLv3 open source license to counter exploitation by cloud providers that do not contribute back. The earlier switch to the SSPL license had hurt the relationship with the open source community. Salvatore Sanfilippo (antirez) is back at Redis. Redis 8 adopts the AGPL, integrates the Redis Stack features (JSON, Time Series, etc.) and introduces "vector sets" (the vector-computation support developed by Salvatore). These changes aim to strengthen Redis as a platform developers love, in line with Salvatore's original vision.

Conferences

The list of conferences, from the Developers Conferences Agenda/List by Aurélie Vache and contributors: May 6-7, 2025: GOSIM AI Paris - Paris (France); May 7-9, 2025: Devoxx UK - London (UK); May 15, 2025: Cloud Toulouse - Toulouse (France); May 16, 2025: AFUP Day 2025 Lille - Lille (France); May 16, 2025: AFUP Day 2025 Lyon - Lyon (France); May 16, 2025: AFUP Day 2025 Poitiers - Poitiers (France); May 22-23, 2025: Flupa UX Days 2025 - Paris (France); May 24, 2025: Polycloud - Montpellier (France); May 24, 2025: NG Baguette Conf 2025 - Nantes (France); June 3, 2025: TechReady - Nantes (France); June 5-6, 2025: AlpesCraft - Grenoble (France); June 5-6, 2025: Devquest 2025 - Niort (France); June 10-11, 2025: Modern Workplace Conference Paris 2025 - Paris (France); June 11-13, 2025: Devoxx Poland - Krakow (Poland); June 12, 2025: Positive Design Days - Strasbourg (France); June 12-13, 2025: Agile Tour Toulouse - Toulouse (France); June 12-13, 2025: DevLille - Lille (France); June 13, 2025: Tech F'Est 2025 - Nancy (France); June 17, 2025: Mobilis In Mobile - Nantes (France); June 19-21, 2025: Drupal Barcamp Perpignan 2025 - Perpignan (France); June 24, 2025: WAX 2025 - Aix-en-Provence (France); June 25-26, 2025: Agi'Lille 2025 - Lille (France); June 25-27, 2025: BreizhCamp 2025 - Rennes (France); June 26-27, 2025: Sunny Tech - Montpellier (France); July 1-4, 2025: Open edX Conference - 2025 - Palaiseau (France); July 7-9, 2025: Riviera DEV 2025 - Sophia Antipolis (France); September 5, 2025: JUG Summer Camp 2025 - La Rochelle (France); September 12, 2025: Agile Pays Basque 2025 - Bidart (France); September 18-19, 2025: API Platform Conference - Lille (France) & Online; September 23, 2025: OWASP AppSec France 2025 - Paris (France); September 25-26, 2025: Paris Web 2025 - Paris (France); October 2-3, 2025: Volcamp - Clermont-Ferrand (France); October 3, 2025: DevFest Perros-Guirec 2025 - Perros-Guirec (France); October 6-10, 2025: Devoxx Belgium - Antwerp (Belgium); October 7, 2025: BSides Mulhouse - Mulhouse (France); October 9-10, 2025: Forum PHP 2025 - Marne-la-Vallée (France); October 9-10, 2025: EuroRust 2025 - Paris (France); October 16, 2025: PlatformCon25 Live Day Paris - Paris (France); October 16-17, 2025: DevFest Nantes - Nantes (France); October 30-31, 2025: Agile Tour Bordeaux 2025 - Bordeaux (France); October 30-31, 2025: Agile Tour Nantais 2025 - Nantes (France); October 30-November 2, 2025: PyConFR 2025 - Lyon (France); November 4-7, 2025: NewCrafts 2025 - Paris (France); November 6, 2025: dotAI 2025 - Paris (France); November 7, 2025: BDX I/O - Bordeaux (France); November 12-14, 2025: Devoxx Morocco - Marrakech (Morocco); November 13, 2025: DevFest Toulouse - Toulouse (France); November 15-16, 2025: Capitole du Libre - Toulouse (France); November 20, 2025: OVHcloud Summit - Paris (France); November 21, 2025: DevFest Paris 2025 - Paris (France); November 27, 2025: Devfest Strasbourg 2025 - Strasbourg (France); November 28, 2025: DevFest Lyon - Lyon (France); December 5, 2025: DevFest Dijon 2025 - Dijon (France); December 10-11, 2025: Devops REX - Paris (France); December 10-11, 2025: Open Source Experience - Paris (France); January 28-31, 2026: SnowCamp 2026 - Grenoble (France); February 2-6, 2026: Web Days Convention - Aix-en-Provence (France); April 23-25, 2026: Devoxx Greece - Athens (Greece); June 17, 2026: Devoxx Poland - Krakow (Poland)

Contact us

To react to this episode, come discuss on the Google group https://groups.google.com/group/lescastcodeurs Contact us via X/Twitter https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com Submit a crowdcast or a crowdquestion. Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs All the episodes and all the info at https://lescastcodeurs.com/

Sportbladet Daily
Skidstjärnans nej till landslaget: ”Borde kryddat erbjudandet”

May 6, 2025 · 23:24


Superstar Alvar Myhlback has declined a spot on the Swedish national cross-country ski team and will miss the upcoming Olympics in Milan, Italy. Last winter he was dropped from the Junior World Championships (JVM) squad and voiced his irritation at the federation's decision. Instead he made history as the youngest-ever winner of Vasaloppet. How big a loss is this for Sweden? Why is a continued focus on Vasaloppet so important to him? And how strained is the relationship between Myhlback and the federation? With: Anna Rydén. Host: Maja Andersson

Inside Java
“Stream Gatherers” with Viktor Klang

May 4, 2025 · 32:57


In this episode, Ana is joined by Viktor Klang, core JDK architect and author of the Stream Gatherers JDK Enhancement Proposal, to dive into one of the standout features of JDK 24: the Gatherers API. Viktor explains how Gatherers extend the Java Stream API with custom intermediate operations, why they were added to the platform, and how they can enhance your day-to-day Java development. He also shares practical tips for using the Gatherers API effectively, along with insights into the design process and how community feedback plays a crucial role in shaping future JDK features.
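
A small, self-contained example of the Gatherers API covered in this episode; it targets JDK 24 or newer, and the input values are invented for the illustration.

    import java.util.List;
    import java.util.stream.Gatherers;

    public class GatherersDemo {
        public static void main(String[] args) {
            var temps = List.of(18, 21, 23, 19, 17, 22);

            // Built-in gatherer: sliding windows of three consecutive elements.
            List<List<Integer>> windows = temps.stream()
                    .gather(Gatherers.windowSliding(3))
                    .toList();
            System.out.println(windows);
            // [[18, 21, 23], [21, 23, 19], [23, 19, 17], [19, 17, 22]]

            // Built-in gatherer: a running maximum via scan, the kind of custom
            // intermediate operation that is awkward with map/filter alone.
            List<Integer> runningMax = temps.stream()
                    .gather(Gatherers.scan(() -> Integer.MIN_VALUE, Math::max))
                    .toList();
            System.out.println(runningMax); // [18, 21, 23, 23, 23, 23]
        }
    }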

Foojay.io, the Friends Of OpenJDK!
Celebrating 5 Years of Foojay! (#70)

Apr 24, 2025 · 31:10


On April 25, 2020, Geertjan Wielenga published the first Foojay post. Yes, we are celebrating 5 years since the Friends Of OpenJDK website launch! Today, more than 1,600 posts are on the site, written by over 250 authors. And there is much more to discover within the Foojay world... In this podcast, we look at how Foojay started with founder Geertjan Wielenga. We'll also hear from Gerrit Grunwald about how Foojay's Disco API has become part of your daily work without you realizing it. We also have several of our regular authors and podcast guests who share how Foojay has influenced them (and vice versa). Thank you all for being part of the Foojay community, whether as a listener of this podcast, a visitor to the website, a user of the Disco API, or through any other touchpoint!

00:00 Introduction
00:58 Grace Jansen: https://foojay.io/today/author/grace-jansen
02:44 Geertjan Wielenga about the start and evolution of Foojay: https://foojay.io/today/author/geertjan-wielenga/
    Foojay on Mastodon: https://foojay.io/today/foojay-mastodon-service-here-it-is/
    Java Quick Start Course on Foojay: https://foojay.io/java-quick-start/
    JDoodle on Foojay: https://foojay.io/today/integrate-executable-java-code-in-your-blog-posts-part-2-how-to-use-dependencies/
    Foojay Slack: https://foojay.io/today/join-slack-com-t-foojay-signup/
    Contribute to Foojay: https://foojay.io/today/how-to-submit-your-next-article-on-foojay-io/
12:24 Richard Fichtner: https://foojay.io/today/author/r-fichtner
    Free JCon tickets: https://pretix.eu/impuls/europe2025/redeem?voucher=FOOJAY-COMMUNITY
13:19 Mary Grygleski: https://foojay.io/today/author/mgrygles
15:01 Shai Almog: https://foojay.io/today/author/shai-almog
16:59 Gerrit Grunwald about the Disco API: https://foojay.io/today/author/gerrit-grunwald/
    Disco API blog post: https://foojay.io/today/disco-api-helping-you-to-find-any-openjdk-distribution/
    Disco API Swagger UI: https://api.foojay.io/swagger-ui
24:38 Simon Ritter: https://foojay.io/today/author/simonritter
25:10 Marit van Dijk: https://foojay.io/today/author/marit-van-dijk
25:47 Hanno Embregts: https://foojay.io/today/author/hanno-embregts
26:42 Bazlur Rahman: https://foojay.io/today/author/bazlur-rahman
29:10 Artur Skowroński, JVM Weekly: https://www.linkedin.com/newsletters/jvm-weekly-7097859802881540096
30:22 Conclusion and looking forward to 30 years of Java with James Gosling
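
Since the Disco API comes up at 16:59, here is a hedged Java sketch of calling it with the JDK's built-in HttpClient; the endpoint path and query parameters below are assumptions based on the public Swagger UI linked above, so verify them there before relying on this exact query string.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class DiscoApiDemo {
        public static void main(String[] args) throws Exception {
            // Ask the Foojay Disco API for the latest Temurin 21 JDK builds for Linux x64.
            var uri = URI.create("https://api.foojay.io/disco/v3.0/packages"
                    + "?version=21&distribution=temurin&package_type=jdk"
                    + "&operating_system=linux&architecture=x64&latest=available");

            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(uri)
                    .header("Accept", "application/json")
                    .GET()
                    .build();

            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
            System.out.println(response.body()); // JSON list of matching packages
        }
    }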

airhacks.fm podcast with adam bien
Opensource and JVM Ports

Apr 21, 2025 · 60:01


An airhacks.fm conversation with Volker Simonis (@volker_simonis) about: discussion about carnivorous plants, explanation of how different carnivorous plants capture prey through movement, glue, or digestive fluids, Utricularia uses vacuum to catch prey underwater, SAP's interest in developing their own JVM around the Java 1.4/1.5 era, challenges with SAP's NetWeaver Java EE stack, difficulties maintaining Java across multiple Unix platforms (HP-UX, AIX, S390, Solaris) with different vendor JVMs, SAP's decision to license Sun's HotSpot source code, porting HotSpot to the PA-RISC architecture on HP-UX, explanation of the C++ interpreter versus the template interpreter in HotSpot, challenges with platform-specific C++ compilers and assembler code, detailed explanation of JVM internals including deoptimization, inlining, and safepoints, SAP's contributions to OpenJDK including the PowerPC port, challenges getting SAP to embrace open source, delays caused by Oracle's acquisition of Sun, SAP's extensive JVM porting work across multiple platforms, development of the SAP JVM with additional features like profiling safepoints, creation of SapMachine as an open-source OpenJDK distribution, explanation of Java certification and trademark restrictions, the HotSpot Express model allowing newer VM components in older Java versions, Volker's move to the Amazon Corretto team after 15 years at SAP, brief discussion of ABAP versus Java at SAP, Volker's recent interest in GraalVM and native image technologies. Volker Simonis on Twitter: @volker_simonis

airhacks.fm podcast with adam bien
From Predator Plants to Concordance with Java

Mar 23, 2025 · 64:15


An airhacks.fm conversation with Volker Simonis (@volker_simonis) about: early computing experiences with the Schneider CPC (Amstrad in the UK) with a Z80 CPU, the CP/M operating system as an add-on that provided a real file system, programming in BASIC and Turbo Pascal on early computers, discussion about gaming versus programming interests, using a 9-pin needle printer for school work, programming on pocket computers with BASIC in school, memories of Digital Research's CP/M and DR-DOS competing with MS-DOS, HIMEM memory management in early operating systems, programming in the Logo language with turtle graphics and fractals, fascination with Lindenmayer systems (L-systems) for simulating biological growth patterns, interest in biology and carnivorous plants, transition to PCs with floppy disk drives, using SGI Iris workstations at university with the IRIX operating system, early experiences with Linux installed from floppy disks, challenges of configuring the X Window System, programming graphics on interlaced monitors, early work with HP using Tcl/Tk and Python around 1993, first experiences with Java around version 0.8/0.9, attraction to Java's platform-independent networking and graphics capabilities, using Blackdown Java for Linux created by Johan Vos, freelance work creating Java applets for accessing databases of technical standards, PhD work creating software for analyzing parallel text corpora in multiple languages, developing internationalization and XML capabilities in Java Swing applications, career at Sun Microsystems porting MaxDB to Solaris, transition to SAP to work on JVM development, Adabas and MaxDB, reflections on the ABAP programming language at SAP and its database-centric nature. Volker Simonis on Twitter: @volker_simonis

Les Cast Codeurs Podcast
LCC 323 - L'accessibilité des messageries chiffrées

Mar 17, 2025 · 70:33


In this episode, Emmanuel and Arnaud discuss the latest dev news, with a focus on Java, artificial intelligence, and the new features in JDK 24 and 25. They also cover topics such as Quarkus, website accessibility, and the impact of AI on web traffic. The conversation looks at how developers can approach artificial intelligence and software development, in particular the challenges and benefits of using AI. Finally, they share their thoughts on the importance of conferences for professional development. Recorded on March 14, 2025. Download the episode LesCastCodeurs-Episode-323.mp3 or watch it on YouTube.

News - Languages

Java Metal: https://www.youtube.com/watch?v=yup8gIXxWDU Maybe we already shared this one?

An opinion piece, Java coming for AI: https://thenewstack.io/2025-is-the-last-year-of-python-dominance-in-ai-java-comin/ 2025 could be the last year Python dominates AI; Java is becoming a serious contender in the field. In 2024, Python was still ahead, Java remained strong in the enterprise, and Rust was gaining popularity. Java is increasingly used for AI, challenging Python's supremacy. The article comes from Java folks; Python's dominance is more cultural than technical (well, for the ML libraries it is still technical). Projects Panama and Babylon are changing the game. JavaML is popular.

The Java almanac of versions: https://javaalmanac.io/ It shows the APIs and the diffs between versions, plus the release notes and the Java spec.

News from JDK 24 and the upcoming JDK 25: https://www.infoq.com/news/2025/02/java-24-so-far/?utm_campaign=infoq_content&utm_source=infoq&utm_medium=feed&utm_term=global JDK 24 has reached its first release-candidate phase and will officially ship on March 18, 2025. It brings 24 new features (JEPs) across 5 categories: Core Java Library (7), Java Language Specification (4), Security Library (4), HotSpot (8) and Java Tools (1). Project Amber: JEP 495 "Simple Source Files and Instance Main Methods" is in its fourth preview, aiming to simplify writing first Java programs for beginners. Project Loom: JEP 487 "Scoped Values" is in its fourth preview, allowing immutable data to be shared between threads, which is particularly useful with virtual threads (a minimal sketch follows right after this overview). Project Panama: JEP 489 "Vector API" is in its ninth incubation and will keep incubating until the necessary Project Valhalla features are available. Project Leyden: JEP 483 "Ahead-of-Time Class Loading & Linking" improves startup time by making an application's classes instantly available when the JVM starts. Quantum-safe security: two JEPs (496 and 497) introduce quantum-computer-resistant cryptographic algorithms based on module lattices. Hardened security: JEP 486 proposes permanently disabling the Security Manager, while JEP 478 introduces a key-derivation API. HotSpot optimizations: JEP 450 "Compact Object Headers" (experimental) shrinks object headers from 96-128 bits to 64 bits on 64-bit architectures (do not use it in production!). GC improvements: JEP 404 "Generational Shenandoah" (experimental) introduces a generational mode for the Shenandoah garbage collector, while keeping the non-generational one.
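
A minimal sketch of the Scoped Values preview mentioned in the JDK 24 overview above; the request-id value is invented, and on JDK 24 the code must be compiled and run with --enable-preview since JEP 487 is still a preview feature.

    public class ScopedValueSketch {
        // Immutable per-request data, visible to everything that runs inside
        // the where(...).run(...) binding, including child virtual threads,
        // without ThreadLocal's mutability and lifetime pitfalls.
        private static final ScopedValue<String> REQUEST_ID = ScopedValue.newInstance();

        public static void main(String[] args) {
            ScopedValue.where(REQUEST_ID, "req-42").run(ScopedValueSketch::handle);
        }

        static void handle() {
            // Readable anywhere below the binding without passing a parameter.
            System.out.println("handling request " + REQUEST_ID.get());
        }
    }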
Port evolution: the 32-bit Windows x86 port is on its way out. JEP 502 in JDK 25: introduction of "Stable Values" (preview), formerly "Computed Constants", offering the advantages of final fields with more flexibility for initialization. Further points on JDK 25: JDK 25 is planned for September 2025 and will be the next LTS (Long-Term Support) release after JDK 21. Finalizing the on-ramp: Gavin Bierman has announced his intention to finalize the "Simple Source Files" feature in JDK 25, after four successive previews. CDS object streaming: JEP Draft 8326035 proposes an object-archiving mechanism for Class-Data Sharing (CDS) in ZGC, with a unified archive format and loader. HTTP/3 is supported in HttpClient.

An article on Go's approach to preventing file-path attacks: https://go.dev/blog/osroot

Libraries

Quarkus 3.19 is out: https://quarkus.io/blog/quarkus-3-19-1-released/ UBI 9 is now the default for containers. In addition to AppCDS, there is support for the AOT cache (JEP 483) to start even faster. Proof of possession in OAuth 2 tokens.

Mario Fusco on agent patterns in Quarkus: https://quarkus.io/blog/agentic-ai-with-quarkus/ and https://quarkus.io/blog/agentic-ai-with-quarkus-p2/ The first article covers the workflow patterns: chaining, parallelizing or routing, with runnable code examples. Then come the agents proper (where the LLM decides the workflow); agents have toolboxes the LLM can decide to invoke. The code goes into the details and makes the interactions visible; tracing makes things visual.

Web

The European Accessibility Act (EAA): https://martijnhols.nl/blog/the-european-accessibility-act-for-websites-and-apps A European accessibility law adopted in 2019 that aims to make websites and apps accessible to people with disabilities, following the WCAG 2.1 AA guidelines (clarity, usability, compatibility). Companies concerned: banks, e-commerce, transport, etc. Compliance deadline: June 28, 2025 for new developments, 2027 for existing applications. So how do I find out whether the Cast Codeurs website is compliant?

The Popover API: https://web.dev/blog/popover-baseline?hl=en The Popover API is now available in all major browsers and was added to Baseline on January 27, 2025. It lets you create native popovers in HTML, without complex JavaScript (for example an "Open" button toggling popover content). Initial problem (2024): an iOS bug prevented popovers from closing.

Integrating a React front end into a Spring Boot app: https://bootify.io/frontend/react-spring-boot-integration.html Step by step, how to configure your build (https://bootify.io/frontend/webpack-spring-boot.html) and your app (controllers, ...) to integrate a React front end.

Data and Artificial Intelligence

Website traffic coming from AI: https://ahrefs.com/blog/ai-traffic-study/ AIEO, after SEO, is going to become big business, since models tend to have their favorite technologies and references. 63% of sites get at least one referral from an AI: 50% from ChatGPT, then Perplexity and finally Gemini (hey, what about LeChat?). Only 0.17% of site traffic comes from AI, and at the same time AI summarizes rather than linking to sources, so that is logical.

Granite 3.2 is out: https://www.infoq.com/news/2025/03/ibm-granite-3-2/ IBM releases Granite 3.2, an advanced AI model with better reasoning and new multimodal capabilities.
Granite Vision 3.2 excels at understanding images and documents. Granite Guardian 3.2 detects risks in AI responses. There are smaller, more efficient models for various uses, and improvements in mathematical reasoning and time-series forecasting. The interesting things about Granite are its small size and its "truly" open source nature.

Prompt engineering, a detailed article: https://www.infoq.com/articles/prompt-engineering/ Prompt engineering is the art of phrasing instructions well to guide the AI. Accessible to everyone, it does not replace programming but complements it. Key techniques: few-shot learning, chain-of-thought, tree-of-thought. Advantages: flexibility, speed, better interaction with the AI. Limits: lack of precision and dependence on the existing models. Future: a key tool for improving AI and software development.

QCon San Francisco, a talk on AI agents: https://www.infoq.com/presentations/ai-agents-infrastructure/ Topic: infrastructure for AI agents. Technologies: RAG and vector databases. Role of AI agents: automating tasks, anticipating needs, supervising. Background: Shruti Bhat, from Oracle to Rockset (acquired by OpenAI). Goal: moving from classic applications to intelligent AI agents. Challenges: improving real-time search, indexing and retrieval. What it means for us: developers move toward more strategic roles and need to adapt to the new technologies.

Official Java SDK for MCP & Spring AI: https://spring.io/blog/2025/02/14/mcp-java-sdk-released-2 There is now an official Java implementation alongside the Python, TypeScript and Kotlin SDKs (https://modelcontextprotocol.io/). It supports Stdio-based transport, SSE (over HTTP) and integration with Spring WebFlux and WebMVC. Integration with Spring AI brings simplified configuration for Spring Boot applications (different starters are available).

Code with Claude: https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview Claude Code is in beta, with no more waiting list. It is an agentic coding tool that lives in the terminal, able to understand your code base and speed up development through natural-language commands. Its features help you understand code, refactor it, test it, debug it, and more.

Gemini Code Assist is free: https://blog.google/technology/developers/gemini-code-assist-free/ For personal use. No account needed, no limits, 128k-token input.

Guillaume starts a series of articles on RAG (advanced level). The first one covers Sentence Window Retrieval: https://glaforge.dev/posts/2025/02/25/advanced-rag-sentence-window-retrieval/ Guillaume presents a technique that improves the search results of Retrieval Augmented Generation. The idea is to compute embedding vectors on individual sentences, for example, but to return a wider context. The benefit is that the vector-embedding similarity scores stay high (no dilution of meaning) while the information about the context in which the sentence sits is not lost. (A rough Java sketch of the idea follows right after these notes.)
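
A rough Java sketch of the sentence-window idea described just above; the embed() stub and the example sentences are placeholders (a real setup would call an embedding model, for instance through LangChain4j), so treat this as an illustration of the retrieval logic only.

    import java.util.List;

    public class SentenceWindowRetrieval {
        // Placeholder embedding: a real implementation would call an embedding model.
        static float[] embed(String sentence) {
            float[] v = new float[8];
            for (char c : sentence.toCharArray()) v[c % v.length]++;
            return v;
        }

        static double cosine(float[] a, float[] b) {
            double dot = 0, na = 0, nb = 0;
            for (int i = 0; i < a.length; i++) {
                dot += a[i] * b[i];
                na += a[i] * a[i];
                nb += b[i] * b[i];
            }
            return dot / (Math.sqrt(na) * Math.sqrt(nb) + 1e-9);
        }

        public static void main(String[] args) {
            List<String> sentences = List.of(
                    "Java 24 ships ahead-of-time class loading.",
                    "It reduces startup time noticeably.",
                    "Cats are unrelated to this topic.",
                    "Project Leyden drives this work.");
            float[] query = embed("How does Java 24 improve startup time?");

            // 1. Score each sentence individually: precise similarity, no dilution.
            int best = 0;
            double bestScore = -1;
            for (int i = 0; i < sentences.size(); i++) {
                double s = cosine(query, embed(sentences.get(i)));
                if (s > bestScore) { bestScore = s; best = i; }
            }

            // 2. Return a window around the best sentence: wider context for the LLM.
            int window = 1;
            List<String> context = sentences.subList(Math.max(0, best - window),
                    Math.min(sentences.size(), best + window + 1));
            System.out.println("Context passed to the LLM: " + context);
        }
    }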
GitHub Copilot Edits is in GA, and GitHub Copilot agent mode arrives in VS Code Insiders: https://github.blog/news-insights/product-news/github-copilot-the-agent-awakens/ Copilot Edits lets you modify several files at once through the chat, which simplifies refactorings. Copilot's agent mode adds an autonomous (agentic AI) mode that goes off on its own to find the changes to make in your code base. What could possibly go wrong?

Methodologies

An interesting opinion piece on AI and code assistants by Addy Osmani: https://addyo.substack.com/p/the-70-problem-hard-truths-about and an article from last year by Addy Osmani: https://addyo.substack.com/p/10-lessons-from-12-years-at-google There are several kinds of AI help: tools for bootstrapping, going from a Figma file or an image to a non-functional prototype in a few days, and tools for iterating on code, which is more of a long-term use (we will record an interview on AI code assistants). The cost of AI's speed: senior devs refactor and modify the proposed code to make it their own, change the architecture and so on, based on their knowledge. Applying what you already know, just faster, is a different pattern from learning with the AI. The article explores these approach patterns and offers some prospective thinking about the future.

Law, society and organizations

Elon Musk tries to buy OpenAI: https://www.bbc.com/news/articles/cpdx75zgg88o The answer: "no thanks, but we can buy Twitter for 9.74 billion if you want".

With the "narcotrafic" law passed in the Senate, Signal would no longer be available in France: https://www.clubic.com/actualite-555135-avec-la-loi-narcotrafic-signal-quittera-la-france.html Besides legalizing spyware that relies on software vulnerabilities, the law asks messaging providers to let the state access conversations, i.e. a backdoor with a state-held key, for example. A backdoor like the one built into American landline phones years ago and now exploited by Chinese espionage. Signal's position is firm: either it stays secure or it leaves the country. Olvid, WhatsApp and iMessage are also targeted, for example. The law defines its target as organized crime: the usual suspects, but also the gilets jaunes, the opponents of the Bure project, the activists helping exiled people in Briançon, and the actions against the cement maker Lafarge in Bouc-Bel-Air and Évreux. So it is much broader than people think.
Conferences

The list of conferences, from the Developers Conferences Agenda/List by Aurélie Vache and contributors: March 14, 2025: Rust In Paris 2025 - Paris (France); March 19-21, 2025: React Paris - Paris (France); March 20, 2025: PGDay Paris - Paris (France); March 20-21, 2025: Agile Niort - Niort (France); March 25, 2025: ParisTestConf - Paris (France); March 26-29, 2025: JChateau Unconference 2025 - Cour-Cheverny (France); March 27-28, 2025: SymfonyLive Paris 2025 - Paris (France); March 28, 2025: DataDays - Lille (France); March 28-29, 2025: Agile Games France 2025 - Lille (France); March 28-30, 2025: Shift - Nantes (France); April 3, 2025: DotJS - Paris (France); April 3, 2025: SoCraTes Rennes 2025 - Rennes (France); April 4, 2025: Flutter Connection 2025 - Paris (France); April 4, 2025: aMP Orléans 04-04-2025 - Orléans (France); April 10-11, 2025: Android Makers - Montrouge (France); April 10-12, 2025: Devoxx Greece - Athens (Greece); April 11-12, 2025: Faiseuses du Web 4 - Dinan (France); April 14, 2025: Lyon Craft - Lyon (France); April 16-18, 2025: Devoxx France - Paris (France); April 23-25, 2025: MODERN ENDPOINT MANAGEMENT EMEA SUMMIT 2025 - Paris (France); April 24, 2025: IA Data Day - Strasbourg 2025 - Strasbourg (France); April 29-30, 2025: MixIT - Lyon (France); May 6-7, 2025: GOSIM AI Paris - Paris (France); May 7-9, 2025: Devoxx UK - London (UK); May 15, 2025: Cloud Toulouse - Toulouse (France); May 16, 2025: AFUP Day 2025 Lille - Lille (France); May 16, 2025: AFUP Day 2025 Lyon - Lyon (France); May 16, 2025: AFUP Day 2025 Poitiers - Poitiers (France); May 22-23, 2025: Flupa UX Days 2025 - Paris (France); May 24, 2025: Polycloud - Montpellier (France); May 24, 2025: NG Baguette Conf 2025 - Nantes (France); June 3, 2025: TechReady - Nantes (France); June 5-6, 2025: AlpesCraft - Grenoble (France); June 5-6, 2025: Devquest 2025 - Niort (France); June 10-11, 2025: Modern Workplace Conference Paris 2025 - Paris (France); June 11-13, 2025: Devoxx Poland - Krakow (Poland); June 12-13, 2025: Agile Tour Toulouse - Toulouse (France); June 12-13, 2025: DevLille - Lille (France); June 13, 2025: Tech F'Est 2025 - Nancy (France); June 17, 2025: Mobilis In Mobile - Nantes (France); June 19-21, 2025: Drupal Barcamp Perpignan 2025 - Perpignan (France); June 24, 2025: WAX 2025 - Aix-en-Provence (France); June 25-26, 2025: Agi'Lille 2025 - Lille (France); June 25-27, 2025: BreizhCamp 2025 - Rennes (France); June 26-27, 2025: Sunny Tech - Montpellier (France); July 1-4, 2025: Open edX Conference - 2025 - Palaiseau (France); July 7-9, 2025: Riviera DEV 2025 - Sophia Antipolis (France); September 18-19, 2025: API Platform Conference - Lille (France) & Online; September 23, 2025: OWASP AppSec France 2025 - Paris (France); September 25-26, 2025: Paris Web 2025 - Paris (France); October 2-3, 2025: Volcamp - Clermont-Ferrand (France); October 6-10, 2025: Devoxx Belgium - Antwerp (Belgium); October 9-10, 2025: Forum PHP 2025 - Marne-la-Vallée (France); October 9-10, 2025: EuroRust 2025 - Paris (France); October 16-17, 2025: DevFest Nantes - Nantes (France); November 4-7, 2025: NewCrafts 2025 - Paris (France); November 6, 2025: dotAI 2025 - Paris (France); November 7, 2025: BDX I/O - Bordeaux (France); November 12-14, 2025: Devoxx Morocco - Marrakech (Morocco); November 21, 2025: DevFest Paris 2025 - Paris (France); November 28, 2025: DevFest Lyon - Lyon (France); January 28-31, 2026: SnowCamp 2026 - Grenoble (France); April 23-25, 2026: Devoxx Greece - Athens (Greece); June 17, 2026: Devoxx Poland - Krakow (Poland)

Contact us

To react to this episode, come discuss on the Google group https://groups.google.com/group/lescastcodeurs Contact us via X/Twitter https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com Submit a crowdcast or a crowdquestion. Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs All the episodes and all the info at https://lescastcodeurs.com/

Foojay.io, the Friends Of OpenJDK!
Welcome to OpenJDK (Java) 24 (#68)

Foojay.io, the Friends Of OpenJDK!

Play Episode Listen Later Mar 15, 2025 54:53


We serve you a podcast about the new Java version every six months. Our regular guest, Simon Ritter, Deputy CTO of Azul, is known on social media as "speakjava." He is part of the OpenJDK Vulnerability Group, the JCP Executive Committee, and the Expert Group for the Java SE specification requests, so he can share a lot of inside information with us. In this episode, we are joined by Hanno Embregts, a Java developer by day and musician by night. He publishes a post on Foojay with all the details of every new Java release and prepared a long description of all the new features included in Java 24. Let's see what this new release brings us...
Guests
Simon Ritter https://www.linkedin.com/in/siritter/ https://bsky.app/profile/speakjava.bsky.social
Hanno Embregts https://www.linkedin.com/in/hannotify/ https://bsky.app/profile/hanno.codes
Content
00:00 Introduction of the topic and guests
00:58 Why 24 JEPs in release 24?
02:16 Overview of the changes in Java 24
03:37 The changes in HotSpot and GC
JEP 404: Generational Shenandoah (Experimental) https://openjdk.org/jeps/404
JEP 450: Compact Object Headers (Experimental) https://openjdk.org/jeps/450
JEP 475: Late Barrier Expansion for G1 https://openjdk.org/jeps/475
04:46 JEP 483: Ahead-of-Time Class Loading & Linking https://openjdk.org/jeps/483
07:30 JEP 491: Synchronize Virtual Threads without Pinning https://openjdk.org/jeps/491
10:27 Security JEPs and quantum resistance
JEP 478: Key Derivation Function API (Preview) https://openjdk.org/jeps/478
JEP 496: Quantum-Resistant Module-Lattice-Based Key Encapsulation Mechanism https://openjdk.org/jeps/496
JEP 497: Quantum-Resistant Module-Lattice-Based Digital Signature Algorithm https://openjdk.org/jeps/497
13:00 Tools
JEP 493: Linking Run-Time Images without JMODs https://openjdk.org/jeps/493
16:47 Repreviews and finalizations
JEP 489: Vector API (Ninth Incubator) https://openjdk.org/jeps/489
18:27 JEP 484: Class-File API https://openjdk.org/jeps/484
19:13 JEP 485: Stream Gatherers https://openjdk.org/jeps/485
21:22 JEP 487: Scoped Values (Fourth Preview) https://openjdk.org/jeps/487
22:15 JEP 488: Primitive Types in Patterns, instanceof, and switch (Second Preview) https://openjdk.org/jeps/488
22:30 How JEPs get finalized and included
23:44 JEP 492: Flexible Constructor Bodies (Third Preview) https://openjdk.org/jeps/492
24:09 JEP 494: Module Import Declarations (Second Preview) https://openjdk.org/jeps/494
25:07 JEP 495: Simple Source Files and Instance Main Methods (Fourth Preview) https://openjdk.org/jeps/495
29:24 JEP 499: Structured Concurrency (Fourth Preview) https://openjdk.org/jeps/499
34:04 Deprecations & restrictions
34:46 JEP 472: Prepare to Restrict the Use of JNI https://openjdk.org/jeps/472
37:15 JEP 486: Permanently Disable the Security Manager https://openjdk.org/jeps/486
38:53 JEP 490: ZGC: Remove the Non-Generational Mode https://openjdk.org/jeps/490
Trash Talk - Exploring the JVM memory management by Gerrit Grunwald https://www.youtube.com/watch?v=Jh79ojcror0
42:09 JEP 498: Warn upon Use of Memory-Access Methods in sun.misc.Unsafe https://openjdk.org/jeps/498
45:43 Removal of 32-bit support
JEP 479: Remove the Windows 32-bit x86 Port https://openjdk.org/jeps/479
JEP 501: Deprecate the 32-bit x86 Port for Removal https://openjdk.org/jeps/501
47:37 Should we use Java 24 in production?
51:09 Looking forward to the next LTS in September
54:14 Conclusion
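Of the features finalized in this list, JEP 485's Stream Gatherers are the easiest to try at home; a minimal sketch assuming JDK 24, with the readings and the threshold invented for the illustration:

```java
import java.util.List;
import java.util.stream.Gatherers;
import java.util.stream.Stream;

// Group a stream of readings into fixed-size windows of 3, then keep only
// the windows whose average exceeds a threshold.
public class GatherersDemo {
    public static void main(String[] args) {
        List<List<Integer>> windows = Stream.of(12, 7, 9, 15, 22, 3, 8, 19, 11)
                .gather(Gatherers.windowFixed(3))   // JEP 485, final in Java 24
                .filter(w -> w.stream().mapToInt(Integer::intValue).average().orElse(0) > 10)
                .toList();
        System.out.println(windows);                // [[15, 22, 3], [8, 19, 11]]
    }
}
```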

The Ride with JMV Podcast
Full Show: Live From Tom's Watch Bar as Hoosiers Director David Anspaugh, Comedian Jim Jefferies and More Join!

The Ride with JMV Podcast

Play Episode Listen Later Feb 28, 2025 154:58


00:00 – 27:43 – JMV is LIVE at Tom’s Watch Bar, and he kicks things off by previewing Purdue vs UCLA, as well as the Pacers vs Heat. He talks IU basketball, and how they have a chance to sneak into the tournament while still getting the desired change in leadership. 27:44 – 41:40 – Coach Bob Lovell from the legendary Indiana Sports Talk joins the show! Coach Lovell and JMV talk about high school basketball around the state of Indiana, and preview the weekend of basketball to come! 41:41 – 44:37 – JMV wraps up the 1st hour of the show. 44:38 – 1:09:15 – Mike Wells from ESPN Radio joins JMV! JMV and Mike first discuss the incident between NFL insiders Ian Rapaport and Jordan Schultz. They also talk about the NFL Combine, as well as Chris Ballard’s comments to the media. 1:09:16 – 1:29:27 – David Anspaugh, the director of the classic film Hoosiers, joins the show to discuss the passing of the film’s star, legendary actor Gene Hackman. 1:29:28 – 1:30:54 – JVM wraps up the 2nd hour! 1:30:55 – 1:54:47 – Voice of the Indiana Hoosiers Don Fischer joins the show! Don and JMV discuss the Hoosiers NCAA Tournament chances, the team’s two-game win streak, as well as their upcoming matchup with Washington. 1:54:48 – 2:10:17 – Comedian Jim Jefferies joins the show! JMV and Jim discuss what the touring life is like, and what projects he has coming up! 2:10:18 – 2:12:53 – JMV wraps up the 3rd hour! 2:12:54 – 2:34:58 – JMV talks to Jeffrey Gorman about the NFL Combine as the show draws to the close! Support the show: https://1075thefan.com/the-ride-with-jmv/See omnystudio.com/listener for privacy information.

javaswag
#76 - Sergey Kuksenko - Java performance

javaswag

Play Episode Listen Later Feb 24, 2025 125:56


In episode 76 of the Javaswag podcast we talked with Sergey Kuksenko about Java performance.
00:00 Intro
04:39 Working at Excelsior and on Java
10:47 Moving to Intel and working on compilers
15:13 Working on Oracle's performance team
20:06 Building out Java's performance infrastructure
26:01 Regressions
32:09 Testing Java
39:50 Assembling an effective benchmark corpus
44:58 Security versus performance questions
52:12 Asynchronous programming and Project Loom
57:34 The impact of asynchronous frameworks on performance
01:03:08 Queueing theory and system performance
01:09:42 Pros and cons of Loom
01:15:53 Benefits of Project Loom and its impact on code
01:24:43 Six-month releases and their impact on development
01:29:24 What makes a good performance engineer?
01:36:32 What to read
01:39:40 Public speaking
01:45:06 An unpopular opinion about developers
01:50:10 Kotlin and Java
01:58:10 GraalVM
02:00:00 Unpopular opinions about Agile methodologies
02:03:01 Open Space
Guest: https://www.linkedin.com/in/skuksenko/
Links: https://www.p99conf.io/session/why-user-mode-threads-are-good-for-performance/ https://openjdk.org/jeps/491
Podcast links: Website - https://javaswag.github.io/ Telegram - https://t.me/javaswag YouTube - https://www.youtube.com/@javaswag LinkedIn - https://www.linkedin.com/in/volyihin/ X - https://x.com/javaswagpodcast
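The episode links JEP 491 (Synchronize Virtual Threads without Pinning), which is discussed at length here. A minimal sketch of the situation it addresses, assuming JDK 21+ for virtual threads; the shared lock and the 1 ms sleep are stand-ins for real blocking work:

```java
import java.time.Duration;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

// Each virtual thread takes the same monitor and then blocks inside it.
// Before JEP 491 (JDK 24) blocking inside a synchronized block could pin the
// carrier thread; with it, the virtual thread unmounts and the pool stays free.
public class PinningDemo {
    private static final Object LOCK = new Object();

    public static void main(String[] args) {
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 10_000).forEach(i -> executor.submit(() -> {
                synchronized (LOCK) {
                    sleep(Duration.ofMillis(1));   // blocking call while holding the monitor
                }
            }));
        }   // close() waits for all submitted tasks to finish
        System.out.println("done");
    }

    private static void sleep(Duration d) {
        try { Thread.sleep(d); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```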

Modernize or Die ® Podcast - CFML News Edition
Modernize or Die® - CFML News Podcast for February 11th, 2025 - Episode 228

Modernize or Die ® Podcast - CFML News Edition

Play Episode Listen Later Feb 11, 2025 57:05


2025-02-11 Weekly News — Episode 228Watch the video version on YouTube at https://youtube.com/live/-08ciY2kW4c?feature=share  Hosts: Gavin Pickin - Senior Developer at Ortus SolutionsBrad Wood - Senior Developer at Ortus SolutionsBig Thanks to our Sponsor - Ortus SolutionsThe makers of ColdBox, CommandBox, ForgeBox, TestBox and all your favorite box-es out there including BoxLang.A few ways to say thanks back to Ortus Solutions:Buy Tickets to Into the Box 2025 in Washington DC https://t.co/cFLDUJZEyMApril 30, 2025 - May 2, 2025 - Washington, DCLike and subscribe to our videos on YouTube. Help ORTUS reach for the Stars - Star and Fork our ReposStar all of your Github Box Dependencies from CommandBox with https://www.forgebox.io/view/commandbox-github Subscribe to our Podcast on your Podcast Apps and leave us a reviewSign up for a free or paid account on CFCasts, which is releasing new content regularlyBOXLife store: https://www.ortussolutions.com/about-us/shopBuy Ortus's Books102 ColdBox HMVC Quick Tips and Tricks on GumRoad (http://gum.co/coldbox-tips)Now on Amazon! In hardcover too!!!https://www.amazon.com/dp/B0CJHB712MLearn Modern ColdFusion (CFML) in 100+ Minutes - Free online https://modern-cfml.ortusbooks.com/ or buy an EBook or Paper copy https://www.ortussolutions.com/learn/books/coldfusion-in-100-minutes Patreon Support ()We have 61 patreons: https://www.patreon.com/ortussolutions. News and AnnouncementsOrtus announce BoxLang to the Java Community and JfokusJfokus has been the birth of BoxLang for the Java community. So Incredibly well received. We even had folks coding on their phones on https://try.boxlang.io for some sweet hoodies. What an amazing event. So much great feedback and amazing response to finally having momentum in the dynamic JVM space. We will definitely be back in 2026 In force. hashtag#boxlang hashtag#jfokus hashtag#dynamicJVM hashtag#communityhttps://www.linkedin.com/posts/lmajano_boxlang-jfokus-dynamicjvm-activity-7292961359632240640-N1nc?Get a Free BoxLang+ License with Your ITB 2025 Ticket!At Ortus Solutions, we are dedicated to delivering the best experience for our Into the Box attendees. This year's event will be an exciting opportunity to explore BoxLang and modern CFML development, and we want to ensure that attending in person is even more rewarding.Exclusive On-Site Attendee Benefit: Free 1-Year BoxLang+ License!As a special incentive, all on-site attendees will receive a free 1-year subscription to BoxLang+.BoxLang+ is a professional subscription that enhances development across multiple runtimes, including CLI, web, CommandBox, and serverless environments.https://www.ortussolutions.com/blog/get-a-free-boxlang-license-with-your-itb-2025-ticket Adobe ColdFusion Summit 2025 Adobe ColdFusion Summit 2025 is here—join us in Las Vegas on Sept 22-23 (optional certification days on Sept 21 or 24). Grab your early bird tickets for just $99 before they're gone. Secure your spot today!Register now: https://bit.ly/414pLF6Team Plans and Exclusive Deals: Into the Box 2025!Thinking about attending Into the Box 2025 but don't want to go alone? Or are you looking to train your team with the latest modern software development tools? We've got you covered. 
Take advantage of our exclusive team deals and bring your team for an even better experience.Get 50% off your second Into the Box on-site ticket.Buy 2, Get 1 Free – Purchase two on-site tickets, and the third one is on us.https://www.ortussolutions.com/blog/team-plans-and-exclusive-deals-into-the-box-2025 TeraTech release Free Online Course for Modernizing CF AppsA Call to the #ColdFusion Keepers of Middleware-Earth! ⚔️

The Irish Tech News Podcast
Insights into the Azul State of Java Report Simon Ritter, Deputy CTO, Azul

The Irish Tech News Podcast

Play Episode Listen Later Feb 11, 2025 28:46


The recent Azul State of Java 2025 survey offers a timely window into the use of Java and why it is still so popular. To find out more about the survey and its findings, Ronan spoke to Simon Ritter, Deputy CTO of Azul. Simon talks about what Azul does, the benefits of Java, the Azul State of Java Report, security and more.
More about Simon Ritter: Simon Ritter is the Deputy CTO of Azul. He joined Sun Microsystems in 1996 and spent time working in both Java development and consultancy. He has been presenting Java technologies to developers since 1999, focusing on the core Java platform as well as client and embedded applications. At Azul, he continues to help people understand Java and Azul's JVM products. Simon is a Java Champion and a two-time recipient of the JavaOne Rockstar award. In addition, he represents Azul on the JCP Executive Committee and in the OpenJDK Vulnerability Group, and has served on the JSR Expert Group since Java SE 9.

Irish Tech News Audio Articles
Insights into the Azul State of Java Report Simon Ritter, Deputy CTO, Azul

Irish Tech News Audio Articles

Play Episode Listen Later Feb 11, 2025 1:15


The recent Azul State of Java 2025 survey offers a timely window into the use of Java and why it is still so popular. To find out more about the survey and its findings, Ronan spoke to Simon Ritter, Deputy CTO of Azul. Simon talks about what Azul does, the benefits of Java, the Azul State of Java Report, security and more. More about Simon Ritter: Simon Ritter is the Deputy CTO of Azul. Simon joined Sun Microsystems in 1996 and spent time working in both Java development and consultancy. He has been presenting Java technologies to developers since 1999, focusing on the core Java platform as well as client and embedded applications. At Azul, he continues to help people understand Java and Azul's JVM products. Simon is a Java Champion and a two-time recipient of the JavaOne Rockstar award. In addition, he represents Azul on the JCP Executive Committee and in the OpenJDK Vulnerability Group, and has served on the JSR Expert Group since Java SE 9. See more stories here.

Les Cast Codeurs Podcast
LCC 322 - Maaaaveeeeen 4 !

Les Cast Codeurs Podcast

Play Episode Listen Later Feb 9, 2025 77:13


Arnaud and Emmanuel discuss this month's news. They talk about JVM integrity, JDBC fetch size, MCP, prompt engineering, DeepSeek of course, but also Maven 4 and Maven repository proxies. And a few other things besides, enjoy. Recorded on February 7, 2025. Download the episode LesCastCodeurs-Episode-322.mp3 or watch it on YouTube.
News
Languages
The evolutions of the JVM to strengthen integrity https://inside.java/2025/01/03/evolving-default-integrity/
An article on why framework maintainers and users are tearing their hair out, and why the JDK will keep guaranteeing the integrity of code and data by removing historically available APIs: dynamic agents, setAccessible, Unsafe, JNI. The article explains the risks as perceived by the JVM maintainers. Frankly, it is a bit light on the causes and reads like self-promotion.
JavaScript Temporal, at last a clean, modern API for handling dates in JS https://developer.mozilla.org/en-US/blog/javascript-temporal-is-coming/
JavaScript Temporal is a new object designed to replace the flawed Date object. It fixes problems such as the lack of time-zone support and mutability. Temporal introduces concepts such as instants, wall-clock (plain) date/times and durations. It provides classes for the various date/time representations, both time-zone-aware and not. Temporal makes it easier to work with different calendars (for example Chinese or Hebrew). It includes methods for comparing, converting and formatting dates and times. Browser support is experimental, with Firefox Nightly having the most complete implementation. A polyfill is available to try Temporal in any browser.
Libraries
An article on JDBC fetch size and its impact on your applications https://in.relation.to/2025/01/24/jdbc-fetch-size/
Who knows their driver's default fetch size? Depending on your use case it can be devastating: an application that returns 12 rows against Oracle's default of 10 makes two round trips for nothing, and if 50 rows come back, the database becomes the limiting factor, not Java. So raising the fetch size pays off: you spend Java memory to avoid network latency. A minimal JDBC sketch of the idea follows.
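To make the fetch-size point concrete, a hedged JDBC sketch; the connection URL, credentials and items table are invented, and setFetchSize is the only call actually under discussion:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// With the Oracle driver's default of 10 rows per round trip, reading 500 rows
// costs about 50 network hops; an explicit fetch size trades a little client
// memory for far fewer round trips.
public class FetchSizeDemo {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//db-host:1521/app", "app_user", "secret");   // placeholder URL and credentials
             PreparedStatement ps = con.prepareStatement("SELECT id, label FROM items")) {
            ps.setFetchSize(500);   // rows fetched per round trip instead of the driver default
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    process(rs.getLong("id"), rs.getString("label"));
                }
            }
        }
    }

    private static void process(long id, String label) { /* application logic */ }
}
```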
Quarkus announces the MCP servers project, a collection of MCP servers written in Java https://quarkus.io/blog/introducing-mcp-servers/
MCP comes from Anthropic; the servers include a JDBC database introspector, a file-system reader and a JavaFX drawing server. They are easy to start with jbang and have been tested with Claude Desktop, Goose and mcp-cli. This lets your AI tap into the power of Java libraries. Spring, by the way, has released version 0.6 of its MCP support https://spring.io/blog/2025/01/23/spring-ai-mcp-0
Infrastructure
Apache Flink on Kubernetes https://www.decodable.co/blog/get-running-with-apache-flink-on-kubernetes-2
A very complete two-part article on installing Flink on Kubernetes: installation and setup, but also checkpointing, high availability and observability.
Data and Artificial Intelligence
10 prompt engineering techniques https://medium.com/google-cloud/10-prompt-engineering-techniques-every-beginner-should-know-bf6c195916c7
If you want to go further, the article references a very good white paper on prompt engineering https://www.kaggle.com/whitepaper-prompt-engineering
The techniques covered:
Zero-Shot Prompting: you ask the AI to answer directly, without any prior example. It is like asking someone a question with no context.
Few-Shot Prompting: you give the AI one or more examples of the task you want it to perform (see the small sketch after this list). It is like showing someone how to do something before asking them to do it.
System Prompting: you define the overall context and goal of the task. It is like giving the AI general instructions about what it must do.
Role Prompting: you assign the AI a specific role (teacher, journalist, and so on). It is like asking someone to play a particular part.
Contextual Prompting: you provide extra information or context for the task. It is like giving someone everything they need to answer a question.
Step-Back Prompting: you first ask a general question, then use the answer to ask a more specific one. It is like asking an open question before a closed one.
Chain-of-Thought Prompting: you ask the AI to show, step by step, how it reaches its conclusion. It is like asking someone to explain their reasoning.
Self-Consistency Prompting: you ask the same question several times and compare the answers to keep the most consistent one. It is like checking an answer by asking it in different ways.
Tree-of-Thoughts Prompting: you let the AI explore several lines of reasoning at the same time. It is like considering every option before making a decision.
ReAct Prompting: you let the AI interact with external tools to solve complex problems. It is like giving someone the tools they need to solve a problem.
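As promised in the few-shot item above, a library-free sketch of assembling a few-shot prompt in Java; the tickets and categories are invented, and the resulting string would be sent to whichever model you use:

```java
import java.util.List;

// Few-shot prompting boils down to prepending a handful of worked examples
// to the actual request before sending it to the model.
public class FewShotPrompt {
    record Example(String input, String output) {}

    public static void main(String[] args) {
        List<Example> examples = List.of(
                new Example("The build failed with OutOfMemoryError", "infrastructure"),
                new Example("Users cannot reset their password", "authentication"));

        StringBuilder prompt = new StringBuilder(
                "Classify each support ticket into a single category.\n\n");
        for (Example e : examples) {   // the "few shots"
            prompt.append("Ticket: ").append(e.input())
                  .append("\nCategory: ").append(e.output()).append("\n\n");
        }
        prompt.append("Ticket: The JVM crashes on startup after the upgrade\nCategory:");

        System.out.println(prompt);    // hand this string to the LLM of your choice
    }
}
```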
The Thoughtworks GenAI patterns https://martinfowler.com/articles/gen-ai-patterns/
Quite introductory and pre-RAG. The direct prompt, a straight call to the LLM: limited knowledge and limited control over the experience. Evals: evaluating an LLM's output with several techniques, but fundamentally a function that takes the request and the response and produces a numeric score; evaluation via an LLM (the same one or another), or human evaluation. Run the evaluations from the build pipeline but also live, since LLMs can evolve. It also describes embeddings, of images but also of text, with the notion of context.
DeepSeek and the end of NVIDIA's dominance https://youtubetranscriptoptimizer.com/blog/05_the_short_case_for_nvda
An article on why NVIDIA is going to get challenged on its margins: 90% margins, after all, thanks to the biggest GPUs and the proprietary CUDA. But alternative hardware approaches exist that are more efficient (TPUs and big wafer-scale chips); Google, Microsoft and others are building their own alternative GPUs; and CUDA is becoming less of a lingua franca, with Apple, Google, OpenAI and others investing in alternative intermediate languages. The article discusses DeepSeek, which came along and slapped the LLM world awake: they built a competitor to GPT-4o and o1 for about 5 million dollars, with impressive reasoning capabilities. The key was a pile of optimization tricks, the biggest being 8-bit neural-network weights instead of the 32 bits used by others, quantizing on the fly and at training time, plus a lot of innovative reinforcement learning and Mixture-of-Experts, making it roughly 50x cheaper than OpenAI. So no more need for GPUs with tons of VRAM, and, by the way, DeepSeek is open source. A SemiAnalysis article shifts the narrative a bit: the DeepSeek paper says a lot through its omissions. For example, the 6 million only covers GPU inference, not the research costs and the various trials and errors; by comparison, Claude Sonnet cost 10 million in inference. DeepSeek has a lot of hardware, acquired before the ban and some of it after, valued at around 5 billion in investment. Their advances and their openness remain extremely interesting.
An intro to Apache Iceberg http://blog.ippon.fr/2025/01/17/la-revolution-des-donnees-lavenement-des-lakehouses-avec-apache-iceberg/
Born from the limits of the data lake (unstructured) and of data warehouses (limited in data diversity and volume), enter the lakehouse, and in particular Apache Iceberg, which came out of Netflix. Schema management that stays flexible; the notion of copy-on-write versus merge-on-read depending on your needs; atomicity, consistency, isolation and durability guarantees; time travel and rollback; hidden partitions (which abstract away the partition and its transformations) and partition evolution; compatible with compute engines like Spark, Trino, Flink and others. The article explains the structure of the metadata and the data files. A small sketch of the table API follows.
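A hedged sketch of what hidden partitioning looks like with Iceberg's Java API, assuming the iceberg-core and Hadoop dependencies; the warehouse path, namespace and schema are invented, and a real deployment would more likely go through Spark, Trino or Flink as noted above:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.PartitionSpec;
import org.apache.iceberg.Schema;
import org.apache.iceberg.Table;
import org.apache.iceberg.catalog.TableIdentifier;
import org.apache.iceberg.hadoop.HadoopCatalog;
import org.apache.iceberg.types.Types;

// Creates a table whose partitioning is derived from a timestamp column:
// readers filter on event_time and Iceberg prunes the files for them,
// without anyone referencing the day() partition directly.
public class IcebergSketch {
    public static void main(String[] args) {
        Schema schema = new Schema(
                Types.NestedField.required(1, "event_id", Types.LongType.get()),
                Types.NestedField.required(2, "event_time", Types.TimestampType.withZone()),
                Types.NestedField.optional(3, "payload", Types.StringType.get()));

        PartitionSpec spec = PartitionSpec.builderFor(schema)
                .day("event_time")           // hidden partition derived from the column
                .build();

        HadoopCatalog catalog = new HadoopCatalog(new Configuration(), "/tmp/iceberg-warehouse");
        Table table = catalog.createTable(TableIdentifier.of("analytics", "events"), schema, spec);
        System.out.println("Created " + table.name() + " partitioned by " + table.spec());
    }
}
```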
Guillaume has fun generating short science-fiction stories by programming AI agents with LangChain4j, and also with workflows https://glaforge.dev/posts/2025/01/27/an-ai-agent-to-generate-short-scifi-stories/ https://glaforge.dev/posts/2025/01/31/a-genai-agent-with-a-real-workflow/
Building an automated science-fiction short-story generator with Gemini and Imagen, in Java with LangChain4j, on Google Cloud. Every night the system generates stories, completes them with illustrations created by the Imagen 3 model, and publishes them on a website. A self-reflection step uses Gemini to pick the best image for each chapter. The agent uses an explicit workflow, driven by Java code, where the steps are predefined in the code rather than relying on LLM-based planning. The code is available on GitHub and the application is deployed on Google Cloud. The article contrasts explicit workflow agents with autonomous agents, highlighting the trade-offs of each approach, because autonomous AI agents that manage their own planning sometimes hallucinate a little too much and do not build a plan correctly, do not follow it properly, or even hallucinate function calls. The project uses Cloud Build, Cloud Run jobs, Cloud Scheduler, Firestore as the database, and Firebase for deployment and front-end automation.
In the second article the approach is different: Guillaume uses a workflow tool rather than driving the plan with Java code. The imperative approach uses explicit Java code to orchestrate the workflow, offering precise control and parallelization. The declarative approach uses a YAML file to define the workflow, specifying the steps, inputs, outputs and execution order. The workflow includes the steps to generate a story with Gemini 2, create an image prompt, generate images with Imagen 3, and save the result in Cloud Firestore (a NoSQL database). The main advantages of the imperative approach are precise control, explicit parallelization and familiar programming tools. The main advantages of the declarative approach are workflow definitions that are perhaps easier to understand (even if it is YAML, yuck!), visualization, scalability and simplified maintenance (you can just change the YAML in the console, like in the good old days of PHP in production). The drawbacks of the imperative approach include the need for programming skills, potential maintenance challenges and container management. The drawbacks of the declarative approach include painful YAML authoring, limited parallelization control, the lack of a local emulator and less intuitive debugging. The choice between the approaches depends on the project's requirements, with the declarative one suited to simpler workflows. The article concludes that declarative planning can help AI agents stay focused and predictable. A stripped-down illustration of the explicit-workflow style follows.
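A framework-free sketch of that explicit-workflow style, with the model calls stubbed behind interfaces; this illustrates the pattern only and is not the article's actual code:

```java
import java.util.List;

// The steps and their order live in plain Java code, not in an LLM's plan.
// In the real project the two interfaces would be backed by LangChain4j
// calls to Gemini and Imagen; here they are stubbed for the example.
public class StoryWorkflow {
    interface TextModel  { String generate(String prompt); }
    interface ImageModel { byte[] render(String prompt); }
    record Chapter(String text, byte[] illustration) {}

    private final TextModel writer;
    private final ImageModel illustrator;

    StoryWorkflow(TextModel writer, ImageModel illustrator) {
        this.writer = writer;
        this.illustrator = illustrator;
    }

    List<Chapter> run(String theme) {
        // Step 1: generate the story text.
        String story = writer.generate("Write a 3-chapter sci-fi short story about " + theme);
        // Step 2: one image prompt and one illustration per chapter, in a fixed order.
        return story.lines()
                .filter(line -> !line.isBlank())
                .map(chapter -> {
                    String imagePrompt = writer.generate("Describe an illustration for: " + chapter);
                    return new Chapter(chapter, illustrator.render(imagePrompt));
                })
                .toList();
        // Step 3 (persisting and publishing) is omitted from this sketch.
    }

    public static void main(String[] args) {
        StoryWorkflow workflow = new StoryWorkflow(
                prompt -> "Chapter 1: ...\nChapter 2: ...\nChapter 3: ...",   // stubbed text model
                prompt -> new byte[0]);                                       // stubbed image model
        System.out.println(workflow.run("a lighthouse at the edge of a wormhole").size());
    }
}
```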
Tooling
Vulnerabilities in Maven proxy repositories https://github.blog/security/vulnerability-research/attacks-on-maven-proxy-repositories/
Whatever the language or technology, it is strongly recommended to put repository managers in place as proxies, to better control the dependencies that go into your products. Michael Stepankin of the GitHub Security Lab looked into whether these proxies are themselves a source of vulnerabilities, by studying a few CVEs in products such as JFrog Artifactory, Sonatype Nexus and Reposilite. Some flaws come from the products' UI, which displays artifacts (for example, put some JavaScript in a POM file) and even lets you browse inside them (for example, view the contents of a jar/zip, then abuse the API to read or even modify files on the server outside the archives). Artifacts can also be compromised by playing with proprietary URL parameters or with naming and encodings. In short, nothing is simple at any level. Every system adds complexity, and it is important to keep them up to date. You have to actively monitor your supply chain through several means and not bet everything on the repository manager. The author gave a talk on the subject: https://www.youtube.com/watch?v=0Z_QXtk0Z54
Apache Maven 4... soon, promise... what will be in it? https://gnodet.github.io/maven4-presentation/ And also https://github.com/Bukama/MavenStuff/blob/main/Maven4/whatsnewinmaven4.md
Apache Maven 4: slowly but surely, that is how this project works. Maven 4.0.0-rc-2 is available (Dec 2024). Maven is more than 20 years old and widely used across the Java ecosystem. Backward compatibility has always been a priority, but it has limited flexibility. Maven 4 introduces significant changes, notably a new build schema and code improvements.
POM changes
Separation of the Build-POM and the Consumer-POM: the Build-POM contains build-specific information (plugins, configuration); the Consumer-POM contains only what consumers of the artifacts need (dependencies, for example).
New model version 4.1.0: used only for the Build-POM, while the Consumer-POM stays at 4.0.0 for compatibility; it introduces new elements and marks some as deprecated.
Modules renamed to subprojects: "modules" become "subprojects" to avoid confusion with Java modules; the subprojects element replaces modules (which remains supported).
New "bom" (Bill of Materials) packaging type: distinguishes parent POMs from dependency-management BOMs, and supports exclusions and classifier-based imports.
Explicit declaration of the root directory: lets you define the project's root directory explicitly, removing any ambiguity about where project roots are.
New directory variables: ${project.rootDirectory}, ${session.topDirectory} and ${session.rootDirectory} for better path handling, replacing the old unofficial workarounds and deprecated internal variables.
Support for alternative POM syntaxes: the ModelParser SPI allows alternative syntaxes for the POM; the Apache Maven Hocon Extension is an early example of this capability.
Improvements for subprojects
Automatic parent versioning: no more need to set the parent version in every subproject; works with model version 4.1.0 and extends to dependencies within the project.
Full support for CI-friendly variables: the Flatten Maven Plugin is no longer required; variables like ${revision} are supported for versioning and can be set via maven.config or on the command line (mvn verify -Drevision=4.0.1).
Reactor improvements and fixes: better handling of --also-make when resuming builds; a new --resume (-r) option to restart from the last failed subproject, skipping subprojects that already built successfully.
Subfolder-aware builds: you can run tools on selected subprojects only; the recommendation is to use mvn verify rather than mvn clean install.
Other improvements: consistent timestamps for all subprojects in packaged archives.
Improved deployment: deployment only happens if all subprojects build successfully.
Workflow, lifecycle and execution changes
Java 17 required to run Maven: Java 17 is the minimum JDK required to run Maven 4; older Java versions can still be targeted for compilation via Maven Toolchains. Java 17 was preferred over Java 21 because of its more extensive long-term support.
Plugin updates and application maintenance: removal of deprecated features (e.g. Plexus Containers, ${pom.} expressions); the Super POM is updated, changing the default plugin versions. Builds may behave differently, so pin your plugin versions to avoid unexpected changes; Maven 4 shows a warning when default versions are used.
New "Fail on Severity" parameter: the build can fail when log messages reach a given severity level (e.g. WARN), usable via --fail-on-severity WARN or -fos WARN.
Maven Shell (mvnsh): previously, every mvn run required a full Java/Maven restart. Maven 4 introduces Maven Shell (mvnsh), which keeps a single resident Maven process open across commands, improving performance and reducing build times. Alternative: use the Maven Daemon (mvnd), which manages a pool of resident Maven processes.
Architecture
An article on feature flags with Unleash https://feeds.feedblitz.com//911939960/0/baeldungImplement-Feature-Flags-in-Java-With-Unleash
For A/B testing and faster development cycles, to "test in production". It shows how to run Unleash under Docker and how to add the library to Java code to check a feature flag (a small sketch follows).
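A hedged sketch of the "check a flag from Java" step, assuming the io.getunleash Java client; the server URL, token and flag name are placeholders, and builder method names can vary slightly between client versions:

```java
import io.getunleash.DefaultUnleash;
import io.getunleash.Unleash;
import io.getunleash.util.UnleashConfig;

// The client polls the Unleash server (here assumed to be running locally via
// Docker) and flag checks are then answered from an in-memory cache.
public class FeatureFlagSketch {
    public static void main(String[] args) {
        UnleashConfig config = UnleashConfig.builder()
                .appName("orders-service")
                .instanceId("orders-1")
                .unleashAPI("http://localhost:4242/api/")                 // placeholder server URL
                .customHttpHeader("Authorization", "<client-api-token>")  // placeholder token
                .build();
        Unleash unleash = new DefaultUnleash(config);

        // Second argument is the fallback used when the flag (or server) is unavailable.
        if (unleash.isEnabled("new-checkout-flow", false)) {
            System.out.println("Serving the new checkout flow");
        } else {
            System.out.println("Serving the legacy checkout flow");
        }
    }
}
```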
Security
Keycloak 26.1 https://www.keycloak.org/2025/01/keycloak-2610-released.html
Node detection via the database instead of network exchanges, virtual threads for Infinispan and JGroups, OpenTelemetry tracing support, and plenty of security features.
Law, society and organization
The big cost and revenue line items of a conference, here for bdx.io (http://bdx.io) https://bsky.app/profile/ameliebenoit33.bsky.social/post/3lgzslhedzk2a
44% tickets, 52% sponsors; 38% venue rental, 29% catering and coffee, 12% booth building, 5% speaker expenses (so not all of them).
Ask Me Anything
Julien de Provin: I really like Quarkus's "continuous testing" mode, and I was wondering whether an alternative exists outside Quarkus or, failing that, resources on how it works? I would love an agnostic tool usable on the non-Quarkus projects I work on, even if it takes a bit of elbow grease (or finger grease in this case).
https://github.com/infinitest/infinitest/
Conferences
The list of conferences comes from Developers Conferences Agenda/List by Aurélie Vache and contributors: 6-7 février 2025 : Touraine Tech - Tours (France) 21 février 2025 : LyonJS 100 - Lyon (France) 28 février 2025 : Paris TS La Conf - Paris (France) 6 mars 2025 : DevCon #24 : 100% IA - Paris (France) 13 mars 2025 : Oracle CloudWorld Tour Paris - Paris (France) 14 mars 2025 : Rust In Paris 2025 - Paris (France) 19-21 mars 2025 : React Paris - Paris (France) 20 mars 2025 : PGDay Paris - Paris (France) 20-21 mars 2025 : Agile Niort - Niort (France) 25 mars 2025 : ParisTestConf - Paris (France) 26-29 mars 2025 : JChateau Unconference 2025 - Cour-Cheverny (France) 27-28 mars 2025 : SymfonyLive Paris 2025 - Paris (France) 28 mars 2025 : DataDays - Lille (France) 28-29 mars 2025 : Agile Games France 2025 - Lille (France) 3 avril 2025 : DotJS - Paris (France) 3 avril 2025 : SoCraTes Rennes 2025 - Rennes (France) 4 avril 2025 : Flutter Connection 2025 - Paris (France) 4 avril 2025 : aMP Orléans 04-04-2025 - Orléans (France) 10-11 avril 2025 : Android Makers - Montrouge (France) 10-12 avril 2025 : Devoxx Greece - Athens (Greece) 16-18 avril 2025 : Devoxx France - Paris (France) 23-25 avril 2025 : MODERN ENDPOINT MANAGEMENT EMEA SUMMIT 2025 - Paris (France) 24 avril 2025 : IA Data Day 2025 - Strasbourg (France) 29-30 avril 2025 : MixIT - Lyon (France) 7-9 mai 2025 : Devoxx UK - London (UK) 15 mai 2025 : Cloud Toulouse - Toulouse (France) 16 mai 2025 : AFUP Day 2025 Lille - Lille (France) 16 mai 2025 : AFUP Day 2025 Lyon - Lyon (France) 16 mai 2025 : AFUP Day 2025 Poitiers - Poitiers (France) 24 mai 2025 : Polycloud - Montpellier (France) 24 mai 2025 : NG Baguette Conf 2025 - Nantes (France) 5-6 juin 2025 : AlpesCraft - Grenoble (France) 5-6 juin 2025 : Devquest 2025 - Niort (France) 10-11 juin 2025 : Modern Workplace Conference Paris 2025 - Paris (France) 11-13 juin 2025 : Devoxx Poland - Krakow (Poland) 12-13 juin 2025 : Agile Tour Toulouse - Toulouse (France) 12-13 juin 2025 : DevLille - Lille (France) 13 juin 2025 : Tech F'Est 2025 - Nancy (France) 17 juin 2025 : Mobilis In Mobile - Nantes (France) 24 juin 2025 : WAX 2025 - Aix-en-Provence (France) 25-26 juin 2025 : Agi'Lille 2025 - Lille (France) 25-27 juin 2025 : BreizhCamp 2025 - Rennes (France) 26-27 juin 2025 : Sunny Tech - Montpellier (France) 1-4 juillet 2025 : Open edX Conference - 2025 - Palaiseau (France) 7-9 juillet 2025 : Riviera DEV 2025 - Sophia Antipolis (France) 18-19 septembre 2025 : API Platform Conference - Lille (France) & Online 2-3 octobre 2025 : Volcamp - Clermont-Ferrand (France) 6-10 octobre 2025 : Devoxx Belgium - Antwerp (Belgium) 9-10 octobre 2025 : Forum PHP 2025 - Marne-la-Vallée (France) 16-17 octobre 2025 : DevFest Nantes - Nantes (France) 4-7 novembre 2025 : NewCrafts 2025 - Paris (France) 6 novembre 2025 : dotAI 2025 - Paris (France) 7 novembre 2025 : BDX I/O - Bordeaux (France) 12-14 novembre 2025 : Devoxx Morocco - Marrakech (Morocco) 28-31 janvier 2026 : SnowCamp 2026 - Grenoble (France) 23-25 avril 2026 : Devoxx Greece - Athens (Greece) 17 juin 2026 : Devoxx Poland - Krakow (Poland)
Contact us
To react to this episode, come discuss it in the Google group https://groups.google.com/group/lescastcodeurs Reach us via X/Twitter https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com Record a crowdcast or a crowdquestion Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs All the episodes and all the info at https://lescastcodeurs.com/

javaswag
#75 - Ilya Ilyinykh - Go as a Java killer, and Vim

javaswag

Play Episode Listen Later Feb 4, 2025 154:50


In episode 75 of the Javaswag podcast we talked with Ilya Ilyinykh about why Go is better than Java, and how Vim makes you a better developer.
00:00 Moving from Java to Go
06:13 Problems with Optional and how it gets used
11:20 Using Optional in Java
18:30 Why code formatting matters
23:42 Problems and solutions within a team
31:05 Switching to Vim and its connection to Go
36:30 Problems with Gradle and tests
44:51 Projects and microservice architecture
51:03 Comparing Go and Java
56:13 Diving into Go and its particularities
01:02:17 Tools and tooling in Go
01:10:36 Mutation testing and why it matters
01:16:09 Comparing testing in Java and Go
01:24:44 Principles for writing resilient tests
01:31:32 Testing as a black box
01:37:13 Interfaces in Go and Java
01:43:09 Error handling in Go and Java
01:48:18 Monad theory and its application in Java
01:53:35 The difficulties of writing large projects in Go
01:58:54 New Go features and the use of generics
02:04:50 Iterators in Go and Lua
02:13:26 Effective ways of working with code
02:19:12 Unpopular opinions about threads in Java and Kotlin
02:24:34 Problems and solutions in Java and Kotlin
Guest: Ilya, from @kydavoiti
Links: https://github.com/ilyasyoy
Podcast links: Website - https://javaswag.github.io/ Telegram - https://t.me/javaswag YouTube - https://www.youtube.com/@javaswag LinkedIn - https://www.linkedin.com/in/volyihin/ X - https://x.com/javaswagpodcast
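The Go-versus-Java error-handling segment (01:43:09) can be made concrete in plain Java; a hedged sketch using a sealed Result type as a rough analogue of Go's explicit error values, with all names invented for the example:

```java
// A sealed Result makes the failure path explicit in the signature,
// somewhat like Go's (value, err) pair, instead of throwing.
public class ResultDemo {
    sealed interface Result<T> permits Ok, Err {}
    record Ok<T>(T value) implements Result<T> {}
    record Err<T>(String message) implements Result<T> {}

    static Result<Integer> parsePort(String raw) {
        try {
            int port = Integer.parseInt(raw);
            return (port > 0 && port < 65536) ? new Ok<>(port) : new Err<>("out of range: " + raw);
        } catch (NumberFormatException e) {
            return new Err<>("not a number: " + raw);
        }
    }

    public static void main(String[] args) {
        for (String raw : new String[] {"8080", "99999", "oops"}) {
            String msg = switch (parsePort(raw)) {      // exhaustive switch over the sealed type (Java 21+)
                case Ok<Integer> ok -> "listening on " + ok.value();
                case Err<Integer> err -> "config error: " + err.message();
            };
            System.out.println(msg);
        }
    }
}
```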

Modernize or Die ® Podcast - CFML News Edition
Modernize or Die® - CFML News Podcast for January 28th, 2025 - Episode 227

Modernize or Die ® Podcast - CFML News Edition

Play Episode Listen Later Jan 28, 2025 56:54


2025-01-28 Weekly News — Episode 227Watch the video version on YouTube at https://youtube.com/live/H8Ht5xYgUFA?feature=share Hosts: Gavin Pickin - Senior Developer at Ortus SolutionsEric Peterson - Senior Developer at Ortus SolutionsBig Thanks to our Sponsor - Ortus SolutionsThe makers of ColdBox, CommandBox, ForgeBox, TestBox and all your favorite box-es out there including BoxLang.A few ways to say thanks back to Ortus Solutions:Buy Tickets to Into the Box 2025 in Washington DC https://t.co/cFLDUJZEyMApril 30, 2025 - May 2, 2025 - Washington, DCLike and subscribe to our videos on YouTube. Help ORTUS reach for the Stars - Star and Fork our ReposStar all of your Github Box Dependencies from CommandBox with https://www.forgebox.io/view/commandbox-github Subscribe to our Podcast on your Podcast Apps and leave us a reviewSign up for a free or paid account on CFCasts, which is releasing new content regularlyBOXLife store: https://www.ortussolutions.com/about-us/shopBuy Ortus's Books102 ColdBox HMVC Quick Tips and Tricks on GumRoad (http://gum.co/coldbox-tips)Now on Amazon! In hardcover too!!!https://www.amazon.com/dp/B0CJHB712MLearn Modern ColdFusion (CFML) in 100+ Minutes - Free online https://modern-cfml.ortusbooks.com/ or buy an EBook or Paper copy https://www.ortussolutions.com/learn/books/coldfusion-in-100-minutes Patreon Support (resplendent)We have 62 patreons: https://www.patreon.com/ortussolutions. News and AnnouncementsBeware that ColdFusion 2021 end-of-life (and end of updates) is coming Nov 2025, and your optionsAre you still running ColdFusion 2021? While it's still supported/updated by Adobe, did you know that its end-of-life is coming just several months from now, Nov 10, 2025? That's the date when "core" support for that release ends--meaning no more updates from Adobe after that, not even security fixes.What about more recent releases, if you may wonder? CF 2023 (the current latest release) will get updates into 2028 (5 years after it was release). And there's the coming CF 2025 release, currently in pre-release (as I have recently blogged about), which is of course a great sign for the continued vitality of CF.But this looming deadline for CF2021 is a reminder that as the years roll on, we not only get new versions but we must say good-bye to old ones.Wondering what you can do? or when CF2023 or CF2025 support will end also? And what's the difference between "core" support and the available "extended" support which Adobe sells? (The extended support plan does NOT provide updates beyond this coming November.) For more on these, including official Adobe documentation that discusses such things, as well as my thoughts on migration, costs, various options to consider, and more, do read on.https://www.carehart.org/blog/2025/1/9/coldfusion2021_end_of_life_nov_2025  Java updates of Jan 21, 2025 for 8, 11, 17, 21, and 23: thoughts and resourcesIt's that time again: there are new JVM updates released today (Jan 21, 2025) for the current long-term support (LTS) releases of Oracle Java, 8, 11, 17, and 21, as well as the new short-term release 23. (The previous short-term release, Java 22, is no longer updated.)TLDR: The new updates are 1.8.0_441 (aka 8u441), 11.0.26, 17.0.14, 21.0.6, and 23.0.2, respectively. Crazy that there are now 5 current Java releases, I realize. 
More below, including more on each of them including what changed as well as bug fixes and the security fixes each version contains (including their CVE scores regarding urgency of concerns), which are offered in Oracle resources I list below.Oracle calls these updates "critical patch updates" (yep, "CPU"), but they are in fact scheduled quarterly updates, so that the "critical" aspect of this nomenclature may sometimes be a bit overstated. As is generally the case with these Java updates, most of them have the same changes and fixes across the four JVM versions, though not always.For some folks, that's all they need to hear. For others, read on.Whether this is your first time updating Java or your fiftieth, there are some things that you may or may not know.https://www.carehart.org/blog/2025/1/21/java_updates_jan_2025 Into the Box Round 1 of Sessions and Workshops are now out!Our first round of sessions and workshops for Into the Box 2025 is here! Get ready to dive into a world of modern web development with hands-on workshops and engaging sessions led by Ortus Solutions and Community CFML and BoxLang experts. Visit intothebox.org to explore what's in store—this is just the beginning, with much more content coming soon! https://www.ortussolutions.com/blog/into-the-box-round-1-of-sessions-and-workshopsLast chance to save 25% off CF2023, for those on CF2018 and earlierHere's great news for those still running CF2018 or earlier, who may have been holding off upgrading to CF2023 (because you would have to pay full price for it). It's news I first shared back in July, and the deal has been extended one last time, thus this post.TLDR; Now through Feb 28, 2025 those running CF9, 10, 11, 2016 or 2018 can upgrade to CF2023 for 25% off its full price. (Those running CF2021 can already/always could upgrade at 50% off the full price.)This is a deal offered only by Intergral, makers of FusionReactor, who are also resellers of CF. Adobe doesn't even offer this deal themselves. For more, see their blog post at https://fusion-reactor.com/blog/news/save-25-on-adobe-cf2023-upgrades/.Act now, it could save you hundreds or even thousands of $$s on a single license! https://www.carehart.org/blog/2025/1/17/last_chance_for_cf_upgrade_discount_from_cf2018_or_earlier New Releases and UpdatesTestBox v6.1.0 releaseWe're super excited to announce the release of TestBox 6.1.0! This release introduces native support for BoxLang without the need for a compatibility mode, unlocking new possibilities for developers embracing BoxLang's dynamic capabilities. Alongside this exciting update, we've added valuable features, improved functionality, and resolved key issues to ensure a smoother and more robust testing experience. Dive into the details and see how TestBox 6.1.0 makes your testing even more seamless and efficient!https://www.ortussolutions.com/blog/testbox-v610-releasecbMockData v4New name and new support for BoxLang!https://forgebox.io/vie...

Inside Java
“Doc, JavaDoc and Markdown” with Jonathan Gibbons

Inside Java

Play Episode Listen Later Jan 21, 2025 56:23


Java leads by example regarding documentation: JavaDoc inspires trust in developers through its transparency on each Java API functionality, and the javadoc tool helps developers generate equally great documentation for their APIs and libraries. In this episode, Ana hosts Jonathan Gibbons, core contributor and maintainer of JDK tools, to discuss JavaDoc/javadoc developments, focusing on markdown in JavaDoc documentation comments. Given the importance of having code that is as easy to understand as it is functional, Jonathan dives into significant changes in Java's documentation component and associated tools, how JavaDoc is maintained, code documentation practices, and more.
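A hedged sketch of what the Markdown documentation comments discussed here look like in source (JEP 467, final since JDK 23); the RetryPolicy class is invented for the example:

```java
public class RetryPolicy {
    /// Returns how many retries to attempt for the given HTTP status code.
    ///
    /// Markdown doc comments (JEP 467) use `///` lines instead of `/** ... */`:
    ///
    /// - *emphasis*, **strong emphasis** and `code spans` work as in Markdown
    /// - links to program elements use reference syntax, e.g. [java.util.Optional]
    ///
    /// @param statusCode the status code returned by the last attempt
    /// @return how many more times the call should be retried
    public int retriesFor(int statusCode) {
        return statusCode >= 500 ? 3 : 0;   // illustrative policy only
    }
}
```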

javaswag
#74 - Derar Bakr - Real-time systems on the JVM stack

javaswag

Play Episode Listen Later Jan 20, 2025 158:27


In episode 74 of the Javaswag podcast we talked with Derar about real-time attribution in an ad network built on the JVM stack.
00:00 Intro
05:46 Strengths and weaknesses of the JVM platform
11:56 Thoughts on the future of Java
17:12 What is AppsFlyer?
22:20 Processing big data at AppsFlyer
28:16 Data-processing architecture and Kafka
36:40 Clojure
42:49 Efficiency and expressiveness of Clojure code
49:17 Java and Clojure
55:24 Problems of the Clojure ecosystem
01:02:40 How much Java fundamentals a Clojure developer needs
01:11:19 Scaling and partitions in Kafka
01:16:24 Counting app uninstalls
01:22:57 Tools for scaling and data processing
01:30:09 The minimum a developer of multithreaded applications should know
01:39:31 Blocking and non-blocking I/O
01:45:49 The difficulties of working with data and identifiers
01:52:55 Working at large companies and in corporate environments
01:59:26 Mentoring
02:05:46 The role of soft skills in an engineering career
02:11:20 Anonymous feedback
02:18:11 AI
02:24:22 An unpopular opinion about threads in Java
02:32:33 Asynchrony and performance in modern systems
Guest: https://www.linkedin.com/in/derarbakr/
Links: Optimizing 25PB Storage https://docs.google.com/presentation/d/1H8Kw3lBAw_HqK_4ZTWFT-a1EuVwNgB4j/edit
Podcast links: Website - https://javaswag.github.io/ Telegram - https://t.me/javaswag YouTube - https://www.youtube.com/@javaswag LinkedIn - https://www.linkedin.com/in/volyihin/ X - https://x.com/javaswagpodcast
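To make the Kafka partitioning discussion (01:11:19) concrete, a minimal consumer sketch with the standard Kafka Java client; the topic, group id and broker address are invented, and keying by device id is just one way to pin one identifier's events to a single partition:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Scaling is driven by partitions: each partition of "install-events" is
// assigned to at most one consumer in the "attribution" group, so adding
// consumers (up to the partition count) spreads the load.
public class AttributionConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "attribution");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("install-events"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // key = device/app id, so all events for one id land in the same partition
                    System.out.printf("partition=%d key=%s value=%s%n",
                            record.partition(), record.key(), record.value());
                }
            }
        }
    }
}
```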

GOTO - Today, Tomorrow and the Future
JVM Performance Engineering • Monica Beckwith & Kirk Pepperdine

GOTO - Today, Tomorrow and the Future

Play Episode Listen Later Jan 17, 2025 51:04 Transcription Available


This interview was recorded for the GOTO Book Club. http://gotopia.tech/bookclub
Read the full transcription of the interview here.
Monica Beckwith - Performance Engineer at Microsoft & Author of "JVM Performance Engineering"
Kirk Pepperdine - Principal Java Engineer at Microsoft
RESOURCES
Monica: https://x.com/mon_beck https://github.com/mo-beck https://www.linkedin.com/in/monicabeckwith https://www.codekaram.com
Kirk: https://x.com/javaperftuning https://github.com/kcpeppe https://www.linkedin.com/in/kirk-pepperdine https://www.kodewerk.com
DESCRIPTION
Kirk Pepperdine and Monica Beckwith delve into the evolving world of performance engineering, focusing on Monica's book, JVM Performance Engineering. They discuss key advancements in the Java Virtual Machine (JVM) since JDK 8, including garbage collection and cloud-native applications. The conversation underscores the significance of experimental design and benchmarking, advocating for a collaborative approach that blends theoretical knowledge with practical application. Monica emphasizes the future of performance engineering lies in automation, AI, and machine learning, urging engineers from various disciplines to work together to navigate the complexities of distributed systems effectively.
RECOMMENDED BOOKS
Monica Beckwith • JVM Performance Engineering • https://amzn.to/3BkRoiO
Venkat Subramaniam • Cruising Along with Java • https://amzn.to/4dFuBwU
Markus Eisele & Natale Vinto • Modernizing Enterprise Java • https://amzn.to/3EsEtZ3
Kevlin Henney & Trisha Gee • 97 Things Every Java Programmer Should Know • https://amzn.to/3kiTwJJ
Dave Thomas & Andy Hunt • The Pragmatic Programmer • https://amzn.to/3azvUy3
Joshua Bloch • Effective Java • https://amzn.to/3ygmQJt
Diana Montalion • Learning Systems Thinking • https://amzn.to/3ZpycdJ
Bluesky | Twitter | Instagram | LinkedIn | Facebook
CHANNEL MEMBERSHIP BONUS
Join this channel to get early access to videos & other perks: https://www.youtube.com/channel/UCs_tLP3AiwYKwdUHpltJPuA/join
Looking for a unique learning experience? Attend the next GOTO conference near you! Get your ticket: gotopia.tech
SUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted daily!
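Since the conversation keeps returning to experimental design and benchmarking, a minimal JMH sketch of the kind of microbenchmark that discipline applies to; the string-concatenation workload is a placeholder and not something taken from the book:

```java
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

// Two alternative implementations measured under identical conditions,
// which is the kind of controlled experiment the episode argues for.
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Benchmark)
public class ConcatBenchmark {
    private String a = "jvm";
    private String b = "performance";

    @Benchmark
    public String plusOperator() {
        return a + b;                                   // measured code path #1
    }

    @Benchmark
    public String builder() {
        return new StringBuilder(a).append(b).toString();   // measured code path #2
    }
}
// Typically built with the jmh-core and jmh-generator-annprocess dependencies
// and run via the generated benchmarks jar.
```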

The Ride with JMV Podcast
Best Of JMV 1-15-25

The Ride with JMV Podcast

Play Episode Listen Later Jan 15, 2025 52:37


00:00 - 19:56 - Noah Eagle of Peacock joins the show! Noah was on the call of IU-Illinois last night, and he gives some insight into what he saw from both the Fighting Illini and the Hoosiers! They also talk about the NIL, and how that has affected teams. Noah and JMV then talk the NBA, and the Cleveland Cavaliers. 19:57 - 42:34 - Kevin Bowen from the morning show The Wake-Up Call w/KB & Andy joins the show! Kevin and JMV discuss the Colts trip to Germany to host a regular season game next season. They talk about how the overseas game will affect season-ticket holders. Kevin and JMV then discuss the Colts roster, and what pieces had a good season in 2024. 42:35 - 52:37 - George Padjen, the head coach of the Indy Ignite volleyball team, joins the show! George and JMV talk about the Ignite’s debut match last week! They also look ahead to their next game at the Fisher’s Event Center! Support the show: https://1075thefan.com/the-ride-with-jmv/

The Ride with JMV Podcast
Full Show: IU Gets Blown Out Again, Pacers Lose To Cavs + More!

The Ride with JMV Podcast

Play Episode Listen Later Jan 15, 2025 131:19


00:00 – 25:34 -– JMV opens the show by venting his frustrations with Indiana Basketball following their blowout loss to Illinois last night. He talks about the lack of effort from the Hoosiers, and how it’s okay to boo them considering they’re getting paid like pro athletes now. He also talks the Colts playing in Berlin, as well as the Pacers loss to the Cavs last night. 25:35 – 37:48 – JMV talks some German bands with the Colts headed to Berlin. He then takes a call from a listener of the show! 37:49 – 43:34 – JMV wraps up the first hour by talking about the Pacers, and more about the Indiana Hoosiers! 43:35 – 1:10:18 – Kevin Bowen from the morning show The Wake-Up Call w/KB & Andy joins the show! Kevin and JMV discuss the Colts trip to Germany to host a regular season game next season. They talk about how the overseas game will affect season-ticket holders. Kevin and JVM then discuss the Colts roster, and what pieces had a good season in 2024. 1:10:19 – 1:21:48 – JMV continues to talk about the Colts trip to Berlin next season. He then circles back to IU basketball, and the dire state of the program. 1:21:49 – 1:26:37 – JMV wraps up the 2nd hour by reading some comments from listeners of the show! 1:26:38 – 1:50:06 – Noah Eagle of Peacock joins the show! Noah was on the call of IU-Illinois last night, and he gives some insight into what he saw from both the Fighting Illini and the Hoosiers! They also talk about the NIL, and how that has affected teams. Noah and JMV then talk the NBA, and the Cleveland Cavaliers. 1:50:07 – 2:05:49 – George Padjen, the head coach of the Indy Ignite volleyball team, joins the show! George and JMV talk about the Ignite’s debut match last week! They also look ahead to their next game at the Fisher’s Event Center! 2:05:50 – 2:11:18 – JMV wraps up the show by talking more about the Colts Berlin game, and by reading more comments from listeners! Support the show: https://1075thefan.com/the-ride-with-jmv/See omnystudio.com/listener for privacy information.

Sporthuset
487. Chris Härenstam: "I have a responsibility for the players' health"

Sporthuset

Play Episode Listen Later Jan 9, 2025 71:37


SVT commentator Chris Härenstam, together with Lasse Granqvist and Tommy Åström, in an outspoken conversation about the art of commentating on TV. About language, clichés, goal screams and the responsible choice of critical words. "I know several current NHL players who wanted to go home from World Junior (JVM) tournaments." After hate and attacks on social media against the Junior Crowns' goalkeeper, the question is whether the huge interest in junior ice hockey has gone off the rails. What can the big spectator sports do to prevent mental ill-health? Chris Härenstam has commentated on an improbable 96 sports during his career, and of course canoe polo is one of them, apropos of this week's love-bombing. And now the Handball World Championship begins, which of course means "Stad i ljus". Hosted on Acast. See acast.com/privacy for more information.

Inside Java
"Inside Java Weekly: JDK 24 and More" with Chad

Inside Java

Play Episode Listen Later Dec 20, 2024 15:01


In this shorter-format pod, Chad talks about JDK 24, preview features, and more.
Show Notes
JEP 11: Incubator Modules https://openjdk.org/jeps/11
JEP 12: Preview Features https://openjdk.org/jeps/12
Using the Preview Features Available in the JDK https://dev.java/learn/new-features/using-preview/
JEP 483: Ahead-of-Time Class Loading & Linking https://openjdk.org/jeps/483
JEP 485: Stream Gatherers https://openjdk.org/jeps/485
JEP 491: Synchronize Virtual Threads without Pinning https://openjdk.org/jeps/491
JEP 494: Module Import Declarations (Second Preview) https://openjdk.org/jeps/494
JEP 495: Simple Source Files and Instance Main Methods (Fourth Preview) https://openjdk.org/jeps/495
The Foreign Function and Memory API https://dev.java/learn/ffm/
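The notes link the Foreign Function & Memory API tutorial; a minimal sketch of a downcall to the C library's strlen, assuming the API shape finalized in JDK 22 and later (earlier previews used different allocation method names):

```java
import java.lang.foreign.Arena;
import java.lang.foreign.FunctionDescriptor;
import java.lang.foreign.Linker;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.SymbolLookup;
import java.lang.foreign.ValueLayout;
import java.lang.invoke.MethodHandle;

// Calls the C standard library's strlen without writing any JNI glue code.
public class StrlenDemo {
    public static void main(String[] args) throws Throwable {
        Linker linker = Linker.nativeLinker();
        SymbolLookup stdlib = linker.defaultLookup();
        MethodHandle strlen = linker.downcallHandle(
                stdlib.find("strlen").orElseThrow(),
                FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS));

        try (Arena arena = Arena.ofConfined()) {
            MemorySegment cString = arena.allocateFrom("Hello, FFM!");   // NUL-terminated C string
            long length = (long) strlen.invokeExact(cString);
            System.out.println(length);                                  // 11
        }
    }
}
```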

Modernize or Die ® Podcast - CFML News Edition
Modernize or Die® - CFML News Podcast for December 10th, 2024 - Episode 224

Modernize or Die ® Podcast - CFML News Edition

Play Episode Listen Later Dec 10, 2024 49:04


2024-12-10 Weekly News — Episode 224Watch the video version on YouTube at https://youtube.com/live/bV2CxQprVQM?feature=share Hosts: Gavin Pickin - Senior Developer at Ortus SolutionsGrant Copley - Senior Developer at Ortus SolutionsBig Thanks to our Sponsor - Ortus SolutionsThe makers of ColdBox, CommandBox, ForgeBox, TestBox and all your favorite box-es out there including BoxLang.A few ways to say thanks back to Ortus Solutions:Buy Tickets to Into the Box 2025 in Washington DC https://t.co/cFLDUJZEyMApril 30, 2025 - May 2, 2025 - Washington, DCLike and subscribe to our videos on YouTube. Help ORTUS reach for the Stars - Star and Fork our ReposStar all of your Github Box Dependencies from CommandBox with https://www.forgebox.io/view/commandbox-github Subscribe to our Podcast on your Podcast Apps and leave us a reviewSign up for a free or paid account on CFCasts, which is releasing new content regularlyBOXLife store: https://www.ortussolutions.com/about-us/shopBuy Ortus's Books102 ColdBox HMVC Quick Tips and Tricks on GumRoad (http://gum.co/coldbox-tips)Now on Amazon! In hardcover too!!!https://www.amazon.com/dp/B0CJHB712MLearn Modern ColdFusion (CFML) in 100+ Minutes - Free online https://modern-cfml.ortusbooks.com/ or buy an EBook or Paper copy https://www.ortussolutions.com/learn/books/coldfusion-in-100-minutes  Patreon Support (jolly)We have 59 patreons: https://www.patreon.com/ortussolutions. News and AnnouncementsAdobe CF2025 Beta is now openGet an exclusive sneak peek at what's next for ColdFusion! Sign up for the ColdFusion 2025 Beta Program and get early access to shape the future with us. Engage with the community in the forums, share your thoughts, and keep up with the newest updates and features.Make Your Voice Heard and Win Big!Join our weekly engagement challenge during the ColdFusion 2025 Beta! Every week, we'll reward top contributors with exciting prizes. Your feedback matters, and the more you share, the higher your chances of winning. Don't miss out—get involved and be rewarded!Deep Dive Sessions and Demo Code Access!We're thrilled to announce that for each feature in ColdFusion 2025, there will be exclusive deep-dive sessions hosted by the engineers who built it. These sessions will offer valuable insights and in-depth explanations straight from the experts.Additionally, we've set up a central GIT repository where all demo code will be hosted. This is the same repository where Mark has already pushed his code, and it will be the hub for all developers to share their contributions. Check out the GIT links and start exploring the code!Check out the curated webinar schedule and join us! https://coldfusion.adobe.com/2024/11/code-the-future-join-the-coldfusion-2025-beta-today/TestBox Latest Updates and News!Did You Miss It? The New TestBox Site & v6.0 Are Here!Share Your Feedback and Get Featured on Our Site!We're thrilled to have launched the new TestBox website and TestBox 6.0! If you haven't had a chance to explore yet, visit TestBox to discover updated documentation, powerful resources, and features that make testing more efficient than ever.https://www.ortussolutions.com/blog/testbox-updates-and-news New Releases and UpdatesICYMI - CommandBox 6.1.0 Released!We are pleased to announce the release of CommandBox 6.1.0, the latest release of our CLI, REPL, and Server, and Package Manager.  This is a minor update to our last release.  
It has a handful of new features, and bug fixes, as well as better out-of-the-box support for BoxLang, our new CFML-compatible JVM language.New FeaturesWebSocket ServerUpdates to run BoxLangAdd command to deploy Lucee lex or lco filesCheck if an entry has a hash associated to it and validate itImprovementSort by date last started when finding a server by web rootMake semantic version prerelease identifiers not case sensitivedefault servlet pass predicate include Boxlang filesTasksUpdate to Undertow 2.2.33.FinalUpdate to Lucee 5.4.6.9Update bundled JRE to 11.0.23+99 Bugshttps://www.ortussolutions.com/blog/commandbox-610-released https://commandbox.ortusbooks.com/ BoxLang Beta 23 and 24 Released12/2/24 - BoxLang 1.0.0 Beta 24 Launched3 New Features6 Improvements4 Bugs Fixedhttps://www.ortussolutions.com/blog/boxlang-100-beta-24-launched11/23/24 - BoxLang 1.0.0 Beta 23 Launched4 New Features2 Improvements1 Tasks10 Bugs Fixedhttps://www.ortussolutions.com/blog/boxlang-100-beta-23-launched Webinars, Meetups and WorkshopsOnline CF Meetup - From Development to Deployment: Load Testing ColdFusion Applications with Dakota ClumThursday, December 12, 20249:00 AM to 10:00 AM PSTThis session will go over how to load test a ColdFusion application after it is deployed in your environment. We will cover load testing options, setting up a simulated load test, and tuning adjustments that can be made as a result of load testing.https://www.meetup.com/coldfusionmeetup/events/304881310/ ADOBE CF 2025 Beta - SeminarsColdFusion 2025: What's new and exciting Mark Takata December 2, 2024Security and Stability in ColdFusion Parvathy and Atul December 3, 2024VS Code plugin changesVikas YadavDecember 4, 2024Smart language additions in ColdFusionAshudeep SharmaDecember 5, 2024Performance enhancementsSatyam MishraDecember 9, 2024Unleash the power of Revamping CFCharts for modern applicationsYukti AgrawalDecember 10, 2024Spreadsheets & CSV ProcessingNikhil DubeyDecember 11, 2024Microsoft Graph Integration in ColdFusion: Unlocking data with OauthShiva MarellaDecember 13, 2024What's new in containersSuchikaDecember 17, 2024Recap and QnAMark TakataDecember 18, 2024CFCasts Content Updateshttps://www.cfcasts.comConferences and TrainingICYMI - CF Summit India 2024Join us for the Adobe ColdFusion India Summit, a premier, completely free event where developers, indust...

Idaho Sports Talk
BOB, VIRGIN-MORGAN ON HIS BREAKOUT SEASON AT EDGE

Idaho Sports Talk

Play Episode Listen Later Dec 4, 2024 7:43


BRONCO FOCUS EVERY MONDAY-FRIDAY AT 3:45 P.M.: Bob Behler, the voice of Boise State athletics, joins Prater and Mallory to share his one-on-one interview with sophomore EDGE rusher Jayden Virgin-Morgan. Earlier this week, JVM was named first-team All-Mountain West.

Idaho Sports Talk
BOB, VIRGIN-MORGAN ON HIS BREAKOUT SEASON AT EDGE

Idaho Sports Talk

Play Episode Listen Later Dec 4, 2024 8:24


BRONCO FOCUS EVERY MONDAY-FRIDAY AT 3:45 P.M.: Bob Behler, the voice of Boise State athletics, joins Prater and Mallory to share his one-on-one interview with sophomore EDGE rusher Jayden Virgin-Morgan. Earlier this week, JVM was named first-team All-Mountain West.See omnystudio.com/listener for privacy information.

Prater & The Ballgame
BOB, VIRGIN-MORGAN ON HIS BREAKOUT SEASON AT EDGE

Prater & The Ballgame

Play Episode Listen Later Dec 4, 2024 8:24


BRONCO FOCUS EVERY MONDAY-FRIDAY AT 3:45 P.M.: Bob Behler, the voice of Boise State athletics, joins Prater and Mallory to share his one-on-one interview with sophomore EDGE rusher Jayden Virgin-Morgan. Earlier this week, JVM was named first-team All-Mountain West.See omnystudio.com/listener for privacy information.

Spring Snyggt - med Jesus och Manne
262. The rivals meet again, Ottfalk vs Myhlback

Spring Snyggt - med Jesus och Manne

Play Episode Listen Later Dec 4, 2024 109:08


Two of Sweden's greatest endurance talents ever are reunited! Alvar Myhlback, the eighteen-year-old cross-country skier who was granted a special dispensation to race the Vasaloppet at just sixteen, despite his young age. Most recently he pulled off an early breakaway that brought him bronze and gold for his teammate Torleif Syrstad. Karl Ottfalk, also eighteen, has finished fourth at the junior world cross-country championships against considerably older opponents, won gold at the Youth Olympics and taken a sensational senior Swedish championship gold over 5000 meters. For many years they raced each other at Lilla Lidingöloppet. For both of them it was the biggest battle of the year, and they were each other's fiercest rivals and greatest sources of inspiration. Now they reunite on the podcast and talk about what has happened since the wild sprint duel at Grönsta gärde in 2017. What are the similarities and differences in training between a cross-country skier and a runner?  John has recovered from a cold and run his long sessions at respectable paces. Manne keeps running a notch faster than his hamstrings allow.  Sponsors: Flowlife, Saucony and Löplabbet.   A running session, Christmas fika and shopping (20% off!) at Löplabbet Kungsgatan with Jesus and Manne on 11/11. Sign up here!   Become a Patreon supporter for extra material, access to an exclusive Facebook group and weekly training programs from John.  

javaswag
#73 - Seva Brekelov - automation, video streaming and AI for Miro

javaswag

Play Episode Listen Later Nov 24, 2024 128:36


In episode 73 of the Javaswag podcast we talked with Seva Brekelov about test automation, a video streaming platform, and AI at Miro.
00:00 Intro
12:07 The path into test automation
25:22 Moving from testing to programming
32:43 Working at Google and Nike
39:11 Order management and functionality
42:26 Testing and the problems with microservices
45:15 Introducing Testcontainers into the development process
48:06 Apache Camel
55:35 Working as a contractor and impostor syndrome
01:05:26 PMM
01:08:10 Building a video streaming platform, WebRTC
01:32:07 Generating synthetic data for banks
01:35:21 Miro and AI
01:51:22 Infrastructure
01:53:45 The principal engineer role
01:57:33 Unpopular opinions about Spring Boot
02:02:36 An unpopular opinion
Guest: https://www.linkedin.com/in/brekelov/
Links:
Seva's GitHub - https://github.com/volekerb/volekerb
Engineer Readings channel - https://t.me/engineerreadings
ICE Protocol - https://datatracker.ietf.org/doc/html/rfc5245
Playlist of the "Ошибка Выжившего" ("Survivorship Bias") show - https://www.youtube.com/playlist?list=PLsVTVVvrKX9ulEqi0KeI-NYXSNCO4o1x0
Miro AI - https://miro.com/ai/
Podcast links:
Website - https://javaswag.github.io/
Telegram - https://t.me/javaswag
Youtube - https://www.youtube.com/@javaswag
Linkedin - https://www.linkedin.com/in/volyihin/
X - https://x.com/javaswagpodcast

Modernize or Die ® Podcast - CFML News Edition
Modernize or Die® - CFML News Podcast for November 19th, 2024 - Episode 223

Modernize or Die ® Podcast - CFML News Edition

Play Episode Listen Later Nov 19, 2024 61:01


2024-11-19 Weekly News — Episode 223Watch the video version on YouTube at https://youtube.com/live/bFX1uaN5Hec?feature=share Hosts: Gavin Pickin - Senior Developer at Ortus SolutionsEric Peterson - Senior Developer at Ortus SolutionsBig Thanks to our Sponsor - Ortus SolutionsThe makers of ColdBox, CommandBox, ForgeBox, TestBox and all your favorite box-es out there including BoxLang.A few ways to say thanks back to Ortus Solutions:Buy Tickets to Into the Box 2025 in Washington DC https://t.co/cFLDUJZEyMApril 30, 2025 - May 2, 2025 - Washington, DCLike and subscribe to our videos on YouTube. Help ORTUS reach for the Stars - Star and Fork our ReposStar all of your Github Box Dependencies from CommandBox with https://www.forgebox.io/view/commandbox-github Subscribe to our Podcast on your Podcast Apps and leave us a reviewSign up for a free or paid account on CFCasts, which is releasing new content regularlyBOXLife store: https://www.ortussolutions.com/about-us/shopBuy Ortus's Books102 ColdBox HMVC Quick Tips and Tricks on GumRoad (http://gum.co/coldbox-tips)Now on Amazon! In hardcover too!!!https://www.amazon.com/dp/B0CJHB712MLearn Modern ColdFusion (CFML) in 100+ Minutes - Free online https://modern-cfml.ortusbooks.com/ or buy an EBook or Paper copy https://www.ortussolutions.com/learn/books/coldfusion-in-100-minutes  Patreon Support (jolly)We have 59 patreons: https://www.patreon.com/ortussolutions. New Releases and UpdatesCommandBox 6.1.0 Released!We are pleased to announce the release of CommandBox 6.1.0, the latest release of our CLI, REPL, and Server, and Package Manager.  This is a minor update to our last release.  It has a handful of new features, and bug fixes, as well as better out-of-the-box support for BoxLang, our new CFML-compatible JVM language.New FeaturesWebSocket ServerUpdates to run BoxLangAdd command to deploy Lucee lex or lco filesCheck if an entry has a hash associated to it and validate itImprovementSort by date last started when finding a server by web rootMake semantic version prerelease identifiers not case sensitivedefault servlet pass predicate include Boxlang filesTasksUpdate to Undertow 2.2.33.FinalUpdate to Lucee 5.4.6.9Update bundled JRE to 11.0.23+99 Bugshttps://www.ortussolutions.com/blog/commandbox-610-released https://commandbox.ortusbooks.com/ BoxLang Beta 22 Released11/15/24 - BoxLang 1.0.0 Beta 22 Launched4 New Features10 Improvements14 Bugs Fixedhttps://boxlang.ortusbooks.com/readme/release-history/1.0.0-beta22 Webinars, Meetups and WorkshopsBoost Your Confidence & Silence the Inner Critic: Overcome Imposter Phenomenon!Sac Interactive Tech Meetup • Sacramento, CAWed, Nov 20 · 6:00 PM PSThttps://www.meetup.com/sacinteractive/events/303708476/?eventOrigin=home_page_upcoming_events$all Online ColdFusion Meetup - "ColdFusion Horizons: Unveiling 2025", Charvi Dhoot (CF Product Manager)--CFMeetup #311Nov 21 at 12p US EasternOver the years, ColdFusion has evolved to support not only the development of dynamic web pages but also the creation of complex applications and services. It remains a popular choice for developers seeking a versatile and efficient platform for building web-based solutions and business applications.As we look ahead to the imminent release of ColdFusion 2025, we invite you to join us for an exclusive feature showcase that highlights the compelling reasons to adopt this upcoming version. 
Additionally, this session will provide a comprehensive overview of the strategic vision for future releases, offering an opportunity for you to engage with new ideas under consideration and contribute valuable feedback.Attend this talk to gain deeper insight into the current and future releases of ColdFusion on the horizon!https://www.meetup.com/coldfusionmeetup/events/304633294/ CFCasts Content Updateshttps://www.cfcasts.comIntroducing SocketBoxThis innovative library simplifies WebSocket integration, making real-time features and message handling a breeze. Build engaging applications with SocketBox – get started today! #CFML #BoxLang #WebSockets #RealTime https://www.cfcasts.com/series/ortus-bytes/videos/introducing-socketbox Conferences and TrainingCF Summit India 2024Join us for the Adobe ColdFusion India Summit, a premier, completely free event where developers, industry experts, and thought leaders come together to explore the latest in ColdFusion and web development. Network with peers, meet the ColdFusion engineering team, get your questions answered, discover strategies to boost your career and grow your business—all at no cost.Price: FreeDecember 7, 20242 Cities: Bengaluru and NoidaRegister: https://cf-indiasummit-2024.attendease.com ITB 2025Location: Washington, DCDates: April 30, 2025 - May 2, 2025 - Washington, DCTickets and more info: https://t.co/cFLDUJZEyM50% off blind tickets$249.50 for the Conference$349.50 for the Conference + Workshop!!!Call for Speakers CLOSEDCFCamp 2025May 22, 23rd - 2025Atomis Hotel Munich Airporthttps://www.cfcamp.org/ Call for Speakers open - https://www.papercall.io/cfcamp2025More conferencesNeed more conferences, this site has a huge list of conferences for almost any language/community.https://confs.tech/Blogs, Posts, and Videos of the Week11/19/24 - Blog - Ortus Solutions - 5 Signs It's Time to Modernize Your ColdFusion / CFML ApplicationColdFusion has long been a reliable platform for building web applications, but like any technology, it requires maintenance and modernization over time. Whether you're using Lucee or Adobe ColdFusion, it's critical to recognize the signs that your application is no longer meeting today's standards in performance, security, and scalability. Let's explore five clear indicators th...

javaswag
#72 - Alexander Barmin - the evolution of Spring and the architecture of a neobank

javaswag

Play Episode Listen Later Nov 18, 2024 114:03


In episode 72 of the Javaswag podcast we talked with Alexander Barmin about Spring and the architecture of a neobank.
00:00 Intro
05:34 The importance of the domain area in development
17:28 IBM FileNet and Java EE
22:45 Problems and the evolution of Java EE
32:50 Spring and Spring Boot
48:10 Migrating between Spring versions
56:05 Flexibility and the complexities of Spring Boot
01:01:02 Adapting Spring to modern trends
01:04:50 The problems of depending on Spring
01:07:10 Competition and the evolution of Spring
01:14:49 Kotlin and Spring: a synergy of technologies
01:15:44 The evolution of TransferWise into a neobank
01:16:36 Wise's architecture: microservices and AWS
01:19:21 Kubernetes and the problems of distributed systems
01:24:55 Consistency and reconciliation mechanisms
01:29:08 Managing microservices and versions
01:33:20 Automating dependency updates
01:37:07 CI/CD and database migrations
01:41:17 Deployment
01:46:49 An unpopular opinion about programming languages
01:50:00 Criticism of Spring Boot and its magic
Guest: https://www.linkedin.com/in/abarmin/
Links:
Alexander's YouTube channel - https://www.youtube.com/@ABarmin
The Java & Spring Weekly channel on Telegram - https://t.me/java_weekly
Wise Tech Stack - https://medium.com/wise-engineering/wise-tech-stack-2022-edition-a6ac089a382f
Spring Cloud with Borisov - https://youtu.be/4tSyz_v9w7Q
Podcast links:
Website - https://javaswag.github.io/
Telegram - https://t.me/javaswag
Youtube - https://www.youtube.com/@javaswag
Linkedin - https://www.linkedin.com/in/volyihin/
X - https://x.com/javaswagpodcast

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Alessio will be at AWS re:Invent next week and hosting a casual coffee meetup on Wednesday, RSVP here! And subscribe to our calendar for our Singapore, NeurIPS, and all upcoming meetups!We are still taking questions for our next big recap episode! Submit questions and messages on Speakpipe here for a chance to appear on the show!If you've been following the AI agents space, you have heard of Lindy AI; while founder Flo Crivello is hesitant to call it "blowing up," when folks like Andrew Wilkinson start obsessing over your product, you're definitely onto something.In our latest episode, Flo walked us through Lindy's evolution from late 2022 to now, revealing some design choices about agent platform design that go against conventional wisdom in the space.The Great Reset: From Text Fields to RailsRemember late 2022? Everyone was "LLM-pilled," believing that if you just gave a language model enough context and tools, it could do anything. Lindy 1.0 followed this pattern:* Big prompt field ✅* Bunch of tools ✅* Prayer to the LLM gods ✅Fast forward to today, and Lindy 2.0 looks radically different. As Flo put it (~17:00 in the episode): "The more you can put your agent on rails, one, the more reliable it's going to be, obviously, but two, it's also going to be easier to use for the user."Instead of a giant, intimidating text field, users now build workflows visually:* Trigger (e.g., "Zendesk ticket received")* Required actions (e.g., "Check knowledge base")* Response generationThis isn't just a UI change - it's a fundamental rethinking of how to make AI agents reliable. As Swyx noted during our discussion: "Put Shoggoth in a box and make it a very small, minimal viable box. Everything else should be traditional if-this-then-that software."The Surprising Truth About Model LimitationsHere's something that might shock folks building in the space: with Claude 3.5 Sonnet, the model is no longer the bottleneck. Flo's exact words (~31:00): "It is actually shocking the extent to which the model is no longer the limit. It was the limit a year ago. It was too expensive. The context window was too small."Some context: Lindy started when context windows were 4K tokens. Today, their system prompt alone is larger than that. But what's really interesting is what this means for platform builders:* Raw capabilities aren't the constraint anymore* Integration quality matters more than model performance* User experience and workflow design are the new bottlenecksThe Search Engine Parallel: Why Horizontal Platforms Might WinOne of the spiciest takes from our conversation was Flo's thesis on horizontal vs. vertical agent platforms. He draws a fascinating parallel to search engines (~56:00):"I find it surprising the extent to which a horizontal search engine has won... You go through Google to search Reddit. You go through Google to search Wikipedia... search in each vertical has more in common with search than it does with each vertical."His argument: agent platforms might follow the same pattern because:* Agents across verticals share more commonalities than differences* There's value in having agents that can work together under one roof* The R&D cost of getting agents right is better amortized across use casesThis might explain why we're seeing early vertical AI companies starting to expand horizontally. 
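For readers skimming these notes, here is a minimal sketch (in Java, purely illustrative and not Lindy's actual API) of the constrained "workflow on rails" pattern described above: the trigger and the mandatory knowledge-base step are plain deterministic code, and only one narrowly scoped step calls the model. Ticket, KnowledgeBase and ModelStep are assumed names.

```java
// A minimal sketch of the "agent on rails" idea, not Lindy's actual API: the trigger and the
// required action are ordinary deterministic code; only one narrow step calls the model.
import java.util.function.Function;

record Ticket(String id, String body) {}

interface KnowledgeBase {
    String lookup(String query); // deterministic retrieval step, always runs
}

interface ModelStep extends Function<String, String> {} // the single constrained LLM call

final class OnRailsWorkflow {
    private final KnowledgeBase knowledgeBase;
    private final ModelStep drafter;

    OnRailsWorkflow(KnowledgeBase knowledgeBase, ModelStep drafter) {
        this.knowledgeBase = knowledgeBase;
        this.drafter = drafter;
    }

    // Trigger -> required action -> response generation, in a fixed order the agent cannot skip.
    String onTicketReceived(Ticket ticket) {
        String context = knowledgeBase.lookup(ticket.body());   // the KB is always consulted
        return drafter.apply("Ticket: " + ticket.body() + "\nContext: " + context);
    }
}
```

The point of the sketch is the shape, not the specifics: everything around the model call is ordinary if-this-then-that software, so the only place the system can "go wrong" is the one step you deliberately left open.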
The core agent capabilities - reliability, context management, tool integration - are universal needs.What This Means for BuildersIf you're building in the AI agents space, here are the key takeaways:* Constrain First: Rather than maximizing capabilities, focus on reliable execution within narrow bounds* Integration Quality Matters: With model capabilities plateauing, your competitive advantage lies in how well you integrate with existing tools* Memory Management is Key: Flo revealed they actively prune agent memories - even with larger context windows, not all memories are useful* Design for Discovery: Lindy's visual workflow builder shows how important interface design is for adoptionThe Meta LayerThere's a broader lesson here about AI product development. Just as Lindy evolved from "give the LLM everything" to "constrain intelligently," we might see similar evolution across the AI tooling space. The winners might not be those with the most powerful models, but those who best understand how to package AI capabilities in ways that solve real problems reliably.Full Video PodcastFlo's talk at AI Engineer SummitChapters* 00:00:00 Introductions * 00:04:05 AI engineering and deterministic software * 00:08:36 Lindys demo* 00:13:21 Memory management in AI agents * 00:18:48 Hierarchy and collaboration between Lindys * 00:21:19 Vertical vs. horizontal AI tools * 00:24:03 Community and user engagement strategies * 00:26:16 Rickrolling incident with Lindy * 00:28:12 Evals and quality control in AI systems * 00:31:52 Model capabilities and their impact on Lindy * 00:39:27 Competition and market positioning * 00:42:40 Relationship between Factorio and business strategy * 00:44:05 Remote work vs. in-person collaboration * 00:49:03 Europe vs US Tech* 00:58:59 Testing the Overton window and free speech * 01:04:20 Balancing AI safety concerns with business innovation Show Notes* Lindy.ai* Rick Rolling* Flo on X* TeamFlow* Andrew Wilkinson* Dust* Poolside.ai* SB1047* Gathertown* Sid Sijbrandij* Matt Mullenweg* Factorio* Seeing Like a StateTranscriptAlessio [00:00:00]: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.Swyx [00:00:12]: Hey, and today we're joined in the studio by Florent Crivello. Welcome.Flo [00:00:15]: Hey, yeah, thanks for having me.Swyx [00:00:17]: Also known as Altimore. I always wanted to ask, what is Altimore?Flo [00:00:21]: It was the name of my character when I was playing Dungeons & Dragons. Always. I was like 11 years old.Swyx [00:00:26]: What was your classes?Flo [00:00:27]: I was an elf. I was a magician elf.Swyx [00:00:30]: Well, you're still spinning magic. Right now, you're a solo founder and CEO of Lindy.ai. What is Lindy?Flo [00:00:36]: Yeah, we are a no-code platform letting you build your own AI agents easily. So you can think of we are to LangChain as Airtable is to MySQL. Like you can just pin up AI agents super easily by clicking around and no code required. You don't have to be an engineer and you can automate business workflows that you simply could not automate before in a few minutes.Swyx [00:00:55]: You've been in our orbit a few times. I think you spoke at our Latent Space anniversary. You spoke at my summit, the first summit, which was a really good keynote. And most recently, like we actually already scheduled this podcast before this happened. But Andrew Wilkinson was like, I'm obsessed by Lindy. He's just created a whole bunch of agents. 
So basically, why are you blowing up?Flo [00:01:16]: Well, thank you. I think we are having a little bit of a moment. I think it's a bit premature to say we're blowing up. But why are things going well? We revamped the product majorly. We called it Lindy 2.0. I would say we started working on that six months ago. We've actually not really announced it yet. It's just, I guess, I guess that's what we're doing now. And so we've basically been cooking for the last six months, like really rebuilding the product from scratch. I think I'll list you, actually, the last time you tried the product, it was still Lindy 1.0. Oh, yeah. If you log in now, the platform looks very different. There's like a ton more features. And I think one realization that we made, and I think a lot of folks in the agent space made the same realization, is that there is such a thing as too much of a good thing. I think many people, when they started working on agents, they were very LLM peeled and chat GPT peeled, right? They got ahead of themselves in a way, and us included, and they thought that agents were actually, and LLMs were actually more advanced than they actually were. And so the first version of Lindy was like just a giant prompt and a bunch of tools. And then the realization we had was like, hey, actually, the more you can put your agent on Rails, one, the more reliable it's going to be, obviously, but two, it's also going to be easier to use for the user, because you can really, as a user, you get, instead of just getting this big, giant, intimidating text field, and you type words in there, and you have no idea if you're typing the right word or not, here you can really click and select step by step, and tell your agent what to do, and really give as narrow or as wide a guardrail as you want for your agent. We started working on that. We called it Lindy on Rails about six months ago, and we started putting it into the hands of users over the last, I would say, two months or so, and I think things really started going pretty well at that point. The agent is way more reliable, way easier to set up, and we're already seeing a ton of new use cases pop up.Swyx [00:03:00]: Yeah, just a quick follow-up on that. You launched the first Lindy in November last year, and you were already talking about having a DSL, right? I remember having this discussion with you, and you were like, it's just much more reliable. Is this still the DSL under the hood? Is this a UI-level change, or is it a bigger rewrite?Flo [00:03:17]: No, it is a much bigger rewrite. I'll give you a concrete example. Suppose you want to have an agent that observes your Zendesk tickets, and it's like, hey, every time you receive a Zendesk ticket, I want you to check my knowledge base, so it's like a RAG module and whatnot, and then answer the ticket. The way it used to work with Lindy before was, you would type the prompt asking it to do that. You check my knowledge base, and so on and so forth. The problem with doing that is that it can always go wrong. You're praying the LLM gods that they will actually invoke your knowledge base, but I don't want to ask it. I want it to always, 100% of the time, consult the knowledge base after it receives a Zendesk ticket. And so with Lindy, you can actually have the trigger, which is Zendesk ticket received, have the knowledge base consult, which is always there, and then have the agent. 
So you can really set up your agent any way you want like that.Swyx [00:04:05]: This is something I think about for AI engineering as well, which is the big labs want you to hand over everything in the prompts, and only code of English, and then the smaller brains, the GPU pours, always want to write more code to make things more deterministic and reliable and controllable. One way I put it is put Shoggoth in a box and make it a very small, the minimal viable box. Everything else should be traditional, if this, then that software.Flo [00:04:29]: I love that characterization, put the Shoggoth in the box. Yeah, we talk about using as much AI as necessary and as little as possible.Alessio [00:04:37]: And what was the choosing between kind of like this drag and drop, low code, whatever, super code-driven, maybe like the Lang chains, auto-GPT of the world, and maybe the flip side of it, which you don't really do, it's like just text to agent, it's like build the workflow for me. Like what have you learned actually putting this in front of users and figuring out how much do they actually want to add it versus like how much, you know, kind of like Ruby on Rails instead of Lindy on Rails, it's kind of like, you know, defaults over configuration.Flo [00:05:06]: I actually used to dislike when people said, oh, text is not a great interface. I was like, ah, this is such a mid-take, I think text is awesome. And I've actually come around, I actually sort of agree now that text is really not great. I think for people like you and me, because we sort of have a mental model, okay, when I type a prompt into this text box, this is what it's going to do, it's going to map it to this kind of data structure under the hood and so forth. I guess it's a little bit blackmailing towards humans. You jump on these calls with humans and you're like, here's a text box, this is going to set up an agent for you, do it. And then they type words like, I want you to help me put order in my inbox. Oh, actually, this is a good one. This is actually a good one. What's a bad one? I would say 60 or 70% of the prompts that people type don't mean anything. Me as a human, as AGI, I don't understand what they mean. I don't know what they mean. It is actually, I think whenever you can have a GUI, it is better than to have just a pure text interface.Alessio [00:05:58]: And then how do you decide how much to expose? So even with the tools, you have Slack, you have Google Calendar, you have Gmail. Should people by default just turn over access to everything and then you help them figure out what to use? I think that's the question. When I tried to set up Slack, it was like, hey, give me access to all channels and everything, which for the average person probably makes sense because you don't want to re-prompt them every time you add new channels. But at the same time, for maybe the more sophisticated enterprise use cases, people are like, hey, I want to really limit what you have access to. How do you kind of thread that balance?Flo [00:06:35]: The general philosophy is we ask for the least amount of permissions needed at any given moment. I don't think Slack, I could be mistaken, but I don't think Slack lets you request permissions for just one channel. But for example, for Google, obviously there are hundreds of scopes that you could require for Google. There's a lot of scopes. And sometimes it's actually painful to set up your Lindy because you're going to have to ask Google and add scopes five or six times. 
We've had sessions like this. But that's what we do because, for example, the Lindy email drafter, she's going to ask you for your authorization once for, I need to be able to read your email so I can draft a reply, and then another time for I need to be able to write a draft for them. We just try to do it very incrementally like that.Alessio [00:07:15]: Do you think OAuth is just overall going to change? I think maybe before it was like, hey, we need to set up OAuth that humans only want to kind of do once. So we try to jam-pack things all at once versus what if you could on-demand get different permissions every time from different parts? Do you ever think about designing things knowing that maybe AI will use it instead of humans will use it? Yeah, for sure.Flo [00:07:37]: One pattern we've started to see is people provisioning accounts for their AI agents. And so, in particular, Google Workspace accounts. So, for example, Lindy can be used as a scheduling assistant. So you can just CC her to your emails when you're trying to find time with someone. And just like a human assistant, she's going to go back and forth and offer other abilities and so forth. Very often, people don't want the other party to know that it's an AI. So it's actually funny. They introduce delays. They ask the agent to wait before replying, so it's not too obvious that it's an AI. And they provision an account on Google Suite, which costs them like $10 a month or something like that. So we're seeing that pattern more and more. I think that does the job for now. I'm not optimistic on us actually patching OAuth. Because I agree with you, ultimately, we would want to patch OAuth because the new account thing is kind of a clutch. It's really a hack. You would want to patch OAuth to have more granular access control and really be able to put your sugar in the box. I'm not optimistic on us doing that before AGI, I think. That's a very close timeline.Swyx [00:08:36]: I'm mindful of talking about a thing without showing it. And we already have the setup to show it. Why don't we jump into a screen share? For listeners, you can jump on the YouTube and like and subscribe. But also, let's have a look at how you show off Lindy. Yeah, absolutely.Flo [00:08:51]: I'll give an example of a very simple Lindy and then I'll graduate to a much more complicated one. A super simple Lindy that I have is, I unfortunately bought some investment properties in the south of France. It was a really, really bad idea. And I put them on a Holydew, which is like the French Airbnb, if you will. And so I received these emails from time to time telling me like, oh, hey, you made 200 bucks. Someone booked your place. When I receive these emails, I want to log this reservation in a spreadsheet. Doing this without an AI agent or without AI in general is a pain in the butt because you must write an HTML parser for this email. And so it's just hard. You may not be able to do it and it's going to break the moment the email changes. By contrast, the way it works with Lindy, it's really simple. It's two steps. It's like, okay, I receive an email. If it is a reservation confirmation, I have this filter here. Then I append a row to this spreadsheet. And so this is where you can see the AI part where the way this action is configured here, you see these purple fields on the right. Each of these fields is a prompt. And so I can say, okay, you extract from the email the day the reservation begins on. You extract the amount of the reservation. 
You extract the number of travelers of the reservation. And now you can see when I look at the task history of this Lindy, it's really simple. It's like, okay, you do this and boom, appending this row to this spreadsheet. And this is the information extracted. So effectively, this node here, this append row node is a mini agent. It can see everything that just happened. It has context over the task and it's appending the row. And then it's going to send a reply to the thread. That's a very simple example of an agent.Swyx [00:10:34]: A quick follow-up question on this one while we're still on this page. Is that one call? Is that a structured output call? Yeah. Okay, nice. Yeah.Flo [00:10:41]: And you can see here for every node, you can configure which model you want to power the node. Here I use cloud. For this, I use GPT-4 Turbo. Much more complex example, my meeting recorder. It looks very complex because I've added to it over time, but at a high level, it's really simple. It's like when a meeting begins, you record the meeting. And after the meeting, you send me a summary and you send me coaching notes. So I receive, like my Lindy is constantly coaching me. And so you can see here in the prompt of the coaching notes, I've told it, hey, you know, was I unnecessarily confrontational at any point? I'm French, so I have to watch out for that. Or not confrontational enough. Should I have double-clicked on any issue, right? So I can really give it exactly the kind of coaching that I'm expecting. And then the interesting thing here is, like, you can see the agent here, after it sent me these coaching notes, moves on. And it does a bunch of other stuff. So it goes on Slack. It disseminates the notes on Slack. It does a bunch of other stuff. But it's actually able to backtrack and resume the automation at the coaching notes email if I responded to that email. So I'll give a super concrete example. This is an actual coaching feedback that I received from Lindy. She was like, hey, this was a sales call I had with a customer. And she was like, I found your explanation of Lindy too technical. And I was able to follow up and just ask a follow-up question in the thread here. And I was like, why did you find too technical about my explanation? And Lindy restored the context. And so she basically picked up the automation back up here in the tree. And she has all of the context of everything that happened, including the meeting in which I was. So she was like, oh, you used the words deterministic and context window and agent state. And that concept exists at every level for every channel and every action that Lindy takes. So another example here is, I mentioned she also disseminates the notes on Slack. So this was a meeting where I was not, right? So this was a teammate. He's an indie meeting recorder, posts the meeting notes in this customer discovery channel on Slack. So you can see, okay, this is the onboarding call we had. This was the use case. Look at the questions. How do I make Lindy slower? How do I add delays to make Lindy slower? And I was able, in the Slack thread, to ask follow-up questions like, oh, what did we answer to these questions? And it's really handy because I know I can have this sort of interactive Q&A with these meetings. It means that very often now, I don't go to meetings anymore. I just send my Lindy. And instead of going to like a 60-minute meeting, I have like a five-minute chat with my Lindy afterwards. And she just replied. 
She was like, well, this is what we replied to this customer. And I can just be like, okay, good job, Jack. Like, no notes about your answers. So that's the kind of use cases people have with Lindy. It's a lot of like, there's a lot of sales automations, customer support automations, and a lot of this, which is basically personal assistance automations, like meeting scheduling and so forth.Alessio [00:13:21]: Yeah, and I think the question that people might have is memory. So as you get coaching, how does it track whether or not you're improving? You know, if these are like mistakes you made in the past, like, how do you think about that?Flo [00:13:31]: Yeah, we have a memory module. So I'll show you my meeting scheduler, Lindy, which has a lot of memories because by now I've used her for so long. And so every time I talk to her, she saves a memory. If I tell her, you screwed up, please don't do this. So you can see here, oh, it's got a double memory here. This is the meeting link I have, or this is the address of the office. If I tell someone to meet me at home, this is the address of my place. This is the code. I guess we'll have to edit that out. This is not the code of my place. No dogs. Yeah, so Lindy can just manage her own memory and decide when she's remembering things between executions. Okay.Swyx [00:14:11]: I mean, I'm just going to take the opportunity to ask you, since you are the creator of this thing, how come there's so few memories, right? Like, if you've been using this for two years, there should be thousands of thousands of things. That is a good question.Flo [00:14:22]: Agents still get confused if they have too many memories, to my point earlier about that. So I just am out of a call with a member of the Lama team at Meta, and we were chatting about Lindy, and we were going into the system prompt that we sent to Lindy, and all of that stuff. And he was amazed, and he was like, it's a miracle that it's working, guys. He was like, this kind of system prompt, this does not exist, either pre-training or post-training. These models were never trained to do this kind of stuff. It's a miracle that they can be agents at all. And so what I do, I actually prune the memories. You know, it's actually something I've gotten into the habit of doing from back when we had GPT 3.5, being Lindy agents. I suspect it's probably not as necessary in the Cloud 3.5 Sunette days, but I prune the memories. Yeah, okay.Swyx [00:15:05]: The reason is because I have another assistant that also is recording and trying to come up with facts about me. It comes up with a lot of trivial, useless facts that I... So I spend most of my time pruning. Actually, it's not super useful. I'd much rather have high-quality facts that it accepts. Or maybe I was even thinking, were you ever tempted to add a wake word to only memorize this when I say memorize this? And otherwise, don't even bother.Flo [00:15:30]: I have a Lindy that does this. So this is my inbox processor, Lindy. It's kind of beefy because there's a lot of different emails. But somewhere in here,Swyx [00:15:38]: there is a rule where I'm like,Flo [00:15:39]: aha, I can email my inbox processor, Lindy. It's really handy. So she has her own email address. And so when I process my email inbox, I sometimes forward an email to her. And it's a newsletter, or it's like a cold outreach from a recruiter that I don't care about, or anything like that. And I can give her a rule. And I can be like, hey, this email I want you to archive, moving forward. 
Or I want you to alert me on Slack when I have this kind of email. It's really important. And so you can see here, the prompt is, if I give you a rule about a kind of email, like archive emails from X, save it as a new memory. And I give it to the memory saving skill. And yeah.Swyx [00:16:13]: One thing that just occurred to me, so I'm a big fan of virtual mailboxes. I recommend that everybody have a virtual mailbox. You could set up a physical mail receive thing for Lindy. And so then Lindy can process your physical mail.Flo [00:16:26]: That's actually a good idea. I actually already have something like that. I use like health class mail. Yeah. So yeah, most likely, I can process my physical mail. Yeah.Swyx [00:16:35]: And then the other product's idea I have, looking at this thing, is people want to brag about the complexity of their Lindys. So this would be like a 65 point Lindy, right?Flo [00:16:43]: What's a 65 point?Swyx [00:16:44]: Complexity counting. Like how many nodes, how many things, how many conditions, right? Yeah.Flo [00:16:49]: This is not the most complex one. I have another one. This designer recruiter here is kind of beefy as well. Right, right, right. So I'm just saying,Swyx [00:16:56]: let people brag. Let people be super users. Oh, right.Flo [00:16:59]: Give them a score. Give them a score.Swyx [00:17:01]: Then they'll just be like, okay, how high can you make this score?Flo [00:17:04]: Yeah, that's a good point. And I think that's, again, the beauty of this on-rails phenomenon. It's like, think of the equivalent, the prompt equivalent of this Lindy here, for example, that we're looking at. It'd be monstrous. And the odds that it gets it right are so low. But here, because we're really holding the agent's hand step by step by step, it's actually super reliable. Yeah.Swyx [00:17:22]: And is it all structured output-based? Yeah. As far as possible? Basically. Like, there's no non-structured output?Flo [00:17:27]: There is. So, for example, here, this AI agent step, right, or this send message step, sometimes it gets to... That's just plain text.Swyx [00:17:35]: That's right.Flo [00:17:36]: Yeah. So I'll give you an example. Maybe it's TMI. I'm having blood pressure issues these days. And so this Lindy here, I give it my blood pressure readings, and it updates a log that I have of my blood pressure that it sends to my doctor.Swyx [00:17:49]: Oh, so every Lindy comes with a to-do list?Flo [00:17:52]: Yeah. Every Lindy has its own task history. Huh. Yeah. And so you can see here, this is my main Lindy, my personal assistant, and I've told it, where is this? There is a point where I'm like, if I am giving you a health-related fact, right here, I'm giving you health information, so then you update this log that I have in this Google Doc, and then you send me a message. And you can see, I've actually not configured this send message node. I haven't told it what to send me a message for. Right? And you can see, it's actually lecturing me. It's like, I'm giving it my blood pressure ratings. It's like, hey, it's a bit high. Here are some lifestyle changes you may want to consider.Alessio [00:18:27]: I think maybe this is the most confusing or new thing for people. So even I use Lindy and I didn't even know you could have multiple workflows in one Lindy. I think the mental model is kind of like the Zapier workflows. It starts and it ends. It doesn't choose between. How do you think about what's a Lindy versus what's a sub-function of a Lindy? 
Like, what's the hierarchy?Flo [00:18:48]: Yeah. Frankly, I think the line is a little arbitrary. It's kind of like when you code, like when do you start to create a new class versus when do you overload your current class. I think of it in terms of like jobs to be done and I think of it in terms of who is the Lindy serving. This Lindy is serving me personally. It's really my day-to-day Lindy. I give it a bunch of stuff, like very easy tasks. And so this is just the Lindy I go to. Sometimes when a task is really more specialized, so for example, I have this like summarizer Lindy or this designer recruiter Lindy. These tasks are really beefy. I wouldn't want to add this to my main Lindy, so I just created a separate Lindy for it. Or when it's a Lindy that serves another constituency, like our customer support Lindy, I don't want to add that to my personal assistant Lindy. These are two very different Lindys.Alessio [00:19:31]: And you can call a Lindy from within another Lindy. That's right. You can kind of chain them together.Flo [00:19:36]: Lindys can work together, absolutely.Swyx [00:19:38]: A couple more things for the video portion. I noticed you have a podcast follower. We have to ask about that. What is that?Flo [00:19:46]: So this one wakes me up every... So wakes herself up every week. And she sends me... So she woke up yesterday, actually. And she searches for Lenny's podcast. And she looks for like the latest episode on YouTube. And once she finds it, she transcribes the video and then she sends me the summary by email. I don't listen to podcasts as much anymore. I just like read these summaries. Yeah.Alessio [00:20:09]: We should make a latent space Lindy. Marketplace.Swyx [00:20:12]: Yeah. And then you have a whole bunch of connectors. I saw the list briefly. Any interesting one? Complicated one that you're proud of? Anything that you want to just share? Connector stories.Flo [00:20:23]: So many of our workflows are about meeting scheduling. So we had to build some very open unity tools around meeting scheduling. So for example, one that is surprisingly hard is this find available times action. You would not believe... This is like a thousand lines of code or something. It's just a very beefy action. And you can pass it a bunch of parameters about how long is the meeting? When does it start? When does it end? What are the meetings? The weekdays in which I meet? How many time slots do you return? What's the buffer between my meetings? It's just a very, very, very complex action. I really like our GitHub action. So we have a Lindy PR reviewer. And it's really handy because anytime any bug happens... So the Lindy reads our guidelines on Google Docs. By now, the guidelines are like 40 pages long or something. And so every time any new kind of bug happens, we just go to the guideline and we add the lines. Like, hey, this has happened before. Please watch out for this category of bugs. And it's saving us so much time every day.Alessio [00:21:19]: There's companies doing PR reviews. Where does a Lindy start? When does a company start? Or maybe how do you think about the complexity of these tasks when it's going to be worth having kind of like a vertical standalone company versus just like, hey, a Lindy is going to do a good job 99% of the time?Flo [00:21:34]: That's a good question. We think about this one all the time. I can't say that we've really come up with a very crisp articulation of when do you want to use a vertical tool versus when do you want to use a horizontal tool. 
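As a rough illustration of how many knobs a "find available times" action like the one described above can carry, here is a hypothetical parameter object; the field names are assumptions for illustration, not Lindy's real schema.

```java
// Hypothetical parameter object for a "find available times" scheduling action; the field
// names are illustrative assumptions, not Lindy's real schema.
import java.time.DayOfWeek;
import java.time.Duration;
import java.time.LocalDate;
import java.util.Set;

record FindAvailableTimesRequest(
        Duration meetingLength,          // how long the meeting should be
        LocalDate windowStart,           // earliest day to consider
        LocalDate windowEnd,             // latest day to consider
        Set<DayOfWeek> allowedWeekdays,  // weekdays on which the person takes meetings
        int maxSlotsToReturn,            // how many candidate slots to propose
        Duration bufferBetweenMeetings   // minimum gap before and after existing events
) {}
```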
I think of it as very similar to the internet. I find it surprising the extent to which a horizontal search engine has won. But I think that Google, right? But I think the even more surprising fact is that the horizontal search engine has won in almost every vertical, right? You go through Google to search Reddit. You go through Google to search Wikipedia. I think maybe the biggest exception is e-commerce. Like you go to Amazon to search e-commerce, but otherwise you go through Google. And I think that the reason for that is because search in each vertical has more in common with search than it does with each vertical. And search is so expensive to get right. Like Google is a big company that it makes a lot of sense to aggregate all of these different use cases and to spread your R&D budget across all of these different use cases. I have a thesis, which is, it's a really cool thesis for Lindy, is that the same thing is true for agents. I think that by and large, in a lot of verticals, agents in each vertical have more in common with agents than they do with each vertical. I also think there are benefits in having a single agent platform because that way your agents can work together. They're all like under one roof. That way you only learn one platform and so you can create agents for everything that you want. And you don't have to like pay for like a bunch of different platforms and so forth. So I think ultimately, it is actually going to shake out in a way that is similar to search in that search is everywhere on the internet. Every website has a search box, right? So there's going to be a lot of vertical agents for everything. I think AI is going to completely penetrate every category of software. But then I also think there are going to be a few very, very, very big horizontal agents that serve a lot of functions for people.Swyx [00:23:14]: That is actually one of the questions that we had about the agent stuff. So I guess we can transition away from the screen and I'll just ask the follow-up, which is, that is a hot topic. You're basically saying that the current VC obsession of the day, which is vertical AI enabled SaaS, is mostly not going to work out. And then there are going to be some super giant horizontal SaaS.Flo [00:23:34]: Oh, no, I'm not saying it's either or. Like SaaS today, vertical SaaS is huge and there's also a lot of horizontal platforms. If you look at like Airtable or Notion, basically the entire no-code space is very horizontal. I mean, Loom and Zoom and Slack, there's a lot of very horizontal tools out there. Okay.Swyx [00:23:49]: I was just trying to get a reaction out of you for hot takes. Trying to get a hot take.Flo [00:23:54]: No, I also think it is natural for the vertical solutions to emerge first because it's just easier to build. It's just much, much, much harder to build something horizontal. Cool.Swyx [00:24:03]: Some more Lindy-specific questions. So we covered most of the top use cases and you have an academy. That was nice to see. I also see some other people doing it for you for free. So like Ben Spites is doing it and then there's some other guy who's also doing like lessons. Yeah. Which is kind of nice, right? Yeah, absolutely. You don't have to do any of that.Flo [00:24:20]: Oh, we've been seeing it more and more on like LinkedIn and Twitter, like people posting their Lindys and so forth.Swyx [00:24:24]: I think that's the flywheel that you built the platform where creators see value in allying themselves to you. 
And so then, you know, your incentive is to make them successful so that they can make other people successful and then it just drives more and more engagement. Like it's earned media. Like you don't have to do anything.Flo [00:24:39]: Yeah, yeah. I mean, community is everything.Swyx [00:24:41]: Are you doing anything special there? Any big wins?Flo [00:24:44]: We have a Slack community that's pretty active. I can't say we've invested much more than that so far.Swyx [00:24:49]: I would say from having, so I have some involvement in the no-code community. I would say that Webflow going very hard after no-code as a category got them a lot more allies than just the people using Webflow. So it helps you to grow the community beyond just Lindy. And I don't know what this is called. Maybe it's just no-code again. Maybe you want to call it something different. But there's definitely an appetite for this and you are one of a broad category, right? Like just before you, we had Dust and, you know, they're also kind of going after a similar market. Zapier obviously is not going to try to also compete with you. Yeah. There's no question there. It's just like a reaction about community. Like I think a lot about community. Lanespace is growing the community of AI engineers. And I think you have a slightly different audience of, I don't know what.Flo [00:25:33]: Yeah. I think the no-code tinkerers is the community. Yeah. It is going to be the same sort of community as what Webflow, Zapier, Airtable, Notion to some extent.Swyx [00:25:43]: Yeah. The framing can be different if you were, so I think tinkerers has this connotation of not serious or like small. And if you framed it to like no-code EA, we're exclusively only for CEOs with a certain budget, then you just have, you tap into a different budget.Flo [00:25:58]: That's true. The problem with EA is like, the CEO has no willingness to actually tinker and play with the platform.Swyx [00:26:05]: Maybe Andrew's doing that. Like a lot of your biggest advocates are CEOs, right?Flo [00:26:09]: A solopreneur, you know, small business owners, I think Andrew is an exception. Yeah. Yeah, yeah, he is.Swyx [00:26:14]: He's an exception in many ways. Yep.Alessio [00:26:16]: Just before we wrap on the use cases, is Rick rolling your customers? Like a officially supported use case or maybe tell that story?Flo [00:26:24]: It's one of the main jobs to be done, really. Yeah, we woke up recently, so we have a Lindy obviously doing our customer support and we do check after the Lindy. And so we caught this email exchange where someone was asking Lindy for video tutorials. And at the time, actually, we did not have video tutorials. We do now on the Lindy Academy. And Lindy responded to the email. It's like, oh, absolutely, here's a link. And we were like, what? Like, what kind of link did you send? And so we clicked on the link and it was a recall. We actually reacted fast enough that the customer had not yet opened the email. And so we reacted immediately. Like, oh, hey, actually, sorry, this is the right link. And so the customer never reacted to the first link. And so, yeah, I tweeted about that. It went surprisingly viral. And I checked afterwards in the logs. We did like a database query and we found, I think, like three or four other instances of it having happened before.Swyx [00:27:12]: That's surprisingly low.Flo [00:27:13]: It is low. 
And we fixed it across the board by just adding a line to the system prompt that's like, hey, don't recall people, please don't recall.Swyx [00:27:21]: Yeah, yeah, yeah. I mean, so, you know, you can explain it retroactively, right? Like, that YouTube slug has been pasted in so many different corpuses that obviously it learned to hallucinate that.Alessio [00:27:31]: And it pretended to be so many things. That's the thing.Swyx [00:27:34]: I wouldn't be surprised if that takes one token. Like, there's this one slug in the tokenizer and it's just one token.Flo [00:27:41]: That's the idea of a YouTube video.Swyx [00:27:43]: Because it's used so much, right? And you have to basically get it exactly correct. It's probably not. That's a long speech.Flo [00:27:52]: It would have been so good.Alessio [00:27:55]: So this is just a jump maybe into evals from here. How could you possibly come up for an eval that says, make sure my AI does not recall my customer? I feel like when people are writing evals, that's not something that they come up with. So how do you think about evals when it's such like an open-ended problem space?Flo [00:28:12]: Yeah, it is tough. We built quite a bit of infrastructure for us to create evals in one click from any conversation history. So we can point to a conversation and we can be like, in one click we can turn it into effectively a unit test. It's like, this is a good conversation. This is how you're supposed to handle things like this. Or if it's a negative example, then we modify a little bit the conversation after generating the eval. So it's very easy for us to spin up this kind of eval.Alessio [00:28:36]: Do you use an off-the-shelf tool which is like Brain Trust on the podcast? Or did you just build your own?Flo [00:28:41]: We unfortunately built our own. We're most likely going to switch to Brain Trust. Well, when we built it, there was nothing. Like there was no eval tool, frankly. I mean, we started this project at the end of 2022. It was like, it was very, very, very early. I wouldn't recommend it to build your own eval tool. There's better solutions out there and our eval tool breaks all the time and it's a nightmare to maintain. And that's not something we want to be spending our time on.Swyx [00:29:04]: I was going to ask that basically because I think my first conversations with you about Lindy was that you had a strong opinion that everyone should build their own tools. And you were very proud of your evals. You're kind of showing off to me like how many evals you were running, right?Flo [00:29:16]: Yeah, I think that was before all of these tools came around. I think the ecosystem has matured a fair bit.Swyx [00:29:21]: What is one thing that Brain Trust has nailed that you always struggled to do?Flo [00:29:25]: We're not using them yet, so I couldn't tell. But from what I've gathered from the conversations I've had, like they're doing what we do with our eval tool, but better.Swyx [00:29:33]: And like they do it, but also like 60 other companies do it, right? So I don't know how to shop apart from brand. Word of mouth.Flo [00:29:41]: Same here.Swyx [00:29:42]: Yeah, like evals or Lindys, there's two kinds of evals, right? Like in some way, you don't have to eval your system as much because you've constrained the language model so much. And you can rely on open AI to guarantee that the structured outputs are going to be good, right? 
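To make the "one click from conversation history to eval" idea mentioned above concrete, here is a hedged sketch; ConversationTurn, EvalCase and fromConversation are invented names, not Lindy's internals.

```java
// Hypothetical sketch of turning a saved conversation into a regression test ("eval").
// ConversationTurn, EvalCase and fromConversation are invented names, not Lindy's internals.
import java.util.List;

record ConversationTurn(String role, String content) {}

record EvalCase(List<ConversationTurn> history, String expectedBehaviour, boolean positiveExample) {

    // A positive example says "handle it like this"; for a negative example the conversation
    // is edited first, so the stored expectation describes what should have happened instead.
    static EvalCase fromConversation(List<ConversationTurn> history,
                                     String expectedBehaviour,
                                     boolean positiveExample) {
        return new EvalCase(List.copyOf(history), expectedBehaviour, positiveExample);
    }
}
```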
We had Michelle sit where you sit and she explained exactly how they do constraint grammar sampling and all that good stuff. So actually, I think it's more important for your customers to eval their Lindys than you evaling your Lindy platform because you just built the platform. You don't actually need to eval that much.Flo [00:30:14]: Yeah. In an ideal world, our customers don't need to care about this. And I think the bar is not like, look, it needs to be at 100%. I think the bar is it needs to be better than a human. And for most use cases we serve today, it is better than a human, especially if you put it on Rails.Swyx [00:30:30]: Is there a limiting factor of Lindy at the business? Like, is it adding new connectors? Is it adding new node types? Like how do you prioritize what is the most impactful to your company?Flo [00:30:41]: Yeah. The raw capabilities for sure are a big limit. It is actually shocking the extent to which the model is no longer the limit. It was the limit a year ago. It was too expensive. The context window was too small. It's kind of insane that we started building this when the context windows were like 4,000 tokens. Like today, our system prompt is more than 4,000 tokens. So yeah, the model is actually very much not a limit anymore. It almost gives me pause because I'm like, I want the model to be a limit. And so no, the integrations are ones, the core capabilities are ones. So for example, we are investing in a system that's basically, I call it like the, it's a J hack. Give me these names, like the poor man's RLHF. So you can turn on a toggle on any step of your Lindy workflow to be like, ask me for confirmation before you actually execute this step. So it's like, hey, I receive an email, you send a reply, ask me for confirmation before actually sending it. And so today you see the email that's about to get sent and you can either approve, deny, or change it and then approve. And we are making it so that when you make a change, we are then saving this change that you're making or embedding it in the vector database. And then we are retrieving these examples for future tasks and injecting them into the context window. So that's the kind of capability that makes a huge difference for users. That's the bottleneck today. It's really like good old engineering and product work.Swyx [00:31:52]: I assume you're hiring. We'll do a call for hiring at the end.Alessio [00:31:54]: Any other comments on the model side? When did you start feeling like the model was not a bottleneck anymore? Was it 4.0? Was it 3.5? 3.5.Flo [00:32:04]: 3.5 Sonnet, definitely. I think 4.0 is overhyped, frankly. We don't use 4.0. I don't think it's good for agentic behavior. Yeah, 3.5 Sonnet is when I started feeling that. And then with prompt caching with 3.5 Sonnet, like that fills the cost, cut the cost again. Just cut it in half. Yeah.Swyx [00:32:21]: Your prompts are... Some of the problems with agentic uses is that your prompts are kind of dynamic, right? Like from caching to work, you need the front prefix portion to be stable.Flo [00:32:32]: Yes, but we have this append-only ledger paradigm. So every node keeps appending to that ledger and every filled node inherits all the context built up by all the previous nodes. And so we can just decide, like, hey, every X thousand nodes, we trigger prompt caching again.Swyx [00:32:47]: Oh, so you do it like programmatically, not all the time.Flo [00:32:50]: No, sorry. Anthropic manages that for us. 
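A minimal sketch of the append-only ledger idea described above, under the assumption that the model provider caches any stable prompt prefix; the checkpoint interval and markCacheCheckpoint() are illustrative, not Lindy's or Anthropic's actual mechanism.

```java
// Minimal sketch of an append-only ledger: every node appends, nothing is rewritten, so the
// prompt prefix stays stable and provider-side prompt caching can keep reusing it.
// CACHE_CHECKPOINT_EVERY and markCacheCheckpoint() are illustrative assumptions.
import java.util.ArrayList;
import java.util.List;

final class AppendOnlyLedger {
    private static final int CACHE_CHECKPOINT_EVERY = 1_000; // re-mark a breakpoint every N entries
    private final List<String> entries = new ArrayList<>();

    void append(String nodeOutput) {
        entries.add(nodeOutput);                 // later nodes inherit all earlier context
        if (entries.size() % CACHE_CHECKPOINT_EVERY == 0) {
            markCacheCheckpoint();
        }
    }

    String asPrompt() {
        return String.join("\n", entries);       // stable prefix plus a new suffix on every call
    }

    private void markCacheCheckpoint() {
        // Placeholder: record where a cacheable prefix boundary would go; in practice the
        // model provider decides how prefixes are actually cached.
    }
}
```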
But basically, it's like, because we keep appending to the prompt, the prompt caching works pretty well.
Alessio [00:32:57]: We have this small podcaster tool that I built for the podcast and I rewrote all of our prompts because I noticed, you know, I was inputting stuff early on. I wonder how much more money OpenAI and Anthropic are making just because people don't rewrite their prompts to be like static at the top and like dynamic at the bottom.
Flo [00:33:13]: I think that's the remarkable thing about what we're having right now. It's insane that these companies are routinely cutting their costs by two, four, five. Like, they basically just apply constraints. They want people to take advantage of these innovations. Very good.
Swyx [00:33:25]: Do you have any other competitive commentary? Commentary? Dust, WordWare, Gumloop, Zapier? If not, we can move on.
Flo [00:33:31]: No comment.
Alessio [00:33:32]: I think the market is,
Flo [00:33:33]: look, I mean, AGI is coming. All right, that's what I'm talking about.
Swyx [00:33:38]: I think you're helping. Like, you're paving the road to AGI.
Flo [00:33:41]: I'm playing my small role. I'm adding my small brick to this giant, giant, giant castle. Yeah, look, when it's here, we are going to, this entire category of software is going to create, it's going to sound like an exaggeration, but it is a fact it is going to create trillions of dollars of value in a few years, right? It's going to, for the first time, we're actually having software directly replace human labor. I see it every day in sales calls. It's like, Lindy is today replacing, like, we talk to even small teams. It's like, oh, like, stop, this is a 12-people team here. I guess we'll set up this Lindy for one or two days, and then we'll have to decide what to do with this 12-people team. And so, yeah. To me, there's this immense uncapped market opportunity. It's just such a huge ocean, and there's like three sharks in the ocean. I'm focused on the ocean more than on the sharks.
Swyx [00:34:25]: So we're moving on to hot topics, like, kind of broadening out from Lindy, but obviously informed by Lindy. What are the high-order bits of good agent design?
Flo [00:34:31]: The model, the model, the model, the model. I think people fail to truly, and me included, they fail to truly internalize the bitter lesson. So for the listeners out there who don't know about it, it's basically like, you just scale the model. Like, GPUs go brr, it's all that matters. I think it also holds for the cognitive architecture. I used to be very cognitive architecture-pilled, and I was like, ah, and I was like a critic, and I was like a generator, and all this, and then it's just like, GPUs go brr, like, just like let the model do its job. I think we're seeing it a little bit right now with O1. I'm seeing some tweets that say that the new 3.5 Sonnet is as good as O1, but with none of all the crazy...
Swyx [00:35:09]: It beats O1 on some measures. On some reasoning tasks. On AIME, it's still a lot lower. Like, it's like 14 on AIME versus O1, it's like 83.
Flo [00:35:17]: Got it. Right. But even O1 is still the model. Yeah.
Swyx [00:35:22]: Like, there's no cognitive architecture on top of it.
Flo [00:35:23]: You can just wait for O1 to get better.
Alessio [00:35:25]: And so, as a founder, how do you think about that, right? Because now, knowing this, wouldn't you just wait to start Lindy? You know, you start Lindy, it's like 4K context, the models are not that good.
It's like, but you're still kind of like going along and building and just like waiting for the models to get better. How do you today decide, again, what to build next, knowing that, hey, the models are going to get better, so maybe we just shouldn't focus on improving our prompt design and all that stuff and just build the connectors instead or whatever? Yeah.
Flo [00:35:51]: I mean, that's exactly what we do. Like, all day, we always ask ourselves, oh, when we have a feature idea or a feature request, we ask ourselves, like, is this the kind of thing that just gets better while we sleep because models get better? I'm reminded, again, when we started this in 2022, we spent a lot of time because we had to around context pruning because 4,000 tokens is really nothing. You really can't do anything with 4,000 tokens. All that work was throwaway work. Like, now it's like it was for nothing, right? Now we just assume that infinite context windows are going to be here in a year or something, a year and a half, and infinitely cheap as well, and dynamic compute is going to be here. Like, we just assume all of these things are going to happen, and so we really focus, our job to be done in the industry is to provide the input and output to the model. I really compare it all the time to the PC and the CPU, right? Apple is busy all day. They're not like a CPU wrapper. They have a lot to build, but they don't, well, now actually they do build the CPU as well, but leaving that aside, they're busy building a laptop. It's just a lot of work to build these things. It's interesting because, like,
Swyx [00:36:45]: for example, another person that we're close to, Mihaly from Repl.it, he often says that the biggest jump for him was having a multi-agent approach, like the critique thing that you just said that you don't need, and I wonder when, in what situations you do need that and what situations you don't. Obviously, the simple answer is for coding, it helps, and you're not coding, except for, are you still generating code? In Lindy? Yeah.
Flo [00:37:09]: No, we do. Oh, right. No, no, no, the cognitive architecture changed. We don't, yeah.
Swyx [00:37:13]: Yeah, okay. For you, you're one shot, and you chain tools together, and that's it. And if the user really wants
Flo [00:37:18]: to have this kind of critique thing, you can also edit the prompt, you're welcome to. I have some of my Lindys, I've told them, like, hey, be careful, think step by step about what you're about to do, but that gives you a little bump for some use cases, but, yeah.
Alessio [00:37:30]: What about unexpected model releases? So, Anthropic released computer use today. Yeah. I don't know if many people were expecting computer use to come out today. Do these things make you rethink how to design, like, your roadmap and things like that, or are you just like, hey, look, whatever, that's just, like, a small thing in their, like, AGI pursuit, that, like, maybe they're not even going to support, and, like, it's still better for us to build our own integrations into systems and things like that. Because maybe people will say, hey, look, why am I building all these API integrations
Flo [00:38:02]: when I can just do computer use and never go to the product? Yeah. No, I mean, we did take into account computer use. We were talking about this a year ago or something, like, we've been talking about it as part of our roadmap.
It's been clear to us that it was coming. My philosophy about it is anything that can be done with an API must be done by an API or should be done by an API for a very long time. I think it is dangerous to be overly cavalier about improvements of model capabilities. I'm reminded of iOS versus Android. Android was built on the JVM. There was a garbage collector, and I can only assume that the conversation that went down in the engineering meeting room was, oh, who cares about the garbage collector? Anyway, Moore's law is here, and so that's all going to go to zero eventually. Sure, but in the meantime, you are operating on a 400 MHz CPU. It was like the first CPU on the iPhone 1, and it's really slow, and the garbage collector is introducing a tremendous overhead on top of that, especially a memory overhead. For the longest time, and it's really only been recently that Android caught up to iOS in terms of how smooth the interactions were, but for the longest time, Android phones were significantly slower
Swyx [00:39:07]: and laggier
Flo [00:39:08]: and just not feeling as good as iOS devices. Look, when you're talking about orders of magnitude of differences in terms of performance and reliability, which is what we are talking about when we're talking about API use versus computer use, then you can't ignore that, right? And so I think we're going to be in an API use world for a while.
Swyx [00:39:27]: O1 doesn't have API use today. It will have it at some point, and it's on the roadmap. There is a future in which OpenAI goes much harder after your business, your market, than it is today. Like, ChatGPT, it's its own business. All they need to do is add tools to the ChatGPT, and now they're suddenly competing with you. And by the way, they have a GPT store where a bunch of people have already configured their tools to fit with them. Is that a concern?
Flo [00:39:56]: I think even the GPT store, in a way, like the way they architect it, for example, their plug-in systems are actually great for us because we can also use the plug-ins. It's very open. Now, again, I think it's going to be such a huge market. I think there's going to be a lot of different jobs to be done. I know they have a huge enterprise offering and stuff, but today, ChatGPT is a consumer app. And so, the sort of flow detail I showed you, this sort of workflow, this sort of use cases that we're going after, which is like, we're doing a lot of lead generation and lead outreach and all of that stuff. That's not something like meeting recording, like Lindy today right now joins your Zoom meetings and takes notes, all of that stuff.
Swyx [00:40:34]: I don't see that so far
Flo [00:40:35]: on the OpenAI roadmap.
Swyx [00:40:36]: Yeah, but they do have an enterprise team that we talk to. You're hiring GMs?
Flo [00:40:42]: We did.
Swyx [00:40:43]: It's a fascinating way to build a business, right? Like, what should you, as CEO, be in charge of? And what should you basically hire
Flo [00:40:52]: a mini CEO to do? Yeah, that's a good question. I think that's also something we're figuring out. The GM thing was inspired from my days at Uber, where we hired one GM per city or per major geo area. We had like all GMs, regional GMs and so forth. And yeah, Lindy is so horizontal that we thought it made sense to hire GMs to own each vertical and the go-to market of the vertical and the customization of the Lindy templates for these verticals and so forth. What should I own as a CEO?
I mean, the canonical reply here is always going to be, you know, you own the fundraising, you own the culture, you own the... What's the rest of the canonical reply? The culture, the fundraising.
Swyx [00:41:29]: I don't know,
Flo [00:41:30]: products. Even that, eventually, you do have to hand out. Yes, the vision, the culture, and the foundation. Well, you've done your job as a CEO. In practice, obviously, yeah, I mean, all day, I do a lot of product work still and I want to keep doing product work for as long as possible.
Swyx [00:41:48]: Obviously, like you're recruiting and managing the team. Yeah.
Flo [00:41:52]: That one feels like the most automatable part of the job, the recruiting stuff.
Swyx [00:41:56]: Well, yeah. You saw my
Flo [00:41:59]: design your recruiter here. Relationship between Factorio and building Lindy. We actually very often talk about how the business of the future is like a game of Factorio. Yeah. So, in the instance, it's like Slack and you've got like 5,000 Lindys in the sidebar and your job is to somehow manage your 5,000 Lindys. And it's going to be very similar to company building because you're going to look for like the highest leverage way to understand what's going on in your AI company and understand what levers do you have to make impact in that company. So, I think it's going to be very similar to like a human company except it's going to go infinitely faster. Today, in a human company, you could have a meeting with your team and you're like, oh, I'm going to build a facility and, you know, now it's like, okay,
Swyx [00:42:40]: boom, I'm going to spin up 50 designers. Yeah. Like, actually, it's more important that you can clone an existing designer that you know works because the hiring process, you cannot clone someone because every new person you bring in is going to have their own tweaks
Flo [00:42:54]: and you don't want that. Yeah.
Swyx [00:42:56]: That's true. You want an army of mindless drones
Flo [00:42:59]: that all work the same way.
Swyx [00:43:00]: The reason I bring this, bring Factorio up as well is one, Factorio: Space Age just came out. Apparently, a whole bunch of people stopped working. I tried out Factorio. I never really got that much into it. But the other thing was, you had a tweet recently about how the sort of intentional top-down design was not as effective as just build. Yeah. Just ship.
Flo [00:43:21]: I think people read a little bit too much into that tweet. It went weirdly viral. I was like, I did not intend it as a giant statement online.
Swyx [00:43:28]: I mean, you notice you have a pattern with this, right? Like, you've done this for eight years now.
Flo [00:43:33]: You should know. I legit was just hearing an interesting story about the Factorio game I had. And everybody was like, oh my God, so deep. I guess this explains everything about life and companies. There is something to be said, certainly, about focusing on the constraint. And I think it is Patrick Collison who said, people underestimate the extent to which moonshots are just one pragmatic step taken after the other. And I think as long as you have some inductive bias about, like, some loose idea about where you want to go, I think it makes sense to follow a sort of greedy search along that path. I think planning and organizing is important. And having order is important.
Swyx [00:44:05]: I'm wrestling with that. There's two ways I encountered it recently. One with Lindy.
When I tried out one of your automation templates and one of them was quite big and I just didn't understand it, right? So, like, it was not as useful to me as a small one that I can just plug in and see all of. And then the other one was me using Cursor. I was very excited about O1 and I just up front
Flo [00:44:27]: stuffed everything
Swyx [00:44:28]: I wanted to do into my prompt and expected O1 to do everything. And it got itself into a huge jumbled mess and it was stuck. It was really... There was no amount... I wasted, like, two hours on just, like, trying to get out of that hole. So I threw away the code base, started small, switched to Claude Sonnet and built up something working and just added to it over time and it just worked. And to me, that was the Factorio sentiment, right? Maybe I'm one of those fanboys that's just, like, obsessing over the depth of something that you just randomly tweeted out. But I think it's true for company building, for Lindy building, for coding.
Flo [00:45:02]: I don't know. I think it's fair and I think, like, you and I talked about there's the Tuft & Metal principle and there's this other... Yes, I love that. There's the... I forgot the name of this other blog post but it's basically about this book Seeing Like a State that talks about the need for legibility and people who optimize the system for its legibility and anytime you make a system... So legible is basically more understandable. Anytime you make a system more understandable from the top down, it performs less well from the bottom up. And it's fine but you should at least make this trade-off with your eyes wide open. You should know, I am sacrificing performance for understandability, for legibility. And in this case, for you, it makes sense. It's like you are actually optimizing for legibility. You do want to understand your code base but in some other cases it may not make sense. Sometimes it's better to leave the system alone and let it be its glorious, chaotic, organic self and just trust that it's going to perform well even though you don't understand it completely.
Swyx [00:45:55]: It does remind me of a common managerial issue or dilemma which you experienced in the small scale of Lindy where, you know, do you want to organize your company by functional sections or by products or, you know, whatever the opposite of functional is. And you tried it one way and it was more legible to you as CEO but actually it stopped working at the small level. Yeah.
Flo [00:46:17]: I mean, one very small example, again, at a small scale is we used to have everything on Notion. And for me, as founder, it was awesome because everything was there. The roadmap was there. The tasks were there. The postmortems were there. And so, the postmortem was linked
Swyx [00:46:31]: to its task.
Flo [00:46:32]: It was optimized for you. Exactly. And so, I had this, like, one pane of glass and everything was on Notion. And then the team, one day,
Swyx [00:46:39]: came to me with pitchforks
Flo [00:46:40]: and they really wanted to implement Linear. And I had to bite my fist so hard. I was like, fine, do it. Implement Linear. Because I was like, at the end of the day, the team needs to be able to self-organize and pick their own tools.
Alessio [00:46:51]: Yeah. But it did make the company slightly less legible for me. Another big change you had was going away from remote work, every other month. The discussion comes up again. What was that discussion like? How did your feelings change?
Was there kind of like a threshold of employees and team size where you felt like, okay, maybe that worked. Now it doesn't work anymore. And how are you thinking about the future
Flo [00:47:12]: as you scale the team? Yeah. So, for context, I used to have a business called TeamFlow. The business was about building a virtual office for remote teams. And so, being remote was not merely something we did. It was, I was banging the remote drum super hard and helping companies to go remote. And so, frankly, in a way, it's a bit embarrassing for me to do a 180 like that. But I guess, when the facts changed, I changed my mind. What happened? Well, I think at first, like everyone else, we went remote by necessity. It was like COVID and you've got to go remote. And on paper, the gains of remote are enormous. In particular, from a founder's standpoint, being able to hire from anywhere is huge. Saving on rent is huge. Saving on commute is huge for everyone and so forth. But then, look, we're all here. It's like, it is really making it much harder to work together. And I spent three years of my youth trying to build a solution for this. And my conclusion is, at least we couldn't figure it out and no one else could. Zoom didn't figure it out. We had like a bunch of competitors. Like, Gathertown was one of the bigger ones. We had dozens and dozens of competitors. No one figured it out. I don't know that software can actually solve this problem. The reality of it is, everyone just wants to get off the darn Zoom call. And it's not a good feeling to be in your home office if you're even going to have a home office all day. It's harder to build culture. It's harder to get in sync. I think software is peculiar because it's like an iceberg. It's like the vast majority of it is submerged underwater. And so, the quality of the software that you ship is a function of the alignment of your mental models about what is below that waterline. Can you actually get in sync about what it is exactly fundamentally that we're building? What is the soul of our product? And it is so much harder to get in sync about that when you're remote. And then you waste time in a thousand ways because people are offline and you can't get a hold of them or you can't share your screen. It's just like you feel like you're walking in molasses all day. And eventually, I was like, okay, this is it. We're not going to do this anymore.
Swyx [00:49:03]: Yeah. I think that is the current builder San Francisco consensus here. Yeah. But I still have a big... One of my big heroes as a CEO is Sid Sijbrandij from GitLab.
Flo [00:49:14]: Mm-hmm.
Swyx [00:49:15]: Matt Mullenweg
Flo [00:49:16]: used to be a hero.
Swyx [00:49:17]: But these people run thousand-person remote businesses. The main idea is that at some company
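The correction-retrieval mechanism Flo describes earlier in this excerpt — save the edit a user makes to an agent's draft, embed it, and pull the most similar past corrections back into the prompt as few-shot examples — can be sketched in a few lines. The sketch below is a minimal, illustrative take in plain Java, not Lindy's actual implementation; `EmbeddingClient` and every other name in it are invented for the example.

```java
import java.util.*;
import java.util.stream.Collectors;

// Hypothetical stand-in for whatever embedding API the platform actually uses.
interface EmbeddingClient {
    float[] embed(String text);
}

record Correction(String originalDraft, String userEdit, float[] embedding) {}

class CorrectionStore {
    private final EmbeddingClient embeddings;
    private final List<Correction> corrections = new ArrayList<>();

    CorrectionStore(EmbeddingClient embeddings) {
        this.embeddings = embeddings;
    }

    // Called when the user edits a draft before approving it.
    void record(String draft, String edit) {
        corrections.add(new Correction(draft, edit, embeddings.embed(draft)));
    }

    // Retrieve the k past corrections most similar to the current task.
    List<Correction> similarTo(String task, int k) {
        float[] query = embeddings.embed(task);
        return corrections.stream()
                .sorted(Comparator.comparingDouble((Correction c) -> -cosine(query, c.embedding())))
                .limit(k)
                .collect(Collectors.toList());
    }

    // Inject retrieved examples into the context window as few-shot guidance.
    String buildPrompt(String basePrompt, String task, int k) {
        StringBuilder sb = new StringBuilder(basePrompt);
        for (Correction c : similarTo(task, k)) {
            sb.append("\n\nPrevious draft:\n").append(c.originalDraft())
              .append("\nHow the user rewrote it:\n").append(c.userEdit());
        }
        return sb.append("\n\nNew task:\n").append(task).toString();
    }

    private static double cosine(float[] a, float[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }
}
```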

A Bootiful Podcast
engineer, CTO, teacher, and pilot Ken Sipe

A Bootiful Podcast

Play Episode Listen Later Nov 14, 2024 80:14


Hi, Spring-fans, JVM enjoyers, and cloud natives! Have I got a treat for you today! We're going to be talking to my longtime pal Ken Sipe. #groovy #java #kotlin #go #rust #spring #jvm

javaswag
#71 - Alexey Zhidkov - the ergonomic approach and architecture decomposition

javaswag

Play Episode Listen Later Nov 12, 2024 108:51


In episode 71 of the Javaswag podcast we talked with Alexey Zhidkov about the ergonomic approach to designing a project's architecture. 00:00 Intro 12:06 Working as a consultant 17:38 The ergonomic approach and its principles 26:44 Applying development principles in practice 30:55 Difficulties of adopting DDD in practice 37:15 The popularity of DDD and its real effectiveness 39:33 TDD and its place in the ergonomic approach 41:00 Testing as the foundation of development 43:55 Problems with mocks in testing 48:50 Architectural approaches and JPA 51:01 Functional architecture and its impact on development 55:36 Problems with ORM and Hibernate 01:00:03 Ergonomics and alternatives to ORM 01:01:53 An immutable data model 01:05:58 The ergonomic approach in development 01:08:32 Discussing the tech stack and its evolution 01:11:21 The ergonomic approach in project development 01:17:14 Problems of object-oriented programming 01:20:56 Decomposing a system and creating APIs 01:22:38 Testing and TDD-style development 01:27:24 The economics of ergonomic architecture 01:30:59 Elements of the ergonomic approach 01:40:15 Multithreading problems 01:42:58 An unpopular opinion Guest - https://t.me/ergonomic_code Links: Telegram channel https://t.me/ergonomic_code Alexey's site https://azhidkov.pro/ The many faces of the Single Responsibility Principle - my breakdown of the formulations and interpretations of the Single Responsibility Principle given by Uncle Bob himself. FizzBuzz Enterprise Edition - an example of taking the Open-Closed Principle to the point of absurdity. SOLID Deconstruction - Kevlin Henney - from 28:23 the speaker argues that the Liskov Substitution Principle is nonsense: to comply with it you cannot override methods, only add new ones that the client program knows nothing about. Domain-Driven Design: Tackling Complexity in the Heart of Software - the original DDD book. Unit Testing Principles, Practices and Patterns - the best book on testing backends available today. Vladimir Khorikov's site. REHEARSAL: Structural design. The ancient secret of simple and fast code - a rehearsal of my talk for Joker '24. REHEARSAL: Functional architecture and Spring Data JDBC. Four years in production, working great - a rehearsal of my second talk for Joker '24, which ended up becoming a lightning talk. Why is Java making so many things immutable? - a post on Oracle's blog where the author says, in essence, "don't worry, the GC is optimized for fast object allocation." Trainer Advisor - a real project built with the ergonomic approach. Effects diagram - the diagram I use to decompose the core/domain/model (entities and integrations) into modules. Alexey Zhidkov — A rational approach to decomposing systems into modules or microservices - my JPoint '23 talk with an algorithm for decomposing the effects diagram. Lean Architecture: for Agile Software Development; The Transformation Priority Premise; Code That Fits in Your Head - how to get out of the dead end where the production code hardcodes OK 200 and the test is green. How I turned a legacy project into a gem in six months. Volume 1 - my post about how I reworked a project using the ergonomic approach and made the team three times faster. The Cognitive Complexity metric, or a simple way to measure code complexity - the best alternative to cyclomatic complexity. Code Complexity - an IDEA plugin that shows cognitive complexity right in the editor. Alan Kay at OOPSLA 1997 - The computer revolution hasn't happened yet - Alan Kay says he did not have C++ in mind when he coined the term OOP. Dr. Alan Kay on the Meaning of "Object-Oriented Programming" - Alan Kay explains what he actually meant by OOP. Keep safe!
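For readers who want a concrete picture of the immutable data model and functional-core style discussed in the episode, here is a minimal sketch in plain Java; the domain names are invented for the example, and a real project would keep persistence (for instance Spring Data JDBC, mentioned in the links) at the edges.

```java
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

// The domain is plain immutable records plus pure functions; all I/O stays outside.
record OrderLine(String sku, int quantity, BigDecimal unitPrice) {}

record Order(long id, List<OrderLine> lines) {

    // A pure domain operation: no mutation, it returns a new value.
    Order addLine(OrderLine line) {
        var newLines = new ArrayList<>(lines);
        newLines.add(line);
        return new Order(id, List.copyOf(newLines));
    }

    BigDecimal total() {
        return lines.stream()
                .map(l -> l.unitPrice().multiply(BigDecimal.valueOf(l.quantity())))
                .reduce(BigDecimal.ZERO, BigDecimal::add);
    }
}
```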

A Bootiful Podcast
Baruch Sadogursky on Gradle, Java, developer productivity, and more

A Bootiful Podcast

Play Episode Listen Later Nov 7, 2024 54:13


Hi, Spring fans! In this installment, I talk to legendary Gradle Developer Productivity Engineering guru (formerly of JFrog) and hero to the JVM-language community, Baruch Sadogursky, recorded live from Dr. Venkat Subramaniam's amazing conference, Dev2Next 2024!

Modern Web
Modern Web Podcast S12E37 - Java's AI Evolution: Semantic Caching, JVM, and GenAI Architectures with Theresa Mammarella & Brian Sam-Bodden

Modern Web

Play Episode Listen Later Oct 29, 2024 24:32


In this episode of the Modern Web Podcast, Danny Thompson, Director of Technology at This Dot Labs, hosts a conversation with Theresa Mammarella, JVM engineer at IBM, and Brian Sam-Bodden, Applied AI Engineer at Redis. They explore their talks at JCONF in Dallas, Texas, covering topics like GenAI architectures in the Java community and OpenJDK's Project Valhalla. Their conversation covers Java's evolution, AI applications, semantic caching, and how these technologies are impacting development workflows and performance optimization. Chapters 00:00 - Introduction   01:00 - Brian on GenAI in the Java Community   01:47 - Java's Safe Evolution Path   02:17 - Teresa on Project Valhalla   03:54 - Value Classes and Performance   04:33 - Brian on Semantic Caching   06:54 - Challenges of Rewording Prompts   09:15 - What is RAG Architecture?   11:34 - Java's Role in AI   13:57 - Cost of LLMs and Caching Strategies   15:57 - Teresa on Java's Future   18:22 - Learning Resources for Java Developers   20:44 - Addressing Misconceptions About Java   22:39 - Final Thoughts   Follow Theresa Mammarella & Brian Sam on Social Media Theresa Mammarella Twitter: https://x.com/t_mammarella?lang=en Brian Sam-Bodden Twitter: https://x.com/bsbodden Theresa Mammarella Linkedin: https://www.linkedin.com/in/tmammarella/ Brian Sam-Bodden Linkedin:  https://www.linkedin.com/in/sambodden/
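As a rough illustration of the semantic-caching idea covered in the episode: embed the incoming prompt, reuse a cached answer whose stored prompt is close enough, and only call the model on a miss. The sketch below keeps everything in memory for brevity; a production setup would use a vector-capable store such as Redis, and `EmbeddingClient`/`LlmClient` are hypothetical stand-ins rather than any specific library's API.

```java
import java.util.ArrayList;
import java.util.List;

interface EmbeddingClient { float[] embed(String text); }
interface LlmClient { String complete(String prompt); }

class SemanticCache {
    private record Entry(float[] embedding, String answer) {}

    private final List<Entry> entries = new ArrayList<>();
    private final EmbeddingClient embeddings;
    private final LlmClient llm;
    private final double threshold;

    SemanticCache(EmbeddingClient embeddings, LlmClient llm, double threshold) {
        this.embeddings = embeddings;
        this.llm = llm;
        this.threshold = threshold; // e.g. 0.92 cosine similarity
    }

    String answer(String prompt) {
        float[] q = embeddings.embed(prompt);
        for (Entry e : entries) {
            if (cosine(q, e.embedding()) >= threshold) {
                return e.answer();           // cache hit: skip the expensive LLM call
            }
        }
        String fresh = llm.complete(prompt); // cache miss: pay for one call, then remember it
        entries.add(new Entry(q, fresh));
        return fresh;
    }

    private static double cosine(float[] a, float[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) { dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i]; }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }
}
```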

javaswag
#70 - Alexey Zakharchenko - outstaffing, a crypto exchange in Java, and Redis

javaswag

Play Episode Listen Later Oct 24, 2024 125:57


In episode 70 of the Javaswag podcast we talked with Alexey Zakharchenko about outstaffing and building a crypto exchange in Java. 00:00 Intro 05:48 Outstaffing and the Crossover company 20:17 Monstrous projects and their consequences 26:15 Time tracking and its impact on developers 35:40 Cheating the tracking system 42:09 Bitso 46:08 Exchange technologies and architecture 58:44 The monolith and its metrics 01:03:00 Choosing between Spring and Micronaut 01:09:00 Asynchrony and multithreading in development 01:14:17 Redis and atomic operations 01:20:31 Fractional numbers 01:23:28 Stored procedures in Redis 01:31:21 Redis streams 01:36:21 Load management and the bull run 01:45:20 Balancing risk and cost 01:48:22 Differences between engineering levels 01:53:48 Unpopular opinions Guest - https://www.linkedin.com/in/chess/ Links: https://medium.com/bitso-engineering/the-redis-streams-we-have-known-and-loved-e9e596d49a22 https://martinfowler.com/articles/lmax.html Keep safe! 🖖
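The "stored procedures in Redis" and "atomic operations" chapters refer to running Lua scripts server-side so that a check-and-update cannot interleave with other clients. Below is a minimal sketch using the open-source Jedis client, assuming a local Redis and invented key names; a real exchange would also keep amounts in integer minor units rather than floating point, which is what the "fractional numbers" chapter is about.

```java
import redis.clients.jedis.Jedis;
import java.util.List;

public class AtomicDebit {

    // The Lua script runs atomically inside Redis: read, check, and write as one unit.
    private static final String DEBIT_SCRIPT = """
            local balance = tonumber(redis.call('GET', KEYS[1]) or '0')
            local amount = tonumber(ARGV[1])
            if balance >= amount then
              redis.call('SET', KEYS[1], balance - amount)
              return 1
            else
              return 0
            end
            """;

    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.set("balance:user:42", "100");
            Object ok = jedis.eval(DEBIT_SCRIPT,
                    List.of("balance:user:42"),   // KEYS
                    List.of("30"));               // ARGV
            System.out.println(1L == (Long) ok
                    ? "debited, new balance: " + jedis.get("balance:user:42")
                    : "insufficient funds");
        }
    }
}
```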

Modernize or Die ® Podcast - CFML News Edition
Modernize or Die® - CFML News Podcast for October 22nd, 2024 - Episode 221

Modernize or Die ® Podcast - CFML News Edition

Play Episode Listen Later Oct 22, 2024 47:33


2024-10-22 Weekly News — Episode 221Watch the video version on YouTube at https://youtube.com/live/j-e_y4OwuCw?feature=shareHosts: Gavin Pickin - Senior Developer at Ortus SolutionsThanks to our Sponsor - Ortus SolutionsThe makers of ColdBox, CommandBox, ForgeBox, TestBox and all your favorite box-es out there including BoxLang.A few ways to say thanks back to Ortus Solutions:Buy Tickets to Into the Box 2025 in Washington DC https://t.co/cFLDUJZEyMApril 30, 2025 - May 2, 2025 - Washington, DCLike and subscribe to our videos on YouTube. Help ORTUS reach for the Stars - Star and Fork our ReposStar all of your Github Box Dependencies from CommandBox with https://www.forgebox.io/view/commandbox-github Subscribe to our Podcast on your Podcast Apps and leave us a reviewSign up for a free or paid account on CFCasts, which is releasing new content regularlyBOXLife store: https://www.ortussolutions.com/about-us/shopBuy Ortus's Books102 ColdBox HMVC Quick Tips and Tricks on GumRoad (http://gum.co/coldbox-tips)Now on Amazon! In hardcover too!!!https://www.amazon.com/dp/B0CJHB712MLearn Modern ColdFusion (CFML) in 100+ Minutes - Free online https://modern-cfml.ortusbooks.com/ or buy an EBook or Paper copy https://www.ortussolutions.com/learn/books/coldfusion-in-100-minutes Patreon Support ()We have 59 patreons: https://www.patreon.com/ortussolutions. News and AnnouncementsLucee 6.1.1 (6.1.1.100-RC) Release CandidateThere is a new Lucee 6.1.1.100-RC release candidate available for testing. Give it a try and share your feedback with us.What's New?This release focuses mainly on bug fixes, along with a few useful enhancements.https://dev.lucee.org/t/lucee-6-1-1-6-1-1-100-rc-release-candidate/14353 ColdFusion 2023 and 2021 October 15th, 2024 updatesWe are pleased to announce that we have released general updates to ColdFusion (2023 release) Update 11 and ColdFusion (2021 release) Update 17. The updates include bug fixes and enhancements in Administrator, Language, CFSetup, Database, and other areas. They also contain library upgrades, such as netty, ehcache, etc. The updates also include enhancements to whitespace management and client variable support in CFPM.Known issues in the updateThe PDF Services page in ColdFusion Administrator does not load even with the HTMLToPDF package installedhttps://coldfusion.adobe.com/2024/10/released-coldfusion-2023-and-2021-october-15th-2024-updates/CF Summit India AnnouncedWe are excited to announce that the Adobe ColdFusion India Summit 2024 is happening on December 7, 2024, and this year, we're bringing the event to two vibrant cities: Bengaluru and Noida. Whether you're a seasoned developer or just beginning your journey in web development, this free summit offers a unique opportunity to learn, connect, and grow with the best minds in the industry.https://coldfusion.adobe.com/2024/10/get-ready-for-adobe-coldfusion-india-summit-2024/ Announcing Java updates of Oct 2024 for 8, 11, 17, 21, and 23: thoughts and resourcesIt's that time again: there are new JVM updates released today (Oct 15, 2024) for the current long-term support (LTS) releases of Oracle Java, 8, 11, 17, and 21, as well as the new short-term release 23. (The previous short-term release, Java 22, is no longer updated.)TLDR: The new updates are 1.8.0_431 (aka 8u431), 11.0.25, 17.0.13, 21.0.5, and 23.0.1 respectively. Crazy that there are now 5 current Java releases, I realize. 
More below, including more on each of them including what changed as well as bug fixes and the security fixes each version contains (including their CVE scores regarding urgency of concerns), which are offered in Oracle resources I list below.https://www.carehart.org/blog/2024/10/15/java_updates_oct_2024 PayPal's NVP/SOAP API for Website Payments Pro accounts suddenly stopped working sometime early OctoberPayPal's NVP/SOAP API for Website Payments Pro accounts suddenly stopped working sometime around October 4th (possibly Sep 30). Some developers that reported having the issue were using legacy classic ASP and others were using ColdFusion. I believe we've been using the PayPal DoDirectPayment API since it was introduced back in 2002.At some point, PayPal added the following undated disclaimer to their documentation. (According to Microsoft Copilot, "PayPal's NVP (Name-Value Pair) API was marked as "legacy" around October 12th, 2021".)CFPayment (retired) supports WPP & Payflow, but not the new REST API method. Searching online for "ColdFusion (or cfml) paypal rest api" didn't return anything beneficial, so it became apparent that there was a need for a solution... any solution.James Moberg has an updated Paypal Rest API Cfc available here: https://dev.to/gamesover/coldfusion-paypal-rest-api-cfc-339p Secure Your ColdFusion Perpetual License Before Adobe's Subscription-Only SwitchFollowing Adobe's announcement at the Adobe ColdFusion Summit in Las Vegas, ColdFusion will transition to a subscription-only licensing model. This major shift in licensing strategy means developers and organizations have a limited window to secure their final perpetual ColdFusion license.While we don't know the date for the Adobe switch, FusionReactor customers have an exclusive opportunity to secure their last perpetual license and save significantly in the process. This final offer has been extended to December 31, 2024, giving organizations more months to make this crucial decision.https://fusion-reactor.com/blog/secure-your-coldfusion-perpetual-license-before-adobes-subscription-only-switch/ Microsoft Copilot is a little Snarky about ColdFusion

javaswag
#68 - Artem Boyarshinov - payments on Akka, distributed systems, and identifiers

javaswag

Play Episode Listen Later Oct 4, 2024 137:36


In episode 68 of the Javaswag podcast we talked with Artem Boyarshinov about payments on Akka, distributed systems, and identifiers. 00:00 Intro 06:44 Moving to Java 12:16 Development tools and version control 18:00 Experience at a project-based company 23:52 Challenges and solutions under load 26:33 Optimizing Postgres queries 31:31 The Faster Payments System: introduction and growth 39:36 Technologies behind the Faster Payments System 48:11 Pre-payment stages and their mechanics 55:53 Architecture and load distribution in the system 01:00:12 Coroutine serialization and version migration 01:04:50 Actor state and behavior in Akka 01:11:02 Akka upgrade and licensing problems 01:13:51 Alternatives to Akka 01:17:17 Monitoring and tracing in the Faster Payments System 01:23:23 Transaction identifiers 01:29:24 Generating identifiers in distributed systems 01:38:27 Timestamps and their role in identifiers 01:45:15 Problems with identifier uniqueness 01:51:50 Identifier generation 01:54:22 A reply to a previous unpopular opinion 01:58:10 An unpopular opinion 02:02:20 Blitz 02:09:54 Closing thoughts and recommendations Guest - https://github.com/Boiarshinov Links: Talk: The Faster Payments System — payment links and where they live. Talk: Distributed generation of unique identifiers. The programming knowledge base Artem keeps for himself. Keep safe!
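The chapters on generating identifiers in distributed systems and on timestamps inside identifiers describe Snowflake-style schemes. Here is a compact, illustrative Java sketch; the bit layout and epoch are arbitrary choices for the example, and a production generator would also have to handle clocks moving backwards, one of the uniqueness problems discussed in the episode.

```java
public class SnowflakeIdGenerator {

    private static final long CUSTOM_EPOCH = 1_600_000_000_000L; // arbitrary epoch for the example
    private static final int NODE_BITS = 10;      // up to 1024 nodes
    private static final int SEQUENCE_BITS = 12;  // 4096 ids per node per millisecond
    private static final long MAX_SEQUENCE = (1L << SEQUENCE_BITS) - 1;

    private final long nodeId;
    private long lastTimestamp = -1L;
    private long sequence = 0L;

    public SnowflakeIdGenerator(long nodeId) {
        if (nodeId < 0 || nodeId >= (1L << NODE_BITS)) {
            throw new IllegalArgumentException("nodeId out of range");
        }
        this.nodeId = nodeId;
    }

    public synchronized long nextId() {
        long now = System.currentTimeMillis();
        if (now == lastTimestamp) {
            sequence = (sequence + 1) & MAX_SEQUENCE;
            if (sequence == 0) {
                // sequence exhausted for this millisecond: busy-wait for the next one
                while ((now = System.currentTimeMillis()) <= lastTimestamp) { /* spin */ }
            }
        } else {
            sequence = 0;
        }
        lastTimestamp = now;
        // timestamp in the high bits, then node id, then per-millisecond sequence
        return ((now - CUSTOM_EPOCH) << (NODE_BITS + SEQUENCE_BITS))
                | (nodeId << SEQUENCE_BITS)
                | sequence;
    }
}
```

Because the timestamp occupies the high bits, such ids sort roughly by creation time without any coordination between nodes.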

A Bootiful Podcast
Bruce Eckel, ”Thinking in Java” author and one of the most influential authors in my career

A Bootiful Podcast

Play Episode Listen Later Oct 3, 2024 98:25


Hi, JVM fans! In this installment, I talk to Bruce Eckel, who wrote - among countless other classics - "Thinking in Java" and "Thinking in C++," both of which were instrumental in helping me find my way into IT, Java, and community. Bruce is a personal hero of mine and I was so glad to sit down and talk to him, live, from Dev2Next, in Denver, Colorado, about his storied career, his books, learnings, and his new book, "Effect Oriented Programming" . #Java #Scala #Kotlin #Rust #Python #C++ * https://effectorientedprogramming.com "Effect Oriented Programming" * https://podcasts.apple.com/us/podcast/happy-path-programming/id1531666706 - Bruce and our mutual friend James Ward have an awesome podcast, too!

javaswag
#69 - Dmitry Chuiko - starting Java projects from scratch

javaswag

Play Episode Listen Later Oct 3, 2024 115:08


In episode 69 of the Javaswag podcast we talked with Dmitry Chuiko about starting Java projects from scratch. 00:00 Intro 10:09 The technology stack and its evolution 17:04 Growth and development in a developer's career 23:07 The road to the MoySklad startup 25:00 Technologies and architecture of a SaaS system 30:41 Problems and solutions in development 32:09 Finding and hiring developers 41:07 Balancing MVP and scaling 46:52 Starting a new project: approach and strategy 55:00 Team communication: the importance of agreements 01:00:01 Kubernetes: necessity and use in projects 01:05:57 Choosing technologies: how to narrow the set for a project 01:10:12 The evolution of Spring and Java 01:17:29 Kotlin versus Java: comparison and trends 01:24:10 Financial aspects of development in startups 01:28:50 Microservice architecture and data management 01:33:17 Temporal consistency and its use 01:35:02 Moving to an international team 01:36:46 Cultural differences in an international environment 01:38:40 Objective metrics and career growth 01:40:06 Preparing for a promotion and the importance of achievements 01:42:26 Metrics and their value for the business 01:45:04 Startups and technology choices 01:51:07 The role of tests in development 01:53:02 Blitz Guest - https://www.linkedin.com/in/dchuiko/ Links: tx outbox: https://github.com/gruelbox/transaction-outbox Keep safe!
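The transaction-outbox link above points to one library implementing the transactional outbox pattern. Independently of that library's API, the core idea can be sketched with plain JDBC: write the business row and the event row in the same database transaction, and let a separate poller publish the events later. Table and column names below are invented for the example.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.sql.DataSource;

public class OrderService {

    private final DataSource dataSource;

    public OrderService(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void placeOrder(long orderId, String payloadJson) throws Exception {
        try (Connection c = dataSource.getConnection()) {
            c.setAutoCommit(false);
            try (PreparedStatement order = c.prepareStatement(
                         "INSERT INTO orders(id, status) VALUES (?, 'NEW')");
                 PreparedStatement outbox = c.prepareStatement(
                         "INSERT INTO outbox(aggregate_id, type, payload) VALUES (?, 'OrderPlaced', ?)")) {
                order.setLong(1, orderId);
                order.executeUpdate();
                outbox.setLong(1, orderId);
                outbox.setString(2, payloadJson);
                outbox.executeUpdate();
                c.commit();   // both rows become visible atomically
            } catch (Exception e) {
                c.rollback(); // neither the order nor the event is persisted
                throw e;
            }
        }
    }
    // A scheduled worker (not shown) would SELECT unpublished outbox rows,
    // push them to the message broker, and mark them as published.
}
```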

javaswag
#67 - Sergey Petrelevich - reactive applications, WebFlux, blocking code, and Micronaut

javaswag

Play Episode Listen Later Sep 26, 2024 132:02


In episode 67 of the Javaswag podcast we talked with Sergey Petrelevich about WebFlux, blocking code in a reactive application, and Micronaut. 00:00 Intro 02:52 The path from BASIC to Java 05:59 Experience with automated control systems 08:56 Moving to Java and working on banking software 12:07 Technology risks and project management 14:49 Payment systems: architecture and interactions 18:00 Virtual machines and their role in development 21:11 Conclusions about the future of Java 26:05 Technological advances and their impact on the banking sector 29:14 Architectural specifics of payment systems 33:26 Working at Deutsche Bank and Raiffeisen 36:39 Qualities of a successful developer in fintech 40:24 Understanding the Disruptor and its use in high-performance systems 45:01 The event loop and its role in modern applications 52:07 WebFlux and reactive programming in Java 53:07 Handling requests and streams in reactive systems 56:18 Problems with blocking code in reactive applications 01:00:01 Identifying and managing blocking calls 01:02:42 Advantages and disadvantages of reactive programming 01:07:35 Comparing frameworks: Micronaut, Quarkus, and Spring 01:18:05 Using GraalVM for native images 01:19:39 Comparing frameworks: Armeria and Vert.x 01:27:12 Virtual threads in Java: necessity and use 01:39:39 The modern Java stack: choosing technologies and libraries 01:46:48 Updating dependencies and anticipating problems 01:49:36 Balancing courses and real practice 01:50:51 Fundamental knowledge and its importance for developers 01:53:14 A critique of modern courses and their approaches 01:57:10 An unpopular opinion about Spring Data and Hibernate 02:10:07 A developer's broad horizons and the importance of learning other languages Guest - https://www.linkedin.com/in/sergey-petrelevich-72ab893a/ Links: a library for finding blocking calls: https://github.com/reactor/BlockHound a talk about the Disruptor: https://youtu.be/IsGBA9KEtTM?si=fSdka2PDiOgNViYJ my channel: https://www.youtube.com/@petrelevich a talk about Armeria: https://youtu.be/6SInub_v_bI?si=wT525f0lWXlRcCMf Keep safe! 🖖
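The BlockHound link above is the library the episode mentions for finding blocking calls inside reactive code. Here is a small, self-contained sketch of how it is typically used with Reactor; the exact failure message depends on library versions and JVM flags, so treat it as an illustration rather than a guaranteed output.

```java
import reactor.blockhound.BlockHound;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;
import java.time.Duration;

public class BlockingDetectionDemo {

    public static void main(String[] args) {
        BlockHound.install(); // instruments the JVM to flag blocking calls on non-blocking threads

        // This pipeline fails at runtime: Thread.sleep is a blocking call on a parallel thread.
        Mono.fromCallable(BlockingDetectionDemo::slowLegacyCall)
                .subscribeOn(Schedulers.parallel())
                .doOnError(e -> System.out.println("BlockHound caught: " + e))
                .onErrorReturn("failed")
                .block();

        // This pipeline is fine: the blocking work is isolated on the boundedElastic scheduler.
        String result = Mono.fromCallable(BlockingDetectionDemo::slowLegacyCall)
                .subscribeOn(Schedulers.boundedElastic())
                .block(Duration.ofSeconds(5));
        System.out.println(result);
    }

    private static String slowLegacyCall() throws InterruptedException {
        Thread.sleep(200); // stands in for JDBC, file I/O, or any other blocking work
        return "done";
    }
}
```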

Les Cast Codeurs Podcast
LCC 315 - les températures ne sont pas déterministes

Les Cast Codeurs Podcast

Play Episode Listen Later Sep 17, 2024 110:08


JVM summit, virtual threads, stacks applicatives, licences, déterminisme et LLMs, quantification, deux outils de l'épisode et bien plus encore. Enregistré le 13 septembre 2024 Téléchargement de l'épisode LesCastCodeurs-Episode–315.mp3 News Langages Netflix utilise énormément Java et a rencontré un problème avec les Virtual Thread dans Java 21. Les ingénieurs de Netflix analysent ce problème dans cet article : https://netflixtechblog.com/java–21-virtual-threads-dude-wheres-my-lock–3052540e231d Les threads virtuels peuvent améliorer les performances mais posent des défis. Un problème de locking a été identifié : les threads virtuels se bloquent mutuellement. Cela entraîne des performances dégradées et des instabilités. Netflix travaille à résoudre ces problèmes et à tirer pleinement parti des threads virtuels. Une syntax pour indiquer qu'un type est nullable ou null-restricted arriverait dans Java https://bugs.openjdk.org/browse/JDK–8303099 Foo! interdirait null Foo? indiquerait que null est accepté Foo?[]! serait un tableau non-null de valeur nullable Il y a aussi des idées de syntaxe pour initialiser les tableaux null-restricted JEP: https://openjdk.org/jeps/8303099 Les vidéos du JVM Language Summit 2024 sont en ligne https://www.youtube.com/watch?v=OOPSU4LnKg0&list=PLX8CzqL3ArzUEYnTa6KYORRbP3nhsK0L1 Project Leyden Update Project Babylon - Code Reflection Valhalla - Where Are We? An Opinionated Overview on Static Analysis for Java Rethinking Java String Concatenation Code Reflection in Action - Translating Java to SPIR-V Java in 2024 Type Specialization of Java Generics - What If Casts Have Teeth ? (avec notre Rémi Forax national !) aussi tip or tail pour tout l'ecosysteme quelques liens sur Babylon: Code reflection pour exprimer des langages etranger (SQL) dans Java: https://openjdk.org/projects/babylon/ et sont example en emulation de LINQ https://openjdk.org/projects/babylon/articles/linq Librairies Micronaut sort sa version 4.6 https://micronaut.io/2024/08/26/micronaut-framework–4–6–0-released/ essentiellement une grosse mise à jour de tonnes de modules avec les dernières versions des dépendances Microprofile 7 faire quelques changements et evolution incompatibles https://microprofile.io/2024/08/22/microprofile–7–0-release/#general enleve Metrics et remplace avec Telemetry (metrics, log et tracing) Metrics reste une spec mais standalone Microprofile 7 depende de Jakarta Core profile et ne le package plus Microprofile OpenAPI 4 et Telemetry 2 amenent des changements incompatibles Quarkus 3.14 avec LetsEncrypt et des serialiseurs JAckson sans reflection https://quarkus.io/blog/quarkus–3–14–1-released/ Hibernate ORM 6.6 Serialisateurs JAckson sans reflection installer des certificats letsencrypt simplement (notamment avec la ligne de commande qui aide sympa notamment avec ngrok pour faire un tunnel vers son localhost retropedalage sur @QuarkusTestResource vs @WithTestResource suite aux retour de OOME et lenteur des tests mieux isolés Les logs structurées dans Spring Boot 3.4 https://spring.io/blog/2024/08/23/structured-logging-in-spring-boot–3–4 Les logs structurées (souvent en JSON) vous permettent de les envoyer facilement vers des backends comme Elastic, AWS CloudWatch… Vous pouvez les lier à du reporting et de l'alerting. Spring Boot 3.4 prend en charge la journalisation structurée par défaut. Il prend en charge les formats Elastic Common Schema (ECS) et Logstash, mais il est également possible de l'étendre avec vos propres formats. 
Vous pouvez également activer la journalisation structurée dans un fichier. Cela peut être utilisé, par exemple, pour imprimer des journaux lisibles par l'homme sur la console et écrire des journaux structurés dans un fichier pour l'ingestion par machine. Infrastructure CockroachDB qui avait une approche Business Software License (source available puis ALS 3 ans apres), passe maintenant en license proprietaire avec source available https://www.cockroachlabs.com/blog/enterprise-license-announcement/ Polyform project offre des licences standardisees selon les besoins de gratuit vs payant https://polyformproject.org/ Cloud Azure fonctions, comment le demarrage a froid est optimisé https://www.infoq.com/articles/azure-functions-cold-starts/?utm_campaign=infoq_content&utm_source=twitter&utm_medium=feed&utm_term=Cloud fonctions ont une latence naturelle forte toutes les lantences longues ne sont aps impactantes pour le business les demarrages a froid peuvent etre mesures avec les outils du cloud provider donc faites en usage faites des decentilers de latences experience 381 ms cold et 10ms apres tracing pour end to end latence les strategies keep alive pings: reveiller la fonctione a intervalles reguliers pour rester “warm” dans le code de la fonction: initialiser les connections et le chargement des assemblies dans l'initialization configurer dans host.json le batching, desactiver file system logging etc deployer les fonctions as zips reduire al taille du code et des fichiers (qui sont copies sur le serveur froid) sur .net activer ready to run qui aide le JIT compiler instances azure avec plus de CPU et memoire sont plus cher amis baissent le cold start dedicated azure instances pour vos fonctions (pas aprtage avec les autres tenants) ensuite montre des exemples concrets Web Sortie de Vue.js 3.5 https://blog.vuejs.org/posts/vue–3–5 Vue.JS 3.5: Nouveautés clés Optimisations de performance et de mémoire: Réduction significative de la consommation de mémoire (–56%). Amélioration des performances pour les tableaux réactifs de grande taille. Résolution des problèmes de valeurs calculées obsolètes et de fuites de mémoire. Nouvelles fonctionnalités: Reactive Props Destructure: Simplification de la déclaration des props avec des valeurs par défaut. Lazy Hydration: Contrôle de l'hydratation des composants asynchrones. useId(): Génération d'ID uniques stables pour les applications SSR. data-allow-mismatch: Suppression des avertissements de désynchronisation d'hydratation. Améliorations des éléments personnalisés: Prise en charge de configurations d'application, d'API pour accéder à l'hôte et au shadow root, de montage sans Shadow DOM, et de nonce pour les balises. useTemplateRef(): Obtention de références de modèle via l'API useTemplateRef(). Teleport différé: Téléportation de contenu vers des éléments rendus après le montage du composant. onWatcherCleanup(): Enregistrement de callbacks de nettoyage dans les watchers. Data et Intelligence Artificielle On entend souvent parler de Large Language Model quantisés, c'est à dire qu'on utilise par exemple des entiers sur 8 bits plutôt que des floatants sur 32 bits, pour réduire les besoins mémoire des GPU tout en gardant une précision proche de l'original. Cet article explique très visuellement et intuitivement ce processus de quantisation : https://newsletter.maartengrootendorst.com/p/a-visual-guide-to-quantization Guillaume continue de partager ses aventures avec le framework LangChain4j. 
Comment effectuer de la classification de texte : https://glaforge.dev/posts/2024/07/11/text-classification-with-gemini-and-langchain4j/ en utilisant la classe TextClassification de LangChain4j, qui utilise une approche basée sur les vector embeddings pour comparer des textes similaires en utilisant du few-shot prompting, sous différentes variantes, dans cet autre article : https://glaforge.dev/posts/2024/07/30/sentiment-analysis-with-few-shots-prompting/ et aussi comment faire du multimodal avec LangChain4j (avec le modèle Gemini) pour analyser des textes, des images, mais également des vidéos, du contenu audio, ou bien des fichiers PDFs : https://glaforge.dev/posts/2024/07/25/analyzing-videos-audios-and-pdfs-with-gemini-in-langchain4j/ Pour faire varier la prédictibilité ou la créativité des LLMs, certains hyperparamètres peuvent être ajustés, comme la température, le top-k et le top-p. Mais est-ce que vous savez vraiment comment fonctionnent ces paramètres ? Deux articles très clairs et intuitifs expliquent leur fonctionnement : https://medium.com/google-cloud/is-a-zero-temperature-deterministic-c4a7faef4d20 https://medium.com/google-cloud/beyond-temperature-tuning-llm-output-with-top-k-and-top-p–24c2de5c3b16 la tempoerature va ecraser la probabilite du prochain token mais il reste des variables: approximnation des calculs flottants, stacks differentes effectuants ces choix differemment, que faire en cas d'egalité de probabilité entre deux tokens mais il y a d'atures apporoches de configuiration des reaction du LLM: top-k (qui evite les tokens peu frequents), top-p pour avoir les n des tokens qui totalient p% des probabilités temperature d'abord puis top-k puis top-p explique quoi utiliser quand OSI propose une definition de l'IA open source https://www.technologyreview.com/2024/08/22/1097224/we-finally-have-a-definition-for-open-source-ai/ gros debats ces derniers mois utilisable pour tous usages sans besoin de permission chercheurs peuvent inspecter les components et etudier comment le system fonctionne systeme modifiable pour tout objectif y compris chager son comportement et paratger avec d'autres avec ou sans modification quelque soit l'usage Definit des niveaux de transparence (donnees d'entranement, code source, poids) Une longue rétrospective de PostgreSQL a des volumes de malades et les problèmes de lock https://ardentperf.com/2024/03/03/postgres-indexes-partitioning-and-lwlocklockmanager-scalability/ un article pour vous rassurer que vous n'aurez probablement jamais le problème histoire sous forme de post mortem des conseils pour éviter ces falaises Outillage Un premier coup d'oeil à la future notation déclarative de Gradle https://blog.gradle.org/declarative-gradle-first-eap un article qui explique à quoi ressemble cette nouvelle syntaxe déclarative de Gradle (en plus de Groovy et Kotlin) Quelques vidéos montrent le support dans Android Studio, pour le moment, ainsi que dans un outil expérimental, en attendant le support dans tous les IDEs L'idée est d'éviter le scripting et d'avoir vraiment qu'une description de son build Cela devrait améliorer la prise en charge de Gradle dans les IDEs et permettre d'avoir de la complétion rapide, etc c'est moi on on a Maven là? Support de Firefox dans Puppeteer https://hacks.mozilla.org/2024/08/puppeteer-support-for-firefox/ Puppeteer, la bibliothèque d'automatisation de navigateur, supporte désormais officiellement Firefox dès la version 23. 
Cette avancée permet aux développeurs d'écrire des scripts d'automatisation et d'effectuer des tests de bout en bout sur Chrome et Firefox de manière interchangeable. L'intégration de Firefox dans Puppeteer repose sur WebDriver BiDi, un protocole inter-navigateurs en cours de standardisation au W3C. WebDriver BiDi facilite la prise en charge de plusieurs navigateurs et ouvre la voie à une automatisation plus simple et plus efficace. Les principales fonctionnalités de Puppeteer, telles que la capture de journaux, l'émulation de périphériques, l'interception réseau et le préchargement de scripts, sont désormais disponibles pour Firefox. Mozilla considère WebDriver BiDi comme une étape importante vers une meilleure expérience de test inter-navigateurs. La prise en charge expérimentale de CDP (Chrome DevTools Protocol) dans Firefox sera supprimée fin 2024 au profit de WebDriver BiDi. Bien que Firefox soit officiellement pris en charge, certaines API restent non prises en charge et feront l'objet de travaux futurs. Guillaume a créé une annotation @Retry pour JUnit 5, pour retenter l'exécution d'un test qui est “flaky” https://glaforge.dev/posts/2024/09/01/a-retryable-junit–5-extension/ Guillaume n'avait pas trouvé d'extension par défaut dans JUnit 5 pour remplacer les Retry rules de JUnit 4 Mais sur les réseaux sociaux, une discussion intéressante s'ensuit avec des liens sur des extensions qui implémentent cette approche Comme JUnit Pioneer qui propose plein d'extensions utiles https://junit-pioneer.org/docs/retrying-test/ Ou l'extension rerunner https://github.com/artsok/rerunner-jupiter Arnaud a aussi suggéré la configuration de Maven Surefire pour relancer automatiquement les tests qui ont échoué https://maven.apache.org/surefire/maven-surefire-plugin/examples/rerun-failing-tests.html la question philosophique est: est-ce que c'est tolerable les tests qui ecouent de façon intermitente Architecture Un ancien fan de GraphQL en a fini avec la technologie GraphQL et réfléchit aux alternatives https://bessey.dev/blog/2024/05/24/why-im-over-graphql/ Problèmes de GraphQL: Sécurité: Attaques d'autorisation Difficulté de limitation de débit Analyse de requêtes malveillantes Performance: Problème N+1 (récupération de données et autorisation) Impact sur la mémoire lors de l'analyse de requêtes invalides Complexité accrue: Couplage entre logique métier et couche de transport Difficulté de maintenance et de tests Solutions envisagées: Adoption d'API REST conformes à OpenAPI 3.0+ Meilleure documentation et sécurité des types Outils pour générer du code client/serveur typé Deux approches de mise en œuvre d'OpenAPI: “Implementation first” (génération de la spécification à partir du code) “Specification first” (génération du code à partir de la spécification) retour interessant de quelqu'un qui n'utilise pas GraphQL au quotidien. C'était des problemes qui devaient etre corrigés avec la maturité de l'ecosysteme et des outils mais ca a montré ces limites pour cette personne. Prensentation de Grace Hoper en 1980 sur le future des ordinateurs. 
https://youtu.be/AW7ZHpKuqZg?si=w_o5_DtqllVTYZwt c'est fou la modernité de ce qu'elle décrit Des problèmes qu'on a encore aujourd'hui positive leadership Elle décrit l'avantage de systèmes fait de plusieurs ordinateurs récemment declassifié Leader election avec les conditional writes sur les buckets S3/GCS/Azure https://www.morling.dev/blog/leader-election-with-s3-conditional-writes/ L'élection de leader est le processus de choisir un nœud parmi plusieurs pour effectuer une tâche. Traditionnellement, l'élection de leader se fait avec un service de verrouillage distribué comme ZooKeeper. Amazon S3 a récemment ajouté le support des écritures conditionnelles, ce qui permet l'élection de leader sans service séparé. L'algorithme d'élection de leader fonctionne en faisant concourir les nœuds pour créer un fichier de verrouillage dans S3. Le fichier de verrouillage inclut un numéro d'époque, qui est incrémenté à chaque fois qu'un nouveau leader est élu. Les nœuds peuvent déterminer s'ils sont le leader en listant les fichiers de verrouillage et en vérifiant le numéro d'époque. attention il peut y avoir plusieurs leaders élus (horloges qui ont dérivé) donc c'est à gérer aussi Méthodologies Guillaume Laforge interviewé par Sfeir, où il parle de l'importance de la curiosité, du partage, de l'importance de la qualité du code, et parsemé de quelques photos des Cast Codeurs ! https://www.sfeir.dev/success-story/guillaume-laforge-maestro-de-java-et-esthete-du-code-propre/ Sécurité Comment crowdstrike met a genoux windows et de nombreuses entreprises https://next.ink/144464/crowdstrike-donne-des-details-techniques-sur-son-fiasco/ l'incident vient de la mise à jour de la configuration de Falcon l'EDR de crowdstrike https://www.crowdstrike.com/blog/falcon-update-for-windows-hosts-technical-details/ qu'est ce qu'un EDR? Un système Endpoint Detection and Response a pour but de surveiller votre machine ( access réseaux, logs, …) pour detecter des usages non habituels. Cet espion doit interagir avec les couches basses du système (réseau, sockets, logs systems) et se greffe donc au niveau du noyau du système d'exploitation. Il remonte les informations en live à une plateforme qui peut ensuite adapter les réponse en live si l'incident a duré moins de 1h30 coté crowdstrike plus de 8 millions de machines se sont retrouvées hors service bloquées sur le Blue Screen Of Death selon Microsoft https://blogs.microsoft.com/blog/2024/07/20/helping-our-customers-through-the-crowdstrike-outage/ cela n'est pas la première fois et était déjà arrivé il y a quelques mois sur Linux. Comme il s'agissait d'une incompatibilité de kernel il avait été moins important car les services ITs gèrent mieux ces problèmes sous Linux https://stackdiary.com/crowdstrike-took-down-debian-and-rocky-linux-a-few-months-ago-and-no-one-noticed/ Les benchmarks CIS, un pilier pour la sécurité de nos environnements cloud, et pas que ! (Katia HIMEUR TALHI) https://blog.cockpitio.com/security/cis-benchmarks/ Le CIS est un organisme à but non lucratif qui élabore des normes pour améliorer la cybersécurité. Les référentiels CIS sont un ensemble de recommandations et de bonnes pratiques pour sécuriser les systèmes informatiques. Ils peuvent être utilisés pour renforcer la sécurité, se conformer aux réglementations et normaliser les pratiques. 
Law, society and organization
Microsoft signs an agreement with OVHcloud so that they drop their antitrust complaint https://www.politico.eu/article/microsoft-signs-antitrust-truce-with-ovhcloud/
The complaint had been filed in Europe in the summer of 2021. The agreement lets customers deploy Microsoft solutions more easily on the cloud provider of their choice; before it, Microsoft's licensing made running its solutions elsewhere more expensive and uncompetitive compared to Microsoft's own cloud.

Elasticsearch and Kibana are open source again, by adding the AGPL license alongside their other existing licenses https://www.elastic.co/fr/blog/elasticsearch-is-open-source-again
The market of three years ago has changed: AWS is now a good partner, and the confusion between Elasticsearch and AWS's product has been cleared up, hence the return to open source via the AGPL (Affero GPL). Elastic never stopped believing in open source, according to its founder Shay Banon. The switch to the AGPL is an additional option, not a replacement of any of the existing licenses.
And right afterwards, Elastic announced disappointing results that sent the stock down 25% https://siliconangle.com/2024/08/29/elastic-shares-plunge-25-lower-revenue-projections-amid-slower-customer-commitments/ https://unrollnow.com/status/1832187019235397785
See https://www.elastic.co/pricing/faq/licensing for a summary of Elastic's licenses.

Tools of the episode
MailMate, an email client with Markdown support that handles large volumes of email https://medium.com/@nicfab/mailmate-a-powerful-client-email-for-macos-markdown-integrated-email-composition-e218fe2accf3
Emmanuel uses it for his secondary mailboxes: a bit slow to start (synchronization) but fast otherwise; virtual mailboxes (query-based); SpamSieve; macOS only, I believe.
Trippy, a network analyzer https://github.com/fujiapple852/trippy
It combines traceroute and ping in a single CLI.

Conferences
The list of conferences comes from Developers Conferences Agenda/List by Aurélie Vache and contributors:
17 September 2024: We Love Speed - Nantes (France)
17–18 September 2024: Agile en Seine 2024 - Issy-les-Moulineaux (France)
19–20 September 2024: API Platform Conference - Lille (France) & Online
20–21 September 2024: Toulouse Game Dev - Toulouse (France)
25–26 September 2024: PyData Paris - Paris (France)
26 September 2024: Agile Tour Sophia-Antipolis 2024 - Biot (France)
2–4 October 2024: Devoxx Morocco - Marrakech (Morocco)
3 October 2024: VMUG Montpellier - Montpellier (France)
7–11 October 2024: Devoxx Belgium - Antwerp (Belgium)
8 October 2024: Red Hat Summit: Connect 2024 - Paris (France)
10 October 2024: Cloud Nord - Lille (France)
10–11 October 2024: Volcamp - Clermont-Ferrand (France)
10–11 October 2024: Forum PHP - Marne-la-Vallée (France)
11–12 October 2024: SecSea2k24 - La Ciotat (France)
15–16 October 2024: Malt Tech Days 2024 - Paris (France)
16 October 2024: DotPy - Paris (France)
16–17 October 2024: NoCode Summit 2024 - Paris (France)
17–18 October 2024: DevFest Nantes - Nantes (France)
17–18 October 2024: DotAI - Paris (France)
30–31 October 2024: Agile Tour Nantais 2024 - Nantes (France)
30–31 October 2024: Agile Tour Bordeaux 2024 - Bordeaux (France)
31 October–3 November 2024: PyCon.FR - Strasbourg (France)
6 November 2024: Master Dev De France - Paris (France)
7 November 2024: DevFest Toulouse - Toulouse (France)
8 November 2024: BDX I/O - Bordeaux (France)
13–14 November 2024: Agile Tour Rennes 2024 - Rennes (France)
16–17 November 2024: Capitole Du Libre - Toulouse (France)
20–22 November 2024: Agile Grenoble 2024 - Grenoble (France)
21 November 2024: DevFest Strasbourg - Strasbourg (France)
21 November 2024: Codeurs en Seine - Rouen (France)
27–28 November 2024: Cloud Expo Europe - Paris (France)
28 November 2024: Who Run The Tech ? - Rennes (France)
2–3 December 2024: Tech Rocks Summit - Paris (France)
3 December 2024: Generation AI - Paris (France)
3–5 December 2024: APIdays Paris - Paris (France)
4–5 December 2024: DevOpsRex - Paris (France)
4–5 December 2024: Open Source Experience - Paris (France)
5 December 2024: GraphQL Day Europe - Paris (France)
6 December 2024: DevFest Dijon - Dijon (France)
22–25 January 2025: SnowCamp 2025 - Grenoble (France)
30 January 2025: DevOps D-Day #9 - Marseille (France)
6–7 February 2025: Touraine Tech - Tours (France)
3 April 2025: DotJS - Paris (France)
16–18 April 2025: Devoxx France - Paris (France)

Contact us
To react to this episode, come and discuss on the Google group https://groups.google.com/group/lescastcodeurs
Contact us on Twitter https://twitter.com/lescastcodeurs
Send in a crowdcast or a crowdquestion
Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs
All the episodes and all the info on https://lescastcodeurs.com/

Podlodka Podcast
Podlodka 390 – JVM Internals

Podlodka Podcast

Play Episode Listen Later Sep 16, 2024 114:20


We have already discussed Java, Kotlin, Scala, and even Clojure in past episodes, but now it is time to dig into the foundation of these languages' popularity: the Java Virtual Machine. Who could explain the internals of the JVM better than someone who built one of its implementations himself? In this episode, together with Nikita Lipsky, initiator of the Excelsior JET project (a JVM with an AOT compiler), we dive into the anatomy of the JVM, work through its specification and its various implementations, and discuss optimization details, current problems, and trends in the JVM ecosystem. We also look forward to your likes, reposts, and comments in messengers and on social networks!
Telegram chat: https://t.me/podlodka
Telegram channel: https://t.me/podlodkanews
Facebook page: www.facebook.com/podlodkacast/
Twitter account: https://twitter.com/PodlodkaPodcast
Hosts in this episode: Evgeny Katella, Katya Petrova, Stas Tsyganov, Egor Tolstoy
Useful links:
Nikita Lipsky – Escaping Jar Hell with Jigsaw Layers https://www.youtube.com/watch?v=KVdZyj7_KVM
GeeCON Prague 2019: Nikita Lipsky - Escaping The Jar Hell With Jigsaw Layers https://www.youtube.com/watch?v=UXlASXkMeN0
JVM Anatomy 101 https://www.youtube.com/watch?v=BeMi8K0AFAc
Nikita Lipsky – Java bytecode verification: when, how, or maybe just turn it off? https://www.youtube.com/watch?v=-OocG7tFIOQ
Nikita Lipsky – Java 9 modules. Why not OSGi? https://www.youtube.com/watch?v=E3A6Z02TIjg&t=1374s
The full list of Nikita's other talks https://habr.com/ru/companies/jugru/articles/329728/

telegram java kotlin jvm java virtual machine osgi
airhacks.fm podcast with adam bien
Project Valhalla: Value Types, Nullability and Float16

airhacks.fm podcast with adam bien

Play Episode Listen Later Jul 14, 2024 63:01


An airhacks.fm conversation with Paul Sandoz (@paulsandoz) about: Project Valhalla's origins and goals, value types vs reference types, heap and stack flattening optimizations, Value objects and data transfer objects, nullability constraints, enums and values, implicit constructability, potential performance gains, challenges in retrofitting value types into Java, implications for numeric types and operator overloading, connections to Vector API and Project Panama, impact on JVM languages like Scala, timeline and development process for Project Valhalla, potential for improving Java's competitiveness in areas like machine learning, challenges in growing the Java language in an extensible way, considerations around backwards compatibility, Paul Sandoz's role in the project and future directions for Java Paul Sandoz on twitter: @paulsandoz
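As a rough illustration of the value-versus-reference distinction discussed in the episode, the record below compiles and runs on current JDKs, while the comment shows the value modifier proposed by Project Valhalla (JEP 401, still preview/early access, so the exact syntax may change). This is a hedged sketch, not the project's final design.

// A small, identity-free carrier of the kind Project Valhalla targets.
// On current JDKs this is an ordinary record; under Valhalla the proposed `value`
// modifier (e.g. `value record Complex(double re, double im) {}`) would additionally
// declare that instances have no identity, allowing the JVM to flatten them onto the
// stack or into arrays instead of heap-allocating them behind a pointer.
public record Complex(double re, double im) {

    public Complex plus(Complex other) {
        return new Complex(re + other.re, im + other.im);
    }

    public static void main(String[] args) {
        Complex a = new Complex(1.0, 2.0);
        Complex b = new Complex(3.0, 4.0);
        System.out.println(a.plus(b)); // prints Complex[re=4.0, im=6.0]
    }
}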

The Unruly Muse
Regret

The Unruly Muse

Play Episode Listen Later Jun 21, 2024 37:47


Song 1: You Told, composed and performed by John V. Modaff.
Poem 1: “The Wicked Stepmother Tells Her Story,” published in bosque journal and later in We Were Meant to Carry Water as “Frail Cradle.” https://tinacarlsonpoetry.com/
Fiction: “Cautionary Tale” by Lynn C. Miller, author of, most recently, The Lost Archive: Stories (2023), from a novel in progress. www.lynncmiller.com
Feed the Cat Break: “The Rear View” by JVM.
Poem 2: “I Could Taste It” by Marjorie Saiser, whose book The Track the Whales Make won the High Plains Award. Saiser's Losing the Ring in the River (University of New Mexico Press) won the Willa Award for Poetry in 2014. www.poetmarge.com
Song 2: No Fly Zone, composed and sung by JVM
Episode artwork by Lynda Miller
Show theme and incidental music by John V. Modaff, BMI
The Unruly Muse is recorded in Albuquerque, NM and Morehead, KY
Produced at The Creek Studio
NEXT UP: Episode 39, “What is Healing?”
Thank you to our listeners all over the world. Please tell a friend about the podcast! Lynn & John

Syntax - Tasty Web Development Treats
784: Logging × Blogging × Testing × Freelancing

Syntax - Tasty Web Development Treats

Play Episode Listen Later Jun 19, 2024 55:36


In this Potluck episode, Scott and Wes tackle listener questions on modern blogging, website environmental impact, and using LangChain with LLMs. They also cover CSS hyphens, unit vs. integration testing, and balancing web development with new parenthood.

Show Notes
00:00 Welcome to Syntax!
00:13 How to submit a question for future episodes. Potluck Questions.
02:46 Brought to you by Sentry.io.
03:21 Logging from a site.
08:39 Blogging in 2024.
11:49 Sharing website environmental data. Green Web Foundation. Website Carbon Calculator. Syntax Site Results. Scott's Site Results.
17:38 Using LangChain when working with LLMs.
21:03 CSS Hyphens and Overflow-Wrap. Hyphens Browser Compatibility. Overflow-Wrap.
25:52 Similarities between WASM, JVM and .NET.
27:25 Writing unit testing and integration testing.
32:00 How can new parents stay current on web development trends?
34:47 Working globally as a freelance developer.
37:26 Scott's audio setup. Why audio interfaces have DSP built in. ChaseBliss Pedal.
43:04 UI libraries for synth/audio plugins.
44:02 CSS module scripts. CSS Modules in CSS Module Scripts.
48:39 Sick Picks + Shameless Plugs.

Sick Picks
Scott: Deep Cover Podcast.
Wes: Pressure Washer Surface Cleaner.

Shameless Plugs
Wes: Syntax.fm/videos.

Hit us up on Socials!
Syntax: X Instagram Tiktok LinkedIn Threads
Wes: X Instagram Tiktok LinkedIn Threads
Scott: X Instagram Tiktok LinkedIn Threads
Randy: X Instagram YouTube Threads

The Ride with JMV Podcast
Best Of JMV 4-17-24

The Ride with JMV Podcast

Play Episode Listen Later Apr 17, 2024 81:43


00:00 - 15:39 - JMV is back and he kicks off the show by talking to Aaron Nesmith of the Indiana Pacers! Aaron and JMV talk about the upcoming series against the Bucks, and the Pacers' success against Milwaukee this year. They also talk about what they expect the environment at Gainbridge will be like when the Bucks come to town.

15:40 - 28:51 - Lin Dunn, the GM of the Indiana Fever, joins JMV! Lin and JMV talk about the whirlwind since the Fever drafted Caitlin Clark first overall. They talk about the incredible skill that Clark will bring to the Fever, specifically her passing. They also discuss the increased attention on the franchise, and what that could mean for women's basketball. JMV and Lin dive into the increased expectations for the Fever with Clark on the roster.

28:52 - 51:58 - Jeremiah Johnson of Bally Sports Indiana joins the show! Jeremiah and JMV discuss the upcoming Pacers-Bucks playoff series! They talk about the games the Pacers played against Milwaukee in the regular season, and whether the Pacers can take anything away from those games. They also talk about the health of Giannis, and when he might make his series debut. Jeremiah also discusses the Indiana Fever taking Caitlin Clark, and the impact she could have on the franchise and the city.

51:59 - 1:21:43 - Kevin Bowen from the morning show The Wake Up Call w/KB & Andy joins the show! KB and JMV talk about Caitlin Clark's introductory press conference! They also preview the Pacers-Bucks series, and the impact the injury to Giannis could have. They also discuss how the Pacers match up against Milwaukee, especially if the series goes a full seven games. Kevin and JMV also discuss the Colts, and what they'll do with the 15th overall pick in the NFL Draft!

See omnystudio.com/listener for privacy information.

The Ride with JMV Podcast
Full Show: Aaron Nesmith, Lin Dunn, Jeremiah Johnson and Kevin Bowen Join!

The Ride with JMV Podcast

Play Episode Listen Later Apr 17, 2024 130:36


00:00 – 28:03 – JMV is back and he kicks off the show by talking to Aaron Nesmith of the Indiana Pacers! Aaron and JMV talk about the upcoming series against the Bucks, and the Pacers' success against Milwaukee this year. They also talk about what they expect the environment at Gainbridge will be like when the Bucks come to town. JMV wraps up the conversation with Aaron, and then talks about Caitlin Clark, who was introduced as a member of the Fever today. He also discusses the NFL Draft, and who the Colts might take.

28:04 – 38:46 – JMV keeps talking about the NFL Draft, and what Chris Ballard and the Colts should do with the 15th overall pick.

38:47 – 42:32 – JMV wraps up the first hour of the show!

42:33 – 1:08:10 – Jeremiah Johnson of Bally Sports Indiana joins the show! Jeremiah and JMV discuss the upcoming Pacers-Bucks playoff series! They talk about the games the Pacers played against Milwaukee in the regular season, and whether the Pacers can take anything away from those games. They also talk about the health of Giannis, and when he might make his series debut. Jeremiah also discusses the Indiana Fever taking Caitlin Clark, and the impact she could have on the franchise and the city.

1:08:11 – 1:23:33 – Lin Dunn, the GM of the Indiana Fever, joins JMV! Lin and JMV talk about the whirlwind since the Fever drafted Caitlin Clark first overall. They talk about the incredible skill that Clark will bring to the Fever, specifically her passing. They also discuss the increased attention on the franchise, and what that could mean for women's basketball. JMV and Lin dive into the increased expectations for the Fever with Clark on the roster.

1:23:34 – 1:56:54 – Kevin Bowen from the morning show The Wake Up Call w/KB & Andy joins the show! KB and JMV talk about Caitlin Clark's introductory press conference! They also preview the Pacers-Bucks series, and the impact the injury to Giannis could have. They also discuss how the Pacers match up against Milwaukee, especially if the series goes a full seven games. Kevin and JMV also discuss the Colts, and what they'll do with the 15th overall pick in the NFL Draft!

1:56:55 – 2:04:36 – The show keeps rolling with John talking about whether Caitlin Clark will be involved in the Indy 500 in any capacity.

2:04:37 – 2:10:36 – JMV wraps up another edition of the show!

See omnystudio.com/listener for privacy information.