POPULARITY
A big episode covering a wide range of topics: Java, Scala, Micronaut, Node.js, AI and developer skills, sampling in LLMs, DTOs, vibe coding, changes at Broadcom and Red Hat, plus several news items about open source licenses. Recorded on May 7, 2025. Download the episode: LesCastCodeurs-Episode-325.mp3, or watch it on YouTube.

News

Languages

- For JavaOne and the launch of Java 24, Oracle launched a new site with video resources for learning the language: https://learn.java/
  - aimed mostly at beginners and teachers
  - also covers the syntax, including recent additions such as records and pattern matching
  - not the trendiest site in the world
- Martin Odersky shares a long article on the state of the Scala ecosystem and the evolution of the language: https://www.scala-lang.org/blog/2025/03/24/evolving-scala.html
  - Stability and the need to evolve: Scala holds its position (around 14th worldwide) on solid technical foundations, but must keep evolving to stay relevant against the competition.
  - Priorities: the evolution focuses on improving the safety/usability pairing, polishing the language (removing "rough edges") and simplifying it for beginners.
  - Continuous innovation: freezing features is ruled out; innovation is key to Scala's value. The language must remain general-purpose and not tie itself to a specific framework.
  - Challenges and progress: tooling (IDEs, build tools such as sbt, scala-cli, Mill) and the learnability of the ecosystem are points of attention, with improvements under way (an educational partnership, simpler platforms).
- Even faster strings! https://inside.java/2025/05/01/strings-just-got-faster/
  - In JDK 25, String::hashCode has been made mostly constant-foldable. If strings are used as keys in a static, immutable Map, significant performance gains are likely.
  - The improvement relies on the JDK-internal @Stable annotation applied to the private String.hash field. It lets the VM read the hash value once and treat it as a constant as long as it is not the default value (zero).
  - As a result, a String::hashCode call can be replaced by the known hash value, optimizing lookups in immutable Maps (see the sketch at the end of this section).
  - One edge case: if the string's hash code is zero, the optimization does not kick in (for example, for the empty string "").
  - Although @Stable is internal to the JDK, a new JEP (JEP 502: Stable Values (Preview)) is under way to let users benefit indirectly from similar functionality.
- AtomicHash, a Java implementation of a HashMap that is thread-safe, atomic and non-blocking: https://github.com/arxila/atomichash
  - implemented as an immutable version of a concurrent hash trie
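As a quick illustration (not from the article), a minimal sketch of the lookup pattern the JDK 25 change speeds up; the class name and map contents are made up, and no code change is needed to benefit:

```java
import java.util.Map;

public class HttpStatusNames {

    // A static, immutable map with String keys: exactly the shape the
    // JDK 25 change optimizes. String.hash is @Stable, so once computed,
    // the JIT can treat each key's hash code as a constant.
    private static final Map<String, Integer> STATUS = Map.of(
            "OK", 200,
            "NOT_FOUND", 404,
            "INTERNAL_SERVER_ERROR", 500);

    static int code(String name) {
        // With a constant argument, the whole lookup can constant-fold:
        // "OK".hashCode() is read once, then treated as a known value.
        return STATUS.get(name);
    }

    public static void main(String[] args) {
        System.out.println(code("OK")); // 200
    }
}
```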
Libraries

- Micronaut 4.8.0 released: https://micronaut.io/2025/04/01/micronaut-framework-4-8-0-released/
  - BOM (Bill of Materials) update: version 4.8.0 updates the Micronaut platform BOM.
  - Micronaut Core improvements: Micronaut SourceGen is now used internally for metadata and bytecode expression generation, and many improvements landed in Micronaut SourceGen itself.
  - Dependency injection tracing to ease debugging at startup and at bean creation time.
  - New definitionType member in the @Client annotation to make it easier to share interfaces between client and server.
  - Merge support in Bean Mappers via the @Mapping annotation.
  - New liveness probe detecting deadlocked threads via ThreadMXBean.
  - Improved Kubernetes integration: the Kubernetes Java client is updated to 22.0.1, and a new Micronaut Kubernetes Client OpenAPI module offers an alternative to the official client with fewer dependencies, unified configuration, filter support and Native Image compatibility.
  - New server runtime based on the JDK's built-in HTTP server, allowing applications with no external server dependencies (see the sketch after this list).
  - In Micronaut Micrometer, a new module to instrument data sources (traces and metrics), plus a condition member in the @MetricOptions annotation to toggle metrics via an expression.
  - Support for Consul watches in Micronaut Discovery Client to detect distributed configuration changes.
  - Source code generation from a JSON schema via the build plugins (Gradle and Maven).
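For context, the JDK's built-in server in question is the com.sun.net.httpserver API; a minimal standalone sketch with no Micronaut specifics (the new runtime presumably wraps something like this):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class JdkHttpServerDemo {
    public static void main(String[] args) throws IOException {
        // The JDK ships a small HTTP server: no external dependencies needed.
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/hello", exchange -> {
            byte[] body = "Hello from the JDK's built-in server".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start(); // serves http://localhost:8080/hello
    }
}
```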
Web

- Node v24.0.0 becomes the Current release: https://nodejs.org/en/blog/release/v24.0.0
  - V8 engine updated to 13.6: new JavaScript features such as Float16Array, explicit resource management (using), RegExp.escape, WebAssembly Memory64 and Error.isError.
  - npm 11 included: better performance, security and compatibility with modern JavaScript packages.
  - Compiler change on Windows: MSVC is dropped in favor of ClangCL for building Node.js on Windows.
  - AsyncLocalStorage now uses AsyncContextFrame by default, for more efficient async context handling.
  - URLPattern available globally: no more explicit import needed to match URLs.
  - Permission model improvements: the experimental --experimental-permission flag becomes --permission, signaling increased stability of the feature.
  - Test runner improvements: subtests are now awaited automatically, simplifying test writing and reducing errors from unhandled promises.
  - Undici 7 integrated: better HTTP client performance and broader support for modern HTTP features.
  - Deprecations and removals: url.parse() deprecated in favor of the WHATWG URL API; tls.createSecurePair removed; SlowBuffer deprecated; instantiating REPL without new deprecated; using Zlib classes without new deprecated; passing args to spawn and execFile in child_process deprecated.
  - Node.js 24 is currently the "Current" release and will become an LTS release in October 2025. Testing it now to evaluate its impact on your applications is recommended.

Data and Artificial Intelligence

- Learning to code remains crucial, and AI is there to help: https://kyrylo.org/software/2025/03/27/learn-to-code-ignore-ai-then-use-ai-to-code-even-better.html
  - Learning to code remains essential despite AI. AI can assist with programming, but a solid foundation is crucial to understand and control the code.
  - That avoids dependence on AI and reduces the risk of being replaced by AI tools accessible to everyone.
  - AI is a tool, not a substitute for mastering the fundamentals.
- Great article from Anthropic trying to understand how the "thinking" of LLMs works: https://www.anthropic.com/research/tracing-thoughts-language-model
  - Black-box effect: the internal strategies of AIs (Claude) are opaque to developers and users.
  - Goal: understand the internal "reasoning" to verify capabilities and intentions.
  - Method: inspired by neuroscience, building an "AI microscope" (watching which neural circuits activate).
  - Technique: identifying internal concepts ("features") and "circuits".
  - Multilingualism: evidence of a conceptual "language of thought" shared across all languages, before translating into a particular one.
  - Planning: the ability to plan ahead (e.g., rhymes in poetry), not just word-by-word (token-by-token) generation.
  - Unfaithful reasoning: can fabricate plausible arguments ("bullshitting") to support a given conclusion.
  - Multi-step logic: combines distinct facts rather than just reciting memorized answers.
  - Hallucinations: refusal by default; it answers when "knowledge" activates, otherwise it risks hallucinating.
  - "Jailbreaks": a tension between grammatical coherence (which pushes it to continue) and safety (which should make it refuse).
  - Bottom line: the methods are limited but promising for AI transparency and reliability.
- The "S" in MCP stands for Security (or not!): https://elenacross7.medium.com/%EF%B8%8F-the-s-in-mcp-stands-for-security-91407b33ed6b
  - The MCP specification, which gives LLMs access to various tools and functions, may have been adopted a bit hastily, before it was ready security-wise.
  - The article lists four possible attack types: command injection vulnerabilities, tool poisoning attacks, silent tool redefinition, and cross-server tool shadowing.
  - For now, MCP is not secure: no authentication standard, no context encryption, no tool integrity verification.
  - Based on the InvariantLabs article: https://invariantlabs.ai/blog/mcp-security-notification-tool-poisoning-attacks
- Infinispan 15.2 released, ahead of rolling upgrades to 16.0: https://infinispan.org/blog/2025/03/27/infinispan-15-2
  - Redis JSON support + Lua scripts
  - JVM metrics can be disabled
  - New console (PatternFly 6)
  - Improved docs (metrics + logs)
  - JDK 17 minimum, JDK 24 supported
  - End of the native server (for performance reasons)
- Guillaume shows how to build an MCP HTTP Server-Sent Events server with the Java reference implementation and LangChain4j: https://glaforge.dev/posts/2025/04/04/mcp-client-and-server-with-java-mcp-sdk-and-langchain4j/
  - Written in Java, with the reference implementation that also underpins the Spring Boot integration (but is independent of Spring).
  - The MCP server is exposed as a servlet in Jetty.
  - The MCP client is built with LangChain4j's MCP module.
  - It is only semi-independent of Spring, in the sense that it depends on Reactor and its interfaces; there is a conversation on Anthropic's GitHub to find a solution, but it does not look simple.
- The fallacies behind the quote "AI won't replace you, but humans using AI will": https://platforms.substack.com/cp/161356485
  - The automation vs. augmentation fallacy: it focuses on improving existing tasks with AI instead of considering how the value of those tasks changes in a new system.
  - The productivity gains fallacy: higher productivity does not always translate into more value for workers, because the value created can be captured elsewhere in the system.
  - The static jobs fallacy: jobs are organizational constructs that AI can redefine, making traditional roles obsolete.
  - The "me vs. someone using AI" competition fallacy: competition itself shifts when AI changes a sector's fundamental constraints, making existing skills less relevant.
  - The workflow continuity fallacy: AI can trigger a complete reimagining of workflows, eliminating the need for certain skills.
  - The neutral tools fallacy: AI tools are not neutral; they can redistribute organizational power by changing how decisions are made and executed.
  - The stable salary fallacy: keeping a job does not guarantee a stable salary, because the value of the work can shrink as AI capabilities grow.
  - The stable company fallacy: integrating AI requires restructuring the company; it does not happen in an organizational vacuum.
- Understanding "sampling" in LLMs: https://rentry.co/samplers
  - Explains why LLMs use tokens.
  - The different sampling methods, i.e., ways of choosing the next token (see the sketch after this list).
  - Hyperparameters such as temperature and top-p, and how they interact.
  - Tokenization algorithms such as Byte Pair Encoding and SentencePiece.
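A minimal sketch of two of those samplers, temperature and top-p (nucleus) sampling, over a toy logit vector; the numbers are invented:

```java
import java.util.Random;

public class Sampler {

    // Temperature rescales logits before softmax: low T sharpens the
    // distribution (more deterministic), high T flattens it (more random).
    static double[] softmax(double[] logits, double temperature) {
        double[] p = new double[logits.length];
        double max = Double.NEGATIVE_INFINITY, sum = 0;
        for (double l : logits) max = Math.max(max, l);
        for (int i = 0; i < logits.length; i++) {
            p[i] = Math.exp((logits[i] - max) / temperature);
            sum += p[i];
        }
        for (int i = 0; i < p.length; i++) p[i] /= sum;
        return p;
    }

    // Top-p keeps the smallest set of tokens whose cumulative probability
    // reaches p, then samples only among those.
    static int sampleTopP(double[] probs, double topP, Random rnd) {
        Integer[] idx = new Integer[probs.length];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        java.util.Arrays.sort(idx, (a, b) -> Double.compare(probs[b], probs[a]));
        double cum = 0;
        int cutoff = 0;
        for (; cutoff < idx.length && cum < topP; cutoff++) cum += probs[idx[cutoff]];
        double r = rnd.nextDouble() * cum, kept = 0;
        for (int i = 0; i < cutoff; i++) {
            kept += probs[idx[i]];
            if (r <= kept) return idx[i];
        }
        return idx[cutoff - 1];
    }

    public static void main(String[] args) {
        double[] logits = {2.0, 1.0, 0.5, -1.0};  // toy vocabulary of 4 tokens
        double[] probs = softmax(logits, 0.8);    // temperature = 0.8
        int token = sampleTopP(probs, 0.9, new Random(42)); // top-p = 0.9
        System.out.println("sampled token index: " + token);
    }
}
```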
- One fewer… OpenAI is set to buy Windsurf for $3 billion: https://www.bloomberg.com/news/articles/2025-05-06/openai-reaches-agreement-to-buy-startup-windsurf-for-3-billion
  - The deal is not yet finalized.
  - Windsurf was valued at $1.25 billion last year; OpenAI recently raised $40 billion, bringing its own valuation to $300 billion.
  - The goal for OpenAI is to enter the coding-assistant market, where it is absent today.
- Docker Desktop gets into AI…? A new feature in Docker Desktop 4.4 on macOS: Docker Model Runner: https://dev.to/docker/run-genai-models-locally-with-docker-model-runner-5elb
  - Lets you run models natively, locally (https://docs.docker.com/model-runner/), but also MCP servers (https://docs.docker.com/ai/mcp-catalog-and-toolkit/).

Tooling

- JetBrains defends deleting negative reviews of its AI assistant: https://devclass.com/2025/04/30/jetbrains-defends-removal-of-negative-reviews-for-unpopular-ai-assistant/?td=rt-3a
  - JetBrains AI Assistant, launched in July 2023, has been downloaded more than 22 million times but is rated only 2.3 out of 5.
  - Users noticed that some negative reviews were being deleted, which triggered a backlash on social media.
  - A JetBrains employee explained that the reviews were removed either because they mentioned problems that had already been fixed, or because they violated the policy on "profanity, etc."
  - The company admitted it could have handled the situation better; a representative stated: "Deleting multiple reviews at once without any notice looked suspicious. We should have at least posted a notice and provided more details to the authors."
  - Problems reported by users include: limited support for third-party model providers, noticeable latency, frequent slowdowns, core features locked to JetBrains cloud services, an inconsistent user experience and insufficient documentation.
  - A common complaint is that the AI Assistant installs itself without permission; a Reddit user called it "an annoying plugin that self-heals/reinstalls itself like a phoenix".
  - JetBrains recently introduced a free tier and a new AI agent called Junie, meant to run alongside the AI Assistant, probably in response to the competition between vendors; but it is more expensive to run.
  - The company has committed to exploring new approaches for handling major updates, and is considering per-version reviews or marking reviews as "Resolved" with links to the corresponding issues instead of deleting them.
  - Unlike competitors such as Microsoft, AWS or Google, JetBrains only sells developer tools and services and has no separate cloud business to fall back on.
- Make the images in your READMEs and Markdown files work with GitHub's dark mode: https://github.blog/developer-skills/github/how-to-make-your-images-in-markdown-on-github-adjust-for-dark-mode-and-light-mode/
  - Only a few lines of pure HTML are needed (see the snippet below).
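The few lines in question are a <picture> element with a prefers-color-scheme media query, as the GitHub article describes; the file names here are placeholders:

```html
<picture>
  <!-- Shown when the viewer's theme is dark -->
  <source media="(prefers-color-scheme: dark)" srcset="diagram-dark.png">
  <!-- Fallback, shown in light mode and in clients without <picture> support -->
  <img alt="Architecture diagram" src="diagram-light.png">
</picture>
```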
Architecture

- So, DTOs: good or bad? https://codeopinion.com/dtos-mapping-the-good-the-bad-and-the-excessive/
  - What DTOs are for: transferring data between the layers of an application, often mapping data between different representations (for example, between the database and the user interface).
  - Frequent overuse: the article points out that DTOs are often used excessively, notably to build HTTP APIs that merely mirror database entities, missing the opportunity to compose richer data.
  - Real value: the real value of DTOs lies in managing coupling between layers and composing data from multiple sources into shapes optimized for specific use cases.
  - Decoupling: use DTOs to decouple internal data models from external contracts (such as APIs), allowing independent evolution and versioning.
  - CQRS example: in CQRS (Command Query Responsibility Segregation), query responses act as DTOs specifically tailored to the UI's needs, possibly including data from several sources.
  - Protecting internal data: DTOs help distinguish and protect internal (private) data models from external (public) changes.
  - Avoiding excess: the author warns against excessive mapping layers (mapping one DTO to another DTO) that add no value.
  - Targeted creation: only create DTOs when they solve concrete problems, such as managing coupling or enabling data composition (a small sketch follows).
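A minimal Java sketch of that "compose, don't mirror" advice, with hypothetical internal models and a DTO shaped for one screen:

```java
import java.time.LocalDate;

public class DtoExample {

    // Internal models (hypothetical): what the persistence layer knows.
    record Customer(long id, String name, String email, String passwordHash) {}
    record Order(long id, long customerId, LocalDate placedOn, double total) {}

    // The DTO: shaped for one use case, composed from both models,
    // and deliberately omitting internals such as passwordHash.
    record CustomerOrderSummary(String customerName, long orderId, double total) {}

    static CustomerOrderSummary toSummary(Customer c, Order o) {
        return new CustomerOrderSummary(c.name(), o.id(), o.total());
    }

    public static void main(String[] args) {
        var c = new Customer(1, "Ada", "ada@example.com", "s3cr3t-hash");
        var o = new Order(42, 1, LocalDate.of(2025, 5, 7), 99.90);
        System.out.println(toSummary(c, o));
    }
}
```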
Methodologies

- Even Guillaume is getting into "vibe coding": https://glaforge.dev/posts/2025/05/02/vibe-coding-an-mcp-server-with-micronaut-and-gemini/
  - According to Andrej Karpathy, vibe coding means POC-ing a prototype, a throwaway weekend app: https://x.com/karpathy/status/1886192184808149383
  - But Simon Willison objects to people conflating AI-assisted coding with vibe coding: https://simonwillison.net/2025/May/1/not-vibe-coding/
  - Guillaume had fun here building an MCP server with Micronaut, using Gemini, Google's AI.
  - Unlike Quarkus or Spring Boot, Micronaut does not yet have a module or specific support to ease building MCP servers.

Security

- A 10/10 security flaw in Tomcat: https://www.it-connect.fr/apache-tomcat-cette-faille-activement-exploitee-seulement-30-heures-apres-sa-divulgation-patchez/
  - A critical vulnerability (CVE-2025-24813) affects Apache Tomcat, allowing remote code execution.
  - It was actively exploited only 30 hours after its disclosure on March 10, 2025.
  - The attack requires no authentication and is particularly simple to execute.
  - It uses a PUT request with a base64-encoded serialized Java payload, followed by a GET request.
  - The base64 encoding bypasses most security filters.
  - Vulnerable servers use file-based session storage (a widespread configuration).
  - Affected versions: 11.0.0-M1 to 11.0.2, 10.1.0-M1 to 10.1.34, and 9.0.0.M1 to 9.0.98.
  - Recommended updates: 11.0.3+, 10.1.35+ and 9.0.99+.
  - Experts expect more sophisticated attacks in the next exploitation phases (config or JSP uploads).
- Hardening an SSH server: https://ittavern.com/ssh-server-hardening/
  - An article listing the key settings for securing an SSH server: for example, disabling password authentication, changing the port, disabling root login, and forcing SSH protocol 2.
  - Some I did not know, such as MaxStartups, which limits the number of concurrent unauthenticated connections.
  - Port knocking is a useful technique but requires a client-side approach aware of the protocol.
  - A few of these directives are sketched below.
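A hedged sketch of what some of those directives look like in /etc/ssh/sshd_config; the values are illustrative, not the article's recommendations:

```
# /etc/ssh/sshd_config (excerpt) - illustrative values
Port 2222                      # move off the default port 22
Protocol 2                     # force SSH protocol 2 only
PermitRootLogin no             # disable direct root login
PasswordAuthentication no      # keys only, no passwords
MaxStartups 10:30:60           # throttle concurrent unauthenticated connections
```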
- Oracle admits its customers' IAM identities leaked: https://www.theregister.com/2025/04/08/oracle_cloud_compromised/
  - Oracle confirmed to some customers that its public cloud was compromised, after previously denying any intrusion.
  - A hacker claimed to have breached two Oracle authentication servers and stolen about six million records, including private security keys, encrypted credentials and LDAP entries.
  - The exploited flaw is believed to be CVE-2021-35587 in Oracle Access Manager, which Oracle had not patched on its own systems.
  - The hacker created a text file in early March on login.us2.oraclecloud.com containing his email address to prove his access.
  - According to Oracle, an old server holding eight-year-old data was compromised, but one customer claims that login data as recent as 2024 was stolen.
  - Oracle faces a lawsuit in Texas over this data breach.
  - This intrusion is distinct from another attack against Oracle Health, on which the company declines to comment.
  - Oracle could face sanctions under the European GDPR, which requires notifying affected parties within 72 hours of discovering a data leak.
  - Oracle's behavior of denying and then quietly admitting the intrusion is unusual in 2025 and could lead to further class actions.
- A very popular GitHub Action compromised: https://www.stepsecurity.io/blog/harden-runner-detection-tj-actions-changed-files-action-is-compromised
  - Compromise of tj-actions/changed-files: in March 2025, a widely used GitHub Action (tj-actions/changed-files) was compromised. Modified versions of the action exposed CI/CD secrets in build logs.
  - Attack method: a compromised PAT was used to redirect several version tags to a commit containing malicious code.
  - Malicious code details: the injected code ran a base64-encoded Node.js function that downloaded a Python script. That script scanned the GitHub runner's memory for secrets (tokens, keys…) and exposed them in the logs; in some cases the data was also exfiltrated via a network request.
  - Exposure window: the compromised versions were live between March 12 and 15, 2025. Any repository, especially a public one, that used the action during that period should be considered potentially exposed.
  - Detection: the malicious activity was spotted through analysis of unusual behavior during workflow runs, such as unexpected network connections.
  - Response: GitHub removed the compromised action, which was subsequently cleaned up.
  - Potential impact: any secret appearing in logs must be considered compromised, even in private repositories, and regenerated without delay.

Law, society and organization

- Y Combinator startups are the fastest growing in the fund's history: https://www.cnbc.com/2025/03/15/y-combinator-startups-are-fastest-growing-in-fund-history-because-of-ai.html
  - Early-stage Silicon Valley companies are seeing significant growth thanks to artificial intelligence.
  - Y Combinator CEO Garry Tan says the latest cohort as a whole grew 10% per week for nine months.
  - AI lets developers automate repetitive tasks and generate code with large language models.
  - For about 25% of current YC startups, 95% of their code was written by AI.
  - This revolution lets companies grow with fewer staff; some reach $10 million in revenue with fewer than 10 employees.
  - The "growth at all costs" mindset has given way to renewed interest in profitability.
  - About 80% of the companies presented at demo day were AI-centric, with a few robotics and semiconductor startups.
  - Y Combinator invests $500,000 in startups in exchange for equity, followed by a three-month program.
- Red Hat middleware (ex-JBoss) joins IBM: https://markclittle.blogspot.com/2025/03/red-hat-middleware-moving-to-ibm.html
  - Red Hat's middleware activities (including JBoss, Quarkus, etc.) are being transferred to IBM, into the unit dedicated to data security, IAM and runtimes.
  - The change stems from Red Hat's strategic decision to focus more on hybrid cloud and artificial intelligence.
  - Mark Little explains that the transfer had become inevitable, as Red Hat had reduced its middleware investments in recent years.
  - The integration aims to strengthen innovation around Java by combining Red Hat's and IBM's efforts on the subject.
  - The middleware products will remain open source, and customers will keep the usual support without change.
  - Mark Little asserts that projects like Quarkus will continue to be supported and that this evolution benefits the Java community.
- One year of Commonhaus: https://www.commonhaus.org/activity/253.html
  - One year in, having started with the communities they knew well; now 14 projects, with room to accept more.
  - Trust, lightweight governance, and protecting the future of projects.
  - Automated administration, stability without complexity, developers at the center of the decision process.
  - They need members and (financial) supporters.
  - They want to welcome projects beyond the circle of the Java Champions.
- Spring Cloud Data Flow becomes a commercial product and will no longer be maintained as open source: https://spring.io/blog/2025/04/21/spring-cloud-data-flow-commercial
  - Perhaps under Broadcom's influence, Spring is starting to move components of the Spring portfolio to a proprietary model.
  - They say few people used it as OSS, and most usage came from the Tanzu platform.
  - Maintaining it as open source cost them time they would rather spend elsewhere.
- The CNCF protects the NATS project, in the foundation since 2018, after Synadia, the company contributing to it, sought to take back control: https://www.cncf.io/blog/2025/04/24/protecting-nats-and-the-integrity-of-open-source-cncfs-commitment-to-the-community/
  - CNCF: protects open source projects, neutral governance.
  - Synadia vs CNCF: wanted to pull NATS out and relicense it under a non-OSS license (BUSL).
  - CNCF: accuses Synadia of a "claw back" (an illegitimate repossession).
  - Synadia's claims: the nats.io domain and the GitHub organization.
  - NATS trademark: Synadia never transferred it (a broken promise, despite CNCF's help).
  - Synadia's counter: it judges the CNCF rules "too vague".
  - Internal vote: Synadia's maintainers voted to leave the CNCF (without the community).
  - CNCF support: major investment ($ audits, legal), community success (>700 organizations).
  - NATS's future (per CNCF): staying under Apache 2.0 with open governance.
  - CNCF actions: health check, call for maintainers, cancellation of Synadia's trademark application, rejection of the demands.
  - But in the end there seems to be a good outcome: https://www.cncf.io/announcements/2025/05/01/cncf-and-synadia-align-on-securing-the-future-of-the-nats-io-project/
  - Agreement on the future of NATS.io: the Cloud Native Computing Foundation (CNCF) and Synadia reached an agreement securing the future of the NATS.io project.
  - Trademark transfer: Synadia will hand over its two NATS trademark registrations to the Linux Foundation, strengthening the project's open governance.
  - Staying in the CNCF: the NATS project's infrastructure and assets remain under the CNCF, guaranteeing long-term stability and open source development under the Apache-2.0 license.
  - Recognition and commitment: the Linux Foundation, through Todd Moore, acknowledges Synadia's contributions and continued support. Derek Collison, Synadia's CEO, reaffirms his company's commitment to NATS and to collaborating with the Linux Foundation and the CNCF.
  - Adoption and community support: NATS is widely adopted and considered critical infrastructure, with strong community support for its open source nature and Synadia's ongoing involvement.
- Finally, Redis returns to an OSI open source license, the AGPL: https://foojay.io/today/redis-is-now-available-under-the-agplv3-open-source-license/
  - Redis moves to the AGPLv3 open source license to counter exploitation by cloud providers that do not contribute back.
  - The earlier switch to the SSPL had damaged the relationship with the open source community.
  - Salvatore Sanfilippo (antirez) is back at Redis.
  - Redis 8 adopts the AGPL, integrates the Redis Stack features (JSON, Time Series, etc.) and introduces "vector sets" (the vector computation support developed by Salvatore).
  - These changes aim to strengthen Redis as a platform developers love, in line with Salvatore's original vision.

Conferences

The list of conferences comes from Developers Conferences Agenda/List by Aurélie Vache and contributors:

- May 6-7, 2025: GOSIM AI Paris - Paris (France)
- May 7-9, 2025: Devoxx UK - London (UK)
- May 15, 2025: Cloud Toulouse - Toulouse (France)
- May 16, 2025: AFUP Day 2025 Lille - Lille (France)
- May 16, 2025: AFUP Day 2025 Lyon - Lyon (France)
- May 16, 2025: AFUP Day 2025 Poitiers - Poitiers (France)
- May 22-23, 2025: Flupa UX Days 2025 - Paris (France)
- May 24, 2025: Polycloud - Montpellier (France)
- May 24, 2025: NG Baguette Conf 2025 - Nantes (France)
- June 3, 2025: TechReady - Nantes (France)
- June 5-6, 2025: AlpesCraft - Grenoble (France)
- June 5-6, 2025: Devquest 2025 - Niort (France)
- June 10-11, 2025: Modern Workplace Conference Paris 2025 - Paris (France)
- June 11-13, 2025: Devoxx Poland - Krakow (Poland)
- June 12, 2025: Positive Design Days - Strasbourg (France)
- June 12-13, 2025: Agile Tour Toulouse - Toulouse (France)
- June 12-13, 2025: DevLille - Lille (France)
- June 13, 2025: Tech F'Est 2025 - Nancy (France)
- June 17, 2025: Mobilis In Mobile - Nantes (France)
- June 19-21, 2025: Drupal Barcamp Perpignan 2025 - Perpignan (France)
- June 24, 2025: WAX 2025 - Aix-en-Provence (France)
- June 25-26, 2025: Agi'Lille 2025 - Lille (France)
- June 25-27, 2025: BreizhCamp 2025 - Rennes (France)
- June 26-27, 2025: Sunny Tech - Montpellier (France)
- July 1-4, 2025: Open edX Conference 2025 - Palaiseau (France)
- July 7-9, 2025: Riviera DEV 2025 - Sophia Antipolis (France)
- September 5, 2025: JUG Summer Camp 2025 - La Rochelle (France)
- September 12, 2025: Agile Pays Basque 2025 - Bidart (France)
- September 18-19, 2025: API Platform Conference - Lille (France) & Online
- September 23, 2025: OWASP AppSec France 2025 - Paris (France)
- September 25-26, 2025: Paris Web 2025 - Paris (France)
- October 2-3, 2025: Volcamp - Clermont-Ferrand (France)
- October 3, 2025: DevFest Perros-Guirec 2025 - Perros-Guirec (France)
- October 6-10, 2025: Devoxx Belgium - Antwerp (Belgium)
- October 7, 2025: BSides Mulhouse - Mulhouse (France)
- October 9-10, 2025: Forum PHP 2025 - Marne-la-Vallée (France)
- October 9-10, 2025: EuroRust 2025 - Paris (France)
- October 16, 2025: PlatformCon25 Live Day Paris - Paris (France)
- October 16-17, 2025: DevFest Nantes - Nantes (France)
- October 30-31, 2025: Agile Tour Bordeaux 2025 - Bordeaux (France)
- October 30-31, 2025: Agile Tour Nantais 2025 - Nantes (France)
- October 30 - November 2, 2025: PyConFR 2025 - Lyon (France)
- November 4-7, 2025: NewCrafts 2025 - Paris (France)
- November 6, 2025: dotAI 2025 - Paris (France)
- November 7, 2025: BDX I/O - Bordeaux (France)
- November 12-14, 2025: Devoxx Morocco - Marrakech (Morocco)
- November 13, 2025: DevFest Toulouse - Toulouse (France)
- November 15-16, 2025: Capitole du Libre - Toulouse (France)
- November 20, 2025: OVHcloud Summit - Paris (France)
- November 21, 2025: DevFest Paris 2025 - Paris (France)
- November 27, 2025: Devfest Strasbourg 2025 - Strasbourg (France)
- November 28, 2025: DevFest Lyon - Lyon (France)
- December 5, 2025: DevFest Dijon 2025 - Dijon (France)
- December 10-11, 2025: Devops REX - Paris (France)
- December 10-11, 2025: Open Source Experience - Paris (France)
- January 28-31, 2026: SnowCamp 2026 - Grenoble (France)
- February 2-6, 2026: Web Days Convention - Aix-en-Provence (France)
- April 23-25, 2026: Devoxx Greece - Athens (Greece)
- June 17, 2026: Devoxx Poland - Krakow (Poland)

Contact us

To react to this episode, come discuss on the Google group: https://groups.google.com/group/lescastcodeurs
Contact us via X/Twitter https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com
Record a crowdcast or send a crowdquestion.
Support Les Cast Codeurs on Patreon: https://www.patreon.com/LesCastCodeurs
All the episodes and all the info on https://lescastcodeurs.com/
Juraci Paixão Kröhling, OpenTelemetry governing board member, joins Dash0's Mirko Novakovic for a deep dive into the evolution of observability. From the JBoss and Jaeger early days to his new startup OllyGarden, Juraci shares how OpenTelemetry is reshaping the way we monitor distributed systems. They explore the challenges of tracing adoption, the semantics around spans and transactions, and why logs are still dominant (but perhaps not for long). Juraci also details his journey from engineer to entrepreneur and the future of observability in an AI-first world.
An airhacks.fm conversation with Burr Sutter (@burrsutter) about: first computer: IBM PS/2 386SX funded by grandparents' Kona coffee sales, early passion for programming and problem-solving, self-taught C programming, database engine development as a student, transition from theater aspirations to computer science, work with Progress 4GL and SilverStream, shift to .NET development, joining JBoss and Red Hat through acquisition, Marc Fleury's impactful "free don't suck" presentation, evolution of Java application servers and middleware technologies, enterprise service bus and SOA, impact of Docker and Kubernetes on the industry, Red Hat's adaptation to cloud-native technologies, development of Quarkus, current interest in language models and GenAI, Java's longevity and adaptability, Quarkus' fast startup time and compatibility with legacy Java EE applications, work on Kubernetes and Quarkus, the importance of Java's "write once, run anywhere" principle, Java's performance compared to other languages
Burr Sutter on twitter: @burrsutter
He wanted to be an airplane pilot …
Mauricio Salatino is a software engineer at Diagrid working on the Dapr project, but he also serves as a chair for the newly formed App Development Working Group under the TAG App Delivery for the CNCF. He also serves as a member of the steering committees for the Knative and Keptn projects. Mauricio authored a book about Platform Engineering on Kubernetes for Manning and co-authored some books on JBoss. He used to work for Red Hat and VMware.

Do you have something cool to share? Some questions? Let us know:
- web: kubernetespodcast.com
- mail: kubernetespodcast@google.com
- twitter: @kubernetespod

News of the week
- ArgoCD announced that ArgoRollouts now supports version 1.0 of the Kubernetes Gateway API
- Gateway API Supported providers
- Google has released Gemma 2

Links from the interview
- Dapr (Distributed Application Runtime)
- JBoss
- Overview of JNDI (Java Naming and Directory Interface)
- Secrets Management Overview on Dapr
- Knative
- Java
- Spring Boot
- App Development Working Group (Cloud Native Computing Foundation)
- Spring AI
- Langchain
- Dapr and service meshes
- Istio
- Vcluster
- Testcontainers
Summary
A data lakehouse is intended to combine the benefits of data lakes (cost effective, scalable storage and compute) and data warehouses (user friendly SQL interface). Multiple open source projects and vendors have been working together to make this vision a reality. In this episode Dain Sundstrom, CTO of Starburst, explains how the combination of the Trino query engine and the Iceberg table format offer the ease of use and execution speed of data warehouses with the infinite storage and scalability of data lakes.

Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to dataengineeringpodcast.com/dagster (https://www.dataengineeringpodcast.com/dagster) today to get started. Your first 30 days are free!
- Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst (https://www.dataengineeringpodcast.com/starburst) and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.
- Join in with the event for the global data community, Data Council Austin. From March 26th-28th 2024, they'll play host to hundreds of attendees, 100 top speakers, and dozens of startups that are advancing data science, engineering and AI. Data Council attendees are amazing founders, data scientists, lead engineers, CTOs, heads of data, investors and community organizers who are all working together to build the future of data. As a listener to the Data Engineering Podcast you can get a special discount of 20% off your ticket by using the promo code dataengpod20. Don't miss out on their only event this year! Visit: dataengineeringpodcast.com/data-council (https://www.dataengineeringpodcast.com/data-council) today.
- Your host is Tobias Macey and today I'm interviewing Dain Sundstrom about building a data lakehouse with Trino and Iceberg

Interview
- Introduction
- How did you get involved in the area of data management?
- To start, can you share your definition of what constitutes a "Data Lakehouse"?
- What are the technical/architectural/UX challenges that have hindered the progression of lakehouses?
- What are the notable advancements in recent months/years that make them a more viable platform choice?
- There are multiple tools and vendors that have adopted the "data lakehouse" terminology.
- What are the benefits offered by the combination of Trino and Iceberg?
- What are the key points of comparison for that combination in relation to other possible selections?
- What are the pain points that are still prevalent in lakehouse architectures as compared to warehouse or vertically integrated systems?
- What progress is being made (within or across the ecosystem) to address those sharp edges?
- For someone who is interested in building a data lakehouse with Trino and Iceberg, how does that influence their selection of other platform elements?
- What are the differences in terms of pipeline design/access and usage patterns when using a Trino/Iceberg lakehouse as compared to other popular warehouse/lakehouse structures?
- What are the most interesting, innovative, or unexpected ways that you have seen Trino lakehouses used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on the data lakehouse ecosystem?
- When is a lakehouse the wrong choice?
- What do you have planned for the future of Trino/Starburst?

Contact Info
- LinkedIn (https://www.linkedin.com/in/dainsundstrom/)
- dain (https://github.com/dain) on GitHub

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning.
- Visit the site (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com (mailto:hosts@dataengineeringpodcast.com) with your story.
Links
- Trino (https://trino.io/)
- Starburst (https://www.starburst.io/)
- Presto (https://prestodb.io/)
- JBoss (https://en.wikipedia.org/wiki/JBoss_Enterprise_Application_Platform)
- Java EE (https://www.oracle.com/java/technologies/java-ee-glance.html)
- HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html)
- S3 (https://aws.amazon.com/s3/)
- GCS == Google Cloud Storage (https://cloud.google.com/storage?hl=en)
- Hive (https://hive.apache.org/)
- Hive ACID (https://cwiki.apache.org/confluence/display/hive/hive+transactions)
- Apache Ranger (https://ranger.apache.org/)
- OPA == Open Policy Agent (https://www.openpolicyagent.org/)
- Oso (https://www.osohq.com/)
- AWS Lakeformation (https://aws.amazon.com/lake-formation/)
- Tabular (https://tabular.io/)
- Iceberg (https://iceberg.apache.org/)
  - Podcast Episode (https://www.dataengineeringpodcast.com/iceberg-with-ryan-blue-episode-52/)
- Delta Lake (https://delta.io/)
  - Podcast Episode (https://www.dataengineeringpodcast.com/delta-lake-data-lake-episode-85/)
- Debezium (https://debezium.io/)
  - Podcast Episode (https://www.dataengineeringpodcast.com/debezium-change-data-capture-episode-114)
- Materialized View (https://en.wikipedia.org/wiki/Materialized_view)
- Clickhouse (https://clickhouse.com/)
- Druid (https://druid.apache.org/)
- Hudi (https://hudi.apache.org/)
  - Podcast Episode (https://www.dataengineeringpodcast.com/hudi-streaming-data-lake-episode-209)

The intro and outro music is from The Hug (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / CC BY-SA (http://creativecommons.org/licenses/by-sa/3.0/)
Talkline with Zev Brenner featuring Shmiel Stern, CEO of the JBOSS Summit, on the business of Jewish business: how to help companies make connections with business leaders to help them grow.
Cavaleri is a member of the Board of Directors of CleanSpark, a publicly traded Bitcoin mining company, and CEO of DRE. She is also the Managing Director of APRÉS TECH, a Bitcoin advisory firm and host of the Bitcoin Ski Summit, and the Board Chair of the Bitcoin Today Coalition, a 501(c)(4) nonprofit focused on Bitcoin and Bitcoin mining education in the Capital. Cavaleri's entrepreneurial work resides at the intersection of emerging technology and wisdom. She has worked alongside world-renowned technology pioneers Dr. David Chaum (founder of DigiCash and cryptographer) and Dr. Marc Fleury (who professionalized open source via JBoss). Cavaleri received her Master of Science in Technology Commercialization from the University of Texas at Austin's McCombs School of Business. Follow Amanda on Twitter https://twitter.com/amanda_cavaleri

Partners:
Coin Stories is powered by Swan Bitcoin, the best way to build your Bitcoin stack with automated Bitcoin savings plans and instant purchases. Swan serves clients of any size, from $10 to $10M+. Visit https://www.swanbitcoin.com/nataliebrunell for $10 in Bitcoin when you sign up. If you are planning to buy more than $100,000 of Bitcoin over the next year, the Swan Private team can help.
BITCOIN 2023 by Bitcoin Magazine will be the biggest Bitcoin event in history, May 18-20 in Miami Beach. Speakers include Michael Saylor, Lyn Alden and Michelle Phan, plus a Day 3 music festival. Nearly 30,000 people attended Bitcoin 2022. Get an early bird pass at a steep discount at https://b.tc/conference with code HODL for 10% off your pass.
Fold is the best Bitcoin rewards debit card and shopping app in the world! Earn Bitcoin on everything you purchase with Fold's Bitcoin cash back debit card, and spin the Daily Wheel to earn free Bitcoin. Head to https://www.foldapp.com/natalie for 5,000 free sats!
Health insurance needs an overhaul. The government and insurance companies have jacked the price, increased complexity, and made insurance almost unusable. You send your money to the health insurance black hole and never see it again. Then, when you get hurt, you have to send them more money. The great news is now you have an alternative: CrowdHealth. It's totally different from insurance. Instead of sending your hard earned money to an insurance company, you hold your money in an account CrowdHealth helps you set up when you join. You can even convert dollars in that account into Bitcoin. When someone in the community has a health need, you help them out directly, and if there is Bitcoin or $ left over in your account when you leave, you take it with you. https://www.joincrowdhealth.com/natalie
With iTrustCapital you can invest in crypto without worrying about taxes or fees, through an individual retirement account. IRAs are tax-sheltered accounts, which means all your crypto trading is tax-free and can even grow tax-free over time. The best part is it's totally free to open an account, and there are no hidden fees, monthly subscriptions or membership fees. Your account is FDIC insured up to $250,000. Get a $100 funding bonus if you open and fund an account. Go to https://itrust.capital/nataliebrunell to learn more and open a free account.
OTHER RESOURCES
Natalie's website https://talkingbitcoin.com/

VALUE FOR VALUE — SUPPORT NATALIE'S SHOWS
Strike ID https://strike.me/coinstoriesnat/
Cash App $CoinStories
BTC wallet bc1ql8dqjp46s4eq9k3lxt0lxzh6f2wcu35cl6944d

FOLLOW NATALIE ON SOCIAL MEDIA
Twitter https://twitter.com/natbrunell
Instagram https://www.instagram.com/nataliebrunell
Linkedin https://www.linkedin.com/in/nataliebrunell

Producer: Aron Bender https://www.linkedin.com/in/aron-bender/

DISCLAIMER
This show is for entertainment purposes only and does not give financial advice. Before making any decisions consult a professional.
#bitcoin #cryptocurrency #money
An airhacks.fm conversation with Sascha Moellering (@sascha242) about: Schneider CPC, starting programming with C-16, enjoying Finger's Malone, upgrade to C-128, playing Turrican, Manfred Trenz created Turrican and R-Type, publishing a Pommes Game, programming on Amiga 1200, math in game development, implementing a painting application, walking through C pointer and reference hell, from C to Java 1.0 on a Mac 6500 with 200MHz, using Metrowerks JVM, using CodeWarrior, CodeWarrior vs. StormC, Java is a clean language, working on SpiritLink, using Caucho Resin, starting at Accenture, from Accenture to Softlab, building a PaaS solution with JBoss for Allianz, managing hundreds of JVMs with a pizza team, implementing a low latency marketing solution with Vert.x, starting at Zanox, an episode with Arjan Tijms "#184 Piranha: Headless Applets Loaded with Maven", starting at AWS as Account Solution Architect, using quarkus on lambda as a microservice, using POJO asynchronous lambdas, EJB programming restrictions and Lambdas, airhacks discord server, Optimize your Spring Boot application for AWS Fargate, Reactive Microservices Architecture on AWS, Field Notes: Optimize your Java application for Amazon ECS with Quarkus, Field Notes: Optimize your Java application for AWS Lambda with Quarkus, How to deploy your Quarkus application to Amazon EKS, Using GraalVM to Build Minimal Docker Images for Java Applications
Sascha Moellering on twitter: @sascha242
AB Periasamy, Co-Founder and CEO of MinIO, joins Corey on Screaming in the Cloud to discuss what it means to be truly open source and the current and future state of multi-cloud. AB explains how MinIO was born from the idea that the world was going to produce a massive amount of data, and what it's been like to see that come true and continue to be the future outlook. AB and Corey explore why some companies are hesitant to move to cloud, and AB describes why he feels the move is inevitable regardless of cost. AB also reveals how he has helped create truly free open-source software, and how his partnership with Amazon has been beneficial.

About AB
AB Periasamy is the co-founder and CEO of MinIO, an open source provider of high performance, object storage software. In addition to this role, AB is an active investor and advisor to a wide range of technology companies, from H2O.ai and Manetu, where he serves on the board, to advisor or investor roles with Humio, Isovalent, Starburst, Yugabyte, Tetrate, Postman, Storj, Procurify, and Helpshift. Successful exits include Gitter.im (GitLab), Treasure Data (ARM) and Fastor (SMART).
AB co-founded Gluster in 2005 to commoditize scalable storage systems. As CTO, he was the primary architect and strategist for the development of the Gluster file system, a pioneer in software defined storage. After the company was acquired by Red Hat in 2011, AB joined Red Hat's Office of the CTO. Prior to Gluster, AB was CTO of California Digital Corporation, where his work led to scaling of commodity cluster computing to supercomputing class performance. His work there resulted in the development of Lawrence Livermore Laboratory's "Thunder" code, which, at the time, was the second fastest in the world. AB holds a Computer Science Engineering degree from Annamalai University, Tamil Nadu, India.
AB is one of the leading proponents and thinkers on the subject of open source software - articulating the difference between the philosophy and business model. An active contributor to a number of open source projects, he is a board member of India's Free Software Foundation.

Links Referenced:
- MinIO: https://min.io/
- Twitter: https://twitter.com/abperiasamy
- LinkedIn: https://www.linkedin.com/in/abperiasamy/
- Email: mailto:ab@min.io

Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: This episode is sponsored in part by our friends at Chronosphere. When it costs more money and time to observe your environment than it does to build it, there's a problem. With Chronosphere, you can shape and transform observability data based on need, context and utility. Learn how to only store the useful data you need to see in order to reduce costs and improve performance at chronosphere.io/corey-quinn. That's chronosphere.io/corey-quinn. And my thanks to them for sponsoring my ridiculous nonsense.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn, and I have taken a somewhat strong stance over the years on the relative merits of multi-cloud, and when it makes sense and when it doesn't. And it's time for me to start modifying some of those.
To have that conversation and several others as well, with me today on this promoted guest episode is AB Periasamy, CEO and co-founder of MinIO. AB, it's great to have you back.

AB: Yes, it's wonderful to be here again, Corey.

Corey: So, one thing that I want to start with is defining terms. Because when we talk about multi-cloud, there are—to my mind at least—smart ways to do it and ways that are frankly ignorant. The thing that I've never quite seen is, it's greenfield, day one. Time to build something. Let's make sure we can build and deploy it to every cloud provider we might ever want to use.

And that is usually not the right path. Whereas different workloads in different providers, that starts to make a lot more sense. When you do mergers and acquisitions, as big companies tend to do in lieu of doing anything interesting, it seems like they find it oh, we're suddenly in multiple cloud providers, should we move this acquisition to a new cloud? No. No, you should not.

One of the challenges, of course, is that there's a lot of differentiation between the baseline offerings that cloud providers have. MinIO is interesting in that it starts and stops with an object store that is mostly S3 API compatible. Have I nailed the basic premise of what it is you folks do?

AB: Yeah, it's basically an object store. Amazon S3 versus us, it's actually—that's the comparable, right? Amazon S3 is a hosted cloud storage as a service, but underneath the underlying technology is called object-store. MinIO is a software and it's also open-source and it's the software that you can deploy on the cloud, deploy on the edge, deploy anywhere, and both Amazon S3 and MinIO are exactly S3 API compatible. It's a drop-in replacement. You can write applications on MinIO and take it to AWS S3, and do the reverse. Amazon made S3 API a standard inside AWS, we made S3 API standard across the whole cloud, all the cloud edge, everywhere, rest of the world.

Corey: I want to clarify two points because otherwise I know I'm going to get nibbled to death by ducks on the internet. When you say open-source, it is actually open-source; you're AGPL, not source available, or, "We've decided now we're going to change our model for licensing because oh, some people are using this without paying us money," as so many companies seem to fall into that trap. You are actually open-source and no one reasonable is going to be able to disagree with that definition.

The other pedantic part of it is when something says that it's S3 compatible on an API basis, like, the question is always does that include the weird bugs that we wish it wouldn't have, or some of the more esoteric stuff that seems to be a constant source of innovation? To be clear, I don't think that you need to be particularly compatible with those very corner and vertex cases. For me, it's always been the basic CRUD operations: can you store an object? Can you give it back to me? Can you delete the thing? And maybe an update, although generally object stores tend to be atomic. How far do you go down that path of being, I guess, a faithful implementation of what the S3 API does, and at which point you decide that something is just, honestly, lunacy and you feel no need to wind up supporting that?

AB: Yeah, the unfortunate part of it is we have to be very, very deep. It only takes one API to break. And it's not even, like, one API we did not implement; one API under a particular circumstance, right?
Like even if you see, like, AWS SDK is, right, Java SDK, different versions of Java SDK will interpret the same API differently. And AWS S3 is an API, it's not a standard.

And Amazon has published the REST specifications, API specs, but they are more like religious text. You can interpret it in many ways. Amazon's own SDK has interpreted, like, this in several ways, right? The only way to get it right is, like, you have to have a massive ecosystem around your application. And if one thing breaks—today, if I commit a code and it introduced a regression, I will immediately hear from a whole bunch of community what I broke.

There's no certification process here. There is no industry consortium to control the standard, but then there is an accepted standard. Like, if the application works, they need works. And one way to get it right is, like, Amazon SDKs, all of those language SDKs, to be cleaner, simpler, but applications can even use MinIO SDK to talk to Amazon and Amazon SDK to talk to MinIO. Now, there is a clear, cooperative model.

And I actually have tremendous respect for Amazon engineers. They have only been kind and meaningful, like, reasonable partnership. Like, if our community reports a bug that Amazon rolled out a new update in one of the region and the S3 API broke, they will actually go fix it. They will never argue, "Why are you using MinIO SDK?" Their engineers, they do everything by reason. That's the reason why they gained credibility.

Corey: I think, on some level, that we can trust that the API is not going to meaningfully shift, just because so much has been built on top of it over the last 15, almost 16 years now that even slight changes require massive coordination. I remember there was a little bit of a kerfuffle when they announced that they were going to be disabling the BitTorrent endpoint in S3 and it was no longer going to be supported in new regions, and eventually they were turning it off. There were still people pushing back on that. I'm still annoyed by some of the documentation around the API that says that it may not return a legitimate error code when it errors with certain XML interpretations. It's… it's kind of become very much its own thing.

AB: [unintelligible 00:06:22] a problem, like, we have seen, like, even stupid errors similar to that, right? Like, HTTP headers are supposed to be case insensitive, but then there are some language SDKs will send us in certain type of casing and they expect the case to be—the response to be same way. And that's not HTTP standard. If we have to accept that bug and respond in the same way, then we are asking a whole bunch of community to go fix that application. And Amazon's problem are our problems too. We have to carry that baggage.

But some places where we actually take a hard stance is, like, Amazon introduced that initially, the bucket policies, like access control list, then finally came IAM, then we actually, for us, like, the best way to teach the community is make best practices the standard. The only way to do it. We have been, like, educating them that we actually implemented ACLs, but we removed it. So, the customers will no longer use it. The scale at which we are growing, if I keep it, then I can never force them to remove.

So, we have been pedantic about, like, how, like, certain things that if it's a good advice, force them to do it. That approach has paid off, but the problem is still quite real. Amazon also admits that S3 API is no longer simple, but at least it's not like POSIX, right?
POSIX is a rich set of APIs, but it doesn't do the useful things that we need to do. So, Amazon's APIs are built on top of simple, primitive foundations that got the storage architecture correct, and then, doing sophisticated functionality on top of the simple primitives—these atomic RESTful APIs—you can finally do it right and you can take it to great lengths and still not break the storage system. So, I'm not so concerned. I think it's time for both of us to slow down and then make sure that the ease of operation and adoption is the goal, rather than trying to create an API Bible.

Corey: Well, one differentiation that you have that frankly I wish S3 would wind up implementing is this idea of bucket quotas. I would give a lot in certain circumstances to be able to say that this S3 bucket should be able to hold five gigabytes of storage and no more. Like, you could fix a lot of free-tier problems, for example, by doing something like that. But there's also the problem that you'll see in data centers where, okay, we've now filled up whatever storage system we're using. We need to either expand it at significant cost, and it's going to take a while, or it's time to go and maybe delete some of the stuff we don't necessarily need to keep in perpetuity. There is no moment of reckoning in traditional S3 in that sense because, oh, you can just always add one more gigabyte at 2.3 or however many cents it happens to be, and you wind up with an unbounded growth problem that you're never really forced to wrestle with. Because it's infinite storage. They can add drives faster than you can fill them in most cases. So, it just feels like there's an economic story, if nothing else, just from a governance, control, and make-sure-this-doesn't-run-away-from-me perspective, and alert me before we get into the multi-petabyte style of storage for my Hello World WordPress website.

AB: Mm-hm. Yeah, so I always thought that Amazon did not do this—it's not just Amazon, the cloud players, right—they did not do this because they want—it is good for their business; they want all the customers' data, like, unrestricted growth of data. Certainly it is beneficial for their business, but there is an operational challenge. When you set a quota—this is why we grudgingly introduced this feature. We did not have quotas and we didn't want to, because the Amazon S3 API doesn't talk about quotas, but the enterprise community wanted this so badly. And eventually we [unintelligible 00:09:54] it and we gave in. But there is one issue to be aware of, right? The problem with quotas is that you, as an object storage administrator, you set a quota—let's say this bucket, this application, I don't see more than 20TB; I'm going to set a 100TB quota. And then you forget it. And then you think in six months, they will reach 20TB. The reality is, in six months they reach 100TB. And then, when nobody expected it—everybody has forgotten that there was a quota in a certain place—suddenly applications start failing. And when they fail, even though the S3 API responds back saying there is insufficient space, the application doesn't really pass that error all the way up. When applications fail, they fail in unpredictable ways. By the time the application developer realizes that it's actually the object storage that ran out of space, there's the lost time and it's downtime. So, as long as they have proper observability—because, I mean, I would also ask for observability that can alert you that you are going to run out of space soon. If you have those systems in place, then go for quota.
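(A quick aside: the "alert before you run out of space" check AB just described needs nothing exotic; it can be built on the plain S3 listing API. A minimal sketch, assuming Python with boto3, with a made-up bucket name and quota; the 70%/90% thresholds echo the ones Corey brings up next. Very large buckets would want inventory reports instead of a full listing.)

    import boto3

    QUOTA_BYTES = 100 * 1024**4  # the illustrative 100TB quota being watched

    def bucket_size_bytes(s3, bucket):
        # Sum object sizes page by page; fine for a periodic soft-quota job.
        total = 0
        for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket):
            total += sum(obj["Size"] for obj in page.get("Contents", []))
        return total

    s3 = boto3.client("s3")  # works the same against MinIO via endpoint_url
    ratio = bucket_size_bytes(s3, "demo-bucket") / QUOTA_BYTES
    if ratio >= 0.90:
        print(f"page someone: bucket at {ratio:.0%} of quota")
    elif ratio >= 0.70:
        print(f"warning: bucket at {ratio:.0%} of quota")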
If not, I would agree with the S3 API standard that it's not about cost. It's about operational, unexpected accidents.

Corey: Yeah, on some level, we wound up having to deal with the exact same problem with disk volumes, where my default for most things was: at 70%, I want to start getting pings on it, and at 90%, I want to be woken up for it. So, for small volumes, where you wind up with a runaway log or whatnot, you have a chance to catch it, and for the giant multi-petabyte things, okay, well, why would you alert at 70% on that? Well, because procurement takes a while when we're talking about buying that much disk for that much money. It was a roughly good baseline for these things. The problem, of course, is when you have none of that, and well, it got full, so oops-a-doozy. On some level, I wonder if there's a story around soft quotas that just scream at you, but let you keep adding to it. But that turns into implementation details, and you can build something like that on top of any existing object store if you don't need the hard-limit aspect.

AB: Actually, that is the right way to do it. That's what I would recommend customers do. Even though there is a hard quota, I will tell them: don't use it, but use a soft quota. And the soft quota—instead of even a soft quota, you monitor them. On the cloud, at least you have some kind of restriction that the more you use, the more you pay; eventually, at month's end, the bill shows up. On MinIO, when it's deployed in these large data centers, where it's unrestricted access, quickly you can use a lot of space, no one knows what data to delete, and no one will tell you what data to delete. The way to do this is there has to be some kind of accountability. The way to do it is—actually [unintelligible 00:12:27]—have some chargeback mechanism based on the bucket growth. And the business units have to pay for it, right? IT doesn't run for free, right? IT has to have a budget and it has to be sponsored by the applications team. And you measure; instead of setting a hard limit, you actually charge them: based on the usage of your bucket, you're going to pay for it. And this is an observability problem. And you can call it soft quotas, but it has to trigger an alert in observability. It's an observability problem. But it actually is interesting to hear it framed as soft quotas, which makes a lot of sense.

Corey: It's one of those problems that I think people only figure out after they've experienced it once. And then they look like wizards from the future who say, “Oh, yeah, you're going to run into a quota storage problem.” Yeah, we all find that out because the first time we smack into something, we live to regret it. Now, we can talk a lot about the nuances and implementation and low-level detail of this stuff, but let's zoom out of it. What are you folks up to these days? What is the bigger picture that you're seeing of object storage and the ecosystem?

AB: Yeah. So, when we started, right, our idea was that the world is going to produce an incredible amount of data. Ten years from now, we are going to drown in data. We've been saying that, and today it is true. Every year, you say “ten years from now” and it will still be valid, right? That was the reason for us to play this game. And we saw that every one of these cloud players was incompatible with the others. It's like the early Unix days, right? Like, a bunch of operating systems, everything was incompatible, and applications were beginning to adopt this new standard, but they were stuck.
And then the cloud storage players, whatever they had—like, GCS can only run inside Google Cloud, S3 can only run inside AWS—and the cloud players' game was: bring all the world's data into the cloud. And that actually requires an enormous amount of bandwidth. And moving data into the cloud at that scale—if you look at the amount of data the world is producing—if the data is produced inside the cloud, it's a different game, but the data is produced everywhere else. MinIO's idea was that, instead of introducing yet another API standard—Amazon got the architecture right, and that's the right way to build large-scale infrastructure—we stick to the Amazon S3 API instead of introducing another standard, [unintelligible 00:14:40] API, and then go after the world's data. When we started in November 2014—it's really 2015 when we started—it was laughable. People thought that there wouldn't be a need for MinIO because the whole world would basically go to AWS S3 and it would be the world's data store. Amazon is capable of doing that; the race is not over, right?

Corey: And it still couldn't be done now. The thing is that they would need to fundamentally rethink their, frankly, serious data egress charges. The problem is not that it's expensive to store data in AWS; it's that it's expensive to store data and then move it anywhere else for analysis or use on something else. So, there are entire classes of workloads where people should not consider the big three cloud providers as the place where that data should live, because you're never getting it back.

AB: Spot on, right? Even if the network were free—right, say Amazon made it zero egress-ingress charge—the data we're talking about, like, most MinIO deployments, they start at petabytes. Like, one to ten petabytes; 100 terabytes feels small. Even if the network were free, try moving a ten-petabyte infrastructure into the cloud. How are you going to move it? Even with FedEx and UPS giving you a lot of bandwidth in their trucks, it is not possible, right? I think the data will continue to be produced everywhere else. So, our bet was that we will be [unintelligible 00:15:56]—instead of you moving the data, you can run MinIO where there is data, and then the whole world will look like an AWS S3-compatible object store. We took a very different path. But now, when I tell the same story that we started with on day one, it is no longer laughable, right? People believe it, yes; MinIO is there, because our market footprint is now larger than Amazon S3's. And as it goes to production, customers are now realizing it's basically growing inside shadow IT, and eventually businesses realize the bulk of their business-critical data is sitting on MinIO, and that's how it's surfacing up. So now, what we are seeing, this year particularly, is that all of these customers are hugely concerned about cost optimization. And as part of that journey, there are also multi-cloud and hybrid-cloud initiatives. They want to make sure that their applications can run on any cloud, or that the same software can run in their colos, like Equinix, or, like, a bunch of, like, Digital Realty, anywhere. And MinIO's software—this is what we set out to do. MinIO can run anywhere: inside the cloud, all the way to the edge, even on a Raspberry Pi.
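(As an aside, AB's "how are you going to move it?" is easy to sanity-check with arithmetic. A rough back-of-the-envelope, assuming a ten-petabyte dataset and a dedicated 10 Gbps link running flat out; both numbers are picked purely for illustration.)

    # 10 PB over a saturated 10 Gbps link, ignoring protocol overhead,
    # retries, and the fact that real links are shared.
    data_bits = 10 * 10**15 * 8     # 10 PB (decimal) expressed in bits
    link_bps = 10 * 10**9           # a dedicated 10 Gbps link
    days = data_bits / link_bps / 86_400
    print(f"{days:.0f} days")       # ~93 days of continuous transfer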
It's now—whatever we started with has now become reality; the timing is perfect for us.

Corey: One of the challenges I've always had with the idea of building an application with the idea to run it anywhere is you can make explicit technology choices around that, and, for example, object store is a great example because most places you go now will or can have an object store available for your use. But there seem to be implementation details that get lost. And for example, even load balancers wind up being implemented in different ways with different scaling times and whatnot in various environments. And past a certain point, it's okay, we're just going to have to run it ourselves on top of HAProxy or Nginx, or something like it, running in containers themselves; you're reinventing the wheel. Where is that boundary between “we're going to build this in a way that we can run anywhere” and the reality that I keep running into, which is: we tried to do that, but implicitly, without realizing it, we built in a lot of assumptions that everything would look just like the environment that we started off in?

AB: The good part is that if you look at the S3 API, every request has the site name, the endpoint, the bucket name, the path, and the object name. Every request is completely self-contained. It's literally an HTTP call away. And this means that whether your application is running on Android, iOS, inside a browser's JavaScript engine, anywhere across the world, it doesn't really care whether the bucket is served from EU or us-east or us-west. It doesn't matter at all, so the API actually allows you to build a globally unified data infrastructure: some buckets here, some buckets there. That's actually not the problem. The problem comes when you have multiple clouds. Different teams—like, part of it is M&A, and—like, even if you don't do M&A, different teams... no two data engineers would agree on the same software stack. Then they will all end up with different cloud players, and some are still running in old legacy environments. When you combine them, the problem is—like, let's take just the cloud, right? How do I even apply a policy, that access control policy? How do I establish unified identity? Because I want to know this application is the only one allowed to access this bucket. Can I have that same policy on Google Cloud or Azure, even though they are different teams? Like, if that employee, that project, or that admin—if he or she leaves the job, how do I make sure that it's all protected? You want unified identity, you want unified access control policies. Where is the encryption key store? And then the load balancer itself—the load balancer is not the problem. But unless you adopt the S3 API as your standard, the definition of what a bucket is differs from Microsoft to Google to Amazon.

Corey: Yeah, the idea of the PUTs and retrieving of actual data is one thing, but then you have: how do you manage the control-plane layer of the object store, and how do you rationalize that? What are the naming conventions? How do you address it? I even ran into something similar somewhat recently when I was doing an experiment with one of the Amazon Snowball Edge devices to move some data into S3 on a lark.
And the thing shows up and presents itself on the local network as an S3 endpoint, but none of their tooling can accept a different endpoint built into the configuration files; you have to explicitly use it as an environment variable or as a parameter on every invocation of something that talks to it, which is incredibly annoying. I would give a lot to just be able to say, oh, when you're talking in this profile, that's always going to be your S3 endpoint. Go. But no, of course not. Because that would make it easier to use something that wasn't them, so why would they ever be incentivized to bake that in?

AB: Yeah. Snowball is an important element to move data, right? That's the UPS and FedEx way of moving data, but what I find customers doing is they actually use the tools that we built for MinIO, because the Snowball appliance also looks like an S3 API-compatible object store. And in fact, like, I've been told that, like, when you want to ship multiple Snowball appliances, they actually put MinIO in front to make them look like one unit, because MinIO can erasure-code objects across multiple Snowball appliances. And the MC tool—unlike the AWS CLI, which is really meant for developers, like, low-level calls—MC gives you unique [scoring 00:21:08] tools, like ls, cp, rsync-like tools, and it's easy to move and copy and migrate data. Actually, that's how people deal with it.

Corey: Oh, God. I hadn't even considered the problem of having a fleet of Snowball Edges here that you're trying to do a mass data migration on, which is basically how you move petabyte-scale data: a whole bunch of parallelism. But having to figure that out on a case-by-case basis would be nightmarish. You're right, there is no good way to wind up doing that natively.

AB: Yeah. In fact, Western Digital and a few other players, too—Western Digital created a Snowball-like appliance and they put MinIO on it. And they are actually working with some system integrators to help customers move lots of data. But Snowball-like functionality is important, and more and more customers need it.

Corey: This episode is sponsored in part by Honeycomb. I'm not going to dance around the problem. Your. Engineers. Are. Burned. Out. They're tired from pagers waking them up at 2 am for something that could have waited until after their morning coffee. Ring Ring, Who's There? It's Nagios, the original call of duty! They're fed up with relying on two or three different “monitoring tools” that still require them to manually trudge through logs to decipher what might be wrong. Simply put, there's a better way. Observability tools like Honeycomb (and very little else, because they do admittedly set the bar) show you the patterns and outliers of how users experience your code in complex and unpredictable environments so you can spend less time firefighting and more time innovating. It's great for your business, great for your engineers, and, most importantly, great for your customers. Try FREE today at honeycomb.io/screaminginthecloud. That's honeycomb.io/screaminginthecloud.

Corey: Increasingly, it felt like, back in the on-prem days, that you'd have a file server somewhere that was either a SAN or it was going to be a NAS. The question was only whether it presented itself to various things as a volume or as a file share. And then in cloud, the default storage mechanism, unquestionably, was object store. And now we're starting to see it come back again.
So, it started to increasingly feel, in a lot of ways, like cloud is no longer so much a place that is somewhere else, but instead much more of an operating model for how you wind up addressing things. I'm wondering when the generation of prosumer networking equipment, for example, is going to say, “Oh, and send these logs over to what object store?” Because right now, it's still write a file and SFTP it somewhere else, at least for the good ones; some of the crap ones still want old unencrypted FTP, which is neither here nor there. But I feel like it's coming back around again. Like, when do even home users wind up, instead of “where do you save this file to,” having the cloud abstraction—which, hopefully, means you'll never have to deal with an S3-style endpoint, but it can underpin an awful lot of things. It feels like it's coming back, and that cloud is the de facto way of thinking about things. Is that what you're seeing? Does that align with your belief on this?

AB: I actually fundamentally believe that, in the long run, right, applications will go SaaS, right? Like, if you remember the days when you used to install QuickBooks and ACT and stuff like that in your data center, when you used to run your own Exchange servers—those days are gone. I think these applications will become SaaS. But then the infrastructure building blocks for that SaaS, whether they are cloud or their own colo, I think that in the long run, it will be multi-cloud and colo all combined, and all of them will look alike. But what I find from the customers' journey is that the old world and the new world are incompatible. When they shifted from bare metal to virtualization, they didn't have to rewrite their applications. But this time, you have—it is a tectonic shift. Every single application, you have to rewrite. If you retrofit your application into the cloud: bad idea, right? It's going to cost you more, and I would rather not do it. Even though cloud players are trying to make, like, the file and block, like, file system services [unintelligible 00:24:01] and stuff, they make them available at ten times the cost of object, but it's just to [integrate 00:24:07] some legacy applications; it's still a bad idea to just move legacy applications there. But what I'm finding is that, on cost, if you still run your infrastructure with an enterprise IT mindset, you're out of luck. It's going to be super expensive, and you're going to be left out of modern infrastructure, because at this scale, it has to be treated as code. You have to run infrastructure with software engineers. And this cultural shift has to happen. And that's why, with cloud, in the long run, everyone will look like AWS—we always said that, and it's now becoming true. Like, Kubernetes and MinIO are basically leveling the ground everywhere. They're giving you ECS- and S3-like infrastructure inside AWS or outside AWS, everywhere. But what I find to be the challenging part is the cultural mindset. If they still have the old cultural mindset and they want to adopt cloud, it's not going to work. You have to change the DNA, the culture, the mindset, everything. The best way to do it is go cloud-first. Adopt it, modernize your application, learn how to run and manage infrastructure, then ask the economics question, the unit economics. Then you will find the answers yourself.

Corey: On some level, that is the path forward. I feel like there's just a very long tail of systems that have been working and have been meeting the business objective.
And “well, we should go and refactor this because, I don't know, a couple of folks on a podcast said we should” isn't the most compelling business case for doing a lot of it. It feels like these things sort of sit there until there is more upside than just cost-cutting to changing the way these things are built and run. That's the reason that people have been talking about getting off the mainframe since the '90s in some companies, and the mainframe is very much still there. It is so ingrained in the way that they do business that they would have to rethink a lot of the architectural things that have sprung up around it. I'm not trying to shame anyone for the [laugh] state that their environment is in. I've never yet met a company that was super proud of its internal infrastructure. Everyone's always apologizing because it's a fire. But they think someone else has figured this out somewhere and it all runs perfectly. I don't think it exists.

AB: What I am finding is that if you are running it the enterprise IT style—you are the one telling the application developers, “Here you go, you have this many VMs, and then you have, like, a VMware license and, like, JBoss, like, WebLogic, and, like, a SQL Server license; now you go build your application”—you won't be able to do it. Because application developers talk about Kafka and Redis and, like, Kubernetes; they don't speak the same language. And that's when these developers go to the cloud and then finish their application and take it live—from zero lines of code—before these guys can procure and provision the infrastructure. The change that has to happen is: how can you give the developers what they want? Now that reverse journey is also starting. In the long run, everything will look alike, but what I'm finding is, those running enterprise IT infrastructure, traditional infrastructure, are ashamed of talking about it. But then you go to the cloud, and then at scale, some parts of it you want to move—now you really know why you want to move. For economic reasons: like, particularly the data-intensive workloads become very expensive. And for that part, they go to a colo, but leave the applications on the cloud. So, the multi-cloud model, I think, is inevitable. The expensive pieces—where you can—if you are looking at yourself as a hyperscaler and your data is growing, if your business focus is a data-centric business, parts of the data and data analytics, ML workloads will actually go out, if you're looking at unit economics. If all you are focused on is productivity, stick to the cloud and you're still better off.

Corey: I think that's a divide that gets lost sometimes. When people say, “Oh, we're going to move to the cloud to save money,” it's: “No, you're not.” At a five-year time horizon, I would be astonished if that juice were worth the squeeze in almost any scenario. The reason you go there is for a capability story, when it's right for you. That also means that steady-state workloads that are well understood can often be run more economically in a place that is not the cloud. Everyone thinks for some reason that I tend to be “it's cloud or it's trash.” No, I'm a big fan of doing things that are sensible, and cloud is not the right answer for every workload under the sun. Conversely, when someone says, “Oh, I'm building a new e-commerce store,” or whatnot, “and I've decided cloud is not for me,” it's: “Ehh, you sure about that?” That sounds like you are smack-dab in the middle of the cloud use case.
But all these things wind up acting as constraints and strategic objectives. And technology and single-vendor answers are rarely going to be a panacea the way that their sales teams say they will.

AB: Yeah. And I find, like, organizations that have SREs, DevOps, and software engineers running the infrastructure, they actually are ready to go multi-cloud or go to a colo, because they exactly know—they have the containers and Kubernetes and microservices expertise. If you are still on a traditional SAN, NAS, and VM architecture: go to the cloud, rewrite your application.

Corey: I think there's a misunderstanding in the ecosystem around what cloud repatriation actually looks like. Everyone claims it doesn't exist because there are basically no companies out there worth mentioning saying, “Yep, we've decided the cloud is terrible, we're taking everything out, and we are going to data centers. The end.” In practice, it's individual workloads that do not make sense in the cloud. Sometimes just the back-of-the-envelope analysis means it's not going to work out; other times it's during proofs of concept; and other times, as things have hit a certain point of scale, an individual workload being pulled back makes an awful lot of sense. But everything else is probably going to stay in the cloud, and these companies don't want to wind up antagonizing the cloud providers by talking about it in public. But that model is very real.

AB: Absolutely. Actually, what we are finding on the application side: like, parts of their overall ecosystem, right, within the company, run on the cloud, but on the data side—some of the examples, like, these are in the range of 100 to 500 petabytes. The 500-petabyte customer actually started at 500 petabytes, and their plan is to go to exascale. And they are actually doing repatriation because, for them, their product is consumer-facing and it's extremely price-sensitive, and when you're consumer-facing, every dollar you spend counts. And if you don't do it at scale, it matters a lot, right? It will kill the business. Particularly in the last two years, the cost part became an important element of their infrastructure; they knew exactly what they want. They are thinking of themselves as hyperscalers. They get commodity—the same hardware, right, just a server with a bunch of [unintelligible 00:30:35] and network—and put it in a colo, or even lease these boxes; they know what their demand is. Even at ten petabytes, the economics start to matter. If you're processing it, on the data side, we have several customers now moving to colo from cloud, and this is the range we are talking about. They don't talk about it publicly because sometimes, like, you don't want to be anti-cloud—but I think for them, they're also not anti-cloud. They don't want to leave the cloud. Completely leaving the cloud is a different story. That's not the case. Applications stay there. Data lakes, data infrastructure, the object store particularly—that goes to a colo. Now, your applications from all the clouds can access this centralized—centralized meaning that one object store you run in a colo, and the colos themselves have worldwide data centers. So, you can keep the data infrastructure in a colo, but applications can run on any cloud; some of them, surprisingly, have a global customer base. And not all of them are cloud. Sometimes, for some applications, if you ask what type of edge devices they are running, what edge data centers, they say it's a mix of everything.
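(As an aside, the unit economics AB keeps returning to are quick to estimate. A sketch, assuming an illustrative list price of roughly $0.023 per GB-month for standard object storage; real tiered and negotiated prices, and egress and request charges, differ.)

    # Illustrative only: why ten petabytes pushes people to do the math.
    gigabytes = 10 * 10**6                 # 10 PB in (decimal) GB
    monthly = gigabytes * 0.023            # assumed $/GB-month list price
    print(f"${monthly:,.0f} per month")    # ~$230,000/month, storage alone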
What really matters is not the infrastructure. Infrastructure, in the end, is CPU, network, and drives. It's a commodity. It's really the software stack: you want to make sure that it's containerized and easy to deploy and roll out updates to; you have to learn the Facebook-Google style of running a SaaS business. That change is coming.

Corey: It's a matter of time and it's a matter of inevitability. Now, nothing ever stays the same. Everything always inherently changes in the full sweep of things, but I'm pretty happy with where I see the industry going these days. I want to start seeing a little bit less centralization around one or two big companies, but I am confident that we're starting to see an awareness of doing these things for the right reasons more broadly permeating.

AB: Right. Like, competition is always great for customers. They get to benefit from it. So, decentralization is a path to—like, commoditizing the infrastructure. I think the bigger picture for me, what I'm particularly happy about, is that for a long time we carried industry baggage in the infrastructure space. If no one wants to change, no one wants to rewrite applications. As part of the equation, we carried, like, the POSIX baggage, like SAN and NAS. You can't even do [unintelligible 00:32:48] as a service, NFS as a service. It's too much baggage. All of that is getting thrown out. Like, the cloud players have helped the customers start with a clean slate. I think, to me, that's the biggest advantage. And now that we have a clean slate, we can go on a whole new evolution of the stack, keeping it simpler, and everyone can benefit from this change.

Corey: Before we wind up calling this an episode, I do have one last question for you. As I mentioned at the start, you're very much open-source, as in legitimate open-source, which means that anyone who wants to can grab an implementation and start running it. How do you, I guess, make peace with the fact that the majority of your user base is not paying you? And I guess, how do you get people to decide, “You know what? We like the cut of his jib. Let's give him some money.”

AB: Mm-hm. Yeah, if I looked at it that way, right—I have both the [unintelligible 00:33:38], right, the open-source side as well as the business. But I don't see them as conflicting. If I run it as a charity, right—like, I take donations: if you love the product, here is the donation box—then that doesn't work at all, right? I shouldn't take investor money then, and I shouldn't have a team, because I have a job to pay their bills, too. But I actually find open-source to be incredibly beneficial. For me, it's about delivering value to the customer. If you pay me $5, I ought to make you feel $50 worth of value. The same software you would buy from a proprietary vendor—why would—if I'm a customer, given the same software, equal in functionality, where one is proprietary, I would actually prefer the open-source one and pay even more. But why are customers really paying me now, and what's our view on open-source? I'm actually the free-software guy. Free software and open-source are actually not exactly equal, right? We are the purest of the open-source community, and we have strong views on what open-source means, right? That's why we call it free software. And free here means freedom, right? Free does not mean gratis, free of cost. It's actually about freedom, and I deeply care about it. For me, it's a philosophy and it's a way of life.
That's why I don't believe in open-core and other such models; holding back—giving crippleware—is not open-source, right? “I give you some freedom but not all”—it breaks the spirit. So, MinIO is a hundred percent open-source, but it's open-source for the open-source community. We did not take some community-developed code and then add commercial support on top. We built the product, we believed in open-source, we still believe, and we will always believe. Because of that, we open-sourced our work. And it's open-source for the open-source community. And as you build applications on it—like, with the AGPL license, the derivative works have to be compatible with the AGPL, because we are the creator. If you cannot open-source your application's derivative works, you can buy a commercial license from us. We are the creator; we can give you a dual license. That's how the business model works. That way, the open-source community completely benefits. And it's about software freedom. There are customers for whom open-source is a good thing, and they want to pay because it's open-source. There are some customers who want to pay because they can't open-source their applications and derivative works, so they pay. It's a happy medium; that way I actually find open-source to be incredibly beneficial. Open-source gave us that trust—like, more than adoption rate. It's not just free to download and use. More than that, it's the customers that matter, the community that matters, because they can see the code and they can see everything we did. It's not “because I said so”—marketing and sales, and you believe whatever they say. You download the product, experience it, and fall in love with it, and then, when it becomes an important part of your business, that's when they engage with us—because then they talk about license compatibility and data loss or a data breach; all of that becomes important. Open-source isn't—I don't see it as conflicting with business. It actually is incredibly helpful. And customers see that value in the end.

Corey: I really want to thank you for being so generous with your time. If people want to learn more, where should they go?

AB: I was on Twitter, and now I think I'm spending more time on, maybe, LinkedIn. I think they can send me a request and then we can chat. And I'm always, like, spending time with other entrepreneurs, architects, and engineers, sharing what I learned, what I know, and learning from them. There is also a [community open channel 00:37:04]. And just send me a mail at ab@min.io; I'm always interested in talking to our user base.

Corey: And we will, of course, put links to that in the [show notes 00:37:12]. Thank you so much for your time. I appreciate it.

AB: It's wonderful to be here.

Corey: AB Periasamy, CEO and co-founder of MinIO. I'm Cloud Economist Corey Quinn, and this has been a promoted guest episode of Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice that presumably will also include an angry, loud comment that we can access from anywhere because of shared APIs.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS.
We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
An airhacks.fm conversation with Kelsey Hightower (@kelseyhightower) about: HP laptop and playing Age of Empires, programming calculators with TI-BASIC, playing Mario on NES, enjoying Metroid on NES, working at a Google datacenter as a contractor, bash is a programming language, working for a financial institution, modernising COBOL with Java, rewriting COBOL to Python, learning Java and using JBoss, contributing to Python to make it better, venv (virtualenv) and pypy, using Puppet for configuration management, Python vs. Ruby, overengineering with Java, Java is lean now, creating the confd project, envsubst and Java, Cost Driven Architectures in the clouds, replacing Java with Go, starting at CoreOS, etcd as coordinator, implementation of RAFT, RAFT and cluster membership, contributing to Packr and Terraform, docker is written in Go, RAFT is understandable Paxos, RAFT did not consider bootstrapping, Apache ZooKeeper is used for coordination, Apache BookKeeper, CoreOS fleet, rkt vs. docker, Salt configuration management, the kubernetes pod, the status field in kubernetes, Google Service Weaver, Google App Engine, checkout episode: "#153 Java, Serverless, Google App Engine, gVisor, Kubernetes", writing modular code is important, monoliths and microservices, rust is leaking details, Kubernetes The Hard Way the step by step guide, Kubernetes Autopilot Kelsey Hightower on twitter: @kelseyhightower
An airhacks.fm conversation with Heinz Kabutz (@heinzkabutz) about: Heinz previously on airhacks.fm "#215 Karatsuba, Megamorphic Call-sites, Deadlocks and a bit of Loom", a contribution to the JDK, 2022 in review, Nicolai Parlog on airhacks.fm "#206 Java 19: Millions of Threads in No Time", newsletter: Contributing BigInteger.parallelMultiply() to OpenJDK, The Java Module System book by Nicolai Parlog, JEP 192: String Deduplication in G1, String.intern, G1 and deduplication, JDK Mission Control, xdoclet for Java EE deployment, destroying G1 with a LinkedList and millions of entries, Java Records as data transporters, interfaces as factories, Teardown of ArrayBlockingQueue, WeakReferences and ArrayBlockingQueue, ExecutorService in Java 19 is AutoCloseable, Java iterators and memory leaks, Weak references in Swing, Real World Visitor with Pattern Matching for instanceof in AWS CDK, JSR 356 - Java API for WebSocket Eclipse Tyrus, JEP 238: Multi-Release JAR Files, Create a Custom, Right-Sized JVM with jlink, streaming events with JEP 328: Flight Recorder, var for everything, the new Project Coin and private interface methods, System.out.printf is working, jshell for javadoc, JVM logging, System.Logger and java.util.logging, System.Logger--the minimalistic logging interface in Java 9, Serialization Filtering, What Do WebLogic, WebSphere, JBoss, Jenkins, OpenNMS, and Your Application Have in Common? This Vulnerability Heinz Kabutz on twitter: @heinzkabutz
When the cat's away, the mice will play. JBoss was unable to record, so sports are discussed. Nico is running trivia with a whole new energy. We finally draft disgusting and/or bad habits.
An airhacks.fm conversation with Daniel Lipp (@dynamic_123) about: starting to program a Schneider CPC in the store, Basic and Logo, the first floppy disk to save the work, writing a Senso game, Mandelbrot calculations locked the computer for days, wiring computers on vacations, finding hidden files of Werner the German rocker game, Logo looks like assembly, starting physics and learning Turbo Pascal, from Basic to Visual Age SmallTalk, math formulas as code, memory leaks in C++, SmallTalk solved memory leaks, SmallTalk over Java, migrating from SmallTalk to Java, the elegance of SmallTalk, overriding a non-existing method in SmallTalk, Visual Age for SmallTalk over Visual Age for Java, the non-extendible Java currency class, recompiling the java.util.Currency class, writing a Java persistence layer, modernising with Java EE 5, writing Eclipse RAP clients, it is hard to maintain the spirit in fast growing companies, starting at an open source CMS startup, migrating to openshift and containers, migrating microservices from JBoss to Quarkus, saving memory and CPU with Quarkus, saving money with quarkus, migrating from Java EE to Quarkus with minor code adjustments, the same old, serverless, architecture, Daniel Lipp on twitter: @dynamic_123 and Instagram: dynamic_dli
An airhacks.fm conversation with Alejandro Pablo Revilla (@apr) about: Commodore 64, Morse code and RTTY, long distance radio, a signal goes around the world, programming low level assembler, the 6510 assembly, increasing a counter in ROM as copy protection, Commodore 128k ran on z80, dBASE runs on CPM and z80, starting with clipper, migrating from Clipper to Java, using Apache POI to access Excel, spending thousands of dollars per month for telephone lines, running on BBS networks, using UUCP, cts.com provided UUCP services, from Borland Turbo C to running Lattice C, unix and minix, xinu, Xenix, qnx and VMS, founding the compuservice company inspired by BIX, starting the jPOS Software company, starting JavaPC, green threads and Project Loom, using Java Blackdown by Johan Vos, checkout episode "#6 Mobile Java", the Orion Application Server became OC4J, EJB 1.0 relied on Java serialization for configuration, XML deployment descriptors were introduced with EJB 1.1, writing an own application launcher inspired by JBoss, writing a JMX micro-kernel, QSP v2 was called Q2, Alejandro's project / company: jPOS, Alejandro Pablo Revilla on twitter: @apr
An airhacks.fm conversation with Prof. dr. Matjaz Juric (@matjazbj) about: checkout past episodes with Prof. dr. Matjaz Juric "#158 Kubernetes, KumuluzEE, MicroProfile and Clouds", "#151 Modularization, Monoliths, Micro Services, Clouds, Functions and Kubernetes", "#136 From ZX Spectrum over Clouds To Winning the Java Duke's Choice Award", the Kumuluz Digital Platform, the omni-channel architecture, the KumuluzCrowdsensing platform, EV charging, battery State of Charge estimation, project edison winci runs on KumuluzEE and MicroProfile, using service discovery for locating microservices, service discovery implements client-side load balancing, KumuluzAPI is an extension of the kubernetes ingress controller, decentralising an API Gateway with "smart proxies", API gateway fault tolerance pattern integration, MicroProfile API gateway integration, canary releases and A/B Deployments, JBoss smart proxies and MicroProfile JAX-RS client, the costs of cloud-agnostic deployments, on-premise Kubernetes is a must, going serverless with Kumuluz Functions, cost-driven development in the clouds, kubernetes is expensive to operate, kubernetes clusters are often over-provisioned, solving problems differently with event-driven approach, Prof. dr. Matjaz Juric on twitter: @matjazbj and at University of Ljubljana
An airhacks.fm conversation with Jürgen Albert (@JrgenAlbert6) about: C64 and Logo, 286, 486 then Pentium, starting with PHP, learning Java 1.4 and Java 5, studying in Jena - the optical valley, Intershop and Stephan Schambach, Intershop was written in Perl, writing eBay connectors with Java, Java Server Pages, Tomcat and Java Data Objects (JDO), Java Persistence API JPA, writing a J2ME app store, Using TriActive JDO TJDO, using Geronimo Application Server, working with Java EE, JBoss and Glassfish, starting the Data In Motion company in 2010, building a statistics tool for Bundesamt fuer Risikobewertung, creating smartcove the product search and price comparison engine, building video supported therapy software with Java, parsing video streams with Java, Eclipse RCP, code reuse with OSGi and Gyrex, GlassFish and OSGi, modeling with Eclipse Modeling Framework (EMF), Eclipse GMF and openArchitectureWare, the IDE wars, the meetup.com/airhacks message, modular systems in long term projects, microservices vs. JARs, versioning bundles and plugins, package versioning, the chair of the Eclipse OSGi Working Group, Sun started with OSGi, declarative OSGi services, the overlap between OSGi and the Eclipse Plugin Development Environment, "#79 Back to Shared Deployments with Romain Manni-Bucau", Jürgen Albert on twitter: @JrgenAlbert6, Juergen's company: Data In Motion
An airhacks.fm conversation with Theresa Nguyen (@RockClimberT) about: Apple II ES with blue screen and yellow font, 3 hours to install an OS on a 386 machine, enjoying minesweeper and Tetris, playing frogger on a flashback Atari, learning Pascal at high school, learning how the brain works, ambition, motivation, attitude and dedication, computers had better keyboards than typewriters, enjoying Word Perfect, humans over computers, joining Caucho, Caucho is the home of the Resin application server, meeting at theserverside conference, Jakarta EE, TomEE and MicroProfile, Sun Microsystems' spirit at Microsoft, the importance of opensource software, standardization is freedom of choice, Microsoft at JavaOne, joining Microsoft in 2018, enabling JBoss EAP on Azure, the official Maven archetype from Microsoft, a quarkus JAX-RS resource as a Java function, JBoss EAP runs on Azure App Service, Azure Service Bus is JMS compliant, the episode 111 about Azure and JMS, JBoss vs. Wildfly on Azure, WildFly on virtual machines and scale sets, serverless JBoss on Azure, Java for Kubernetes, the j4k conference, Theresa Nguyen on twitter: @RockClimberT
An airhacks.fm conversation with David Blevins (@dblevins) about: Code Generation with bash, bash is your best friend, scripting as documentation, learn first, then automate, an opportunity to work on an EJB container, working on EJBOSS, working with the great Richard Monson-Haefel, co-founding openEJB with Richard, Bluestone and GemStone servers, exolab was an incubator, openJMS, openEJB and castor, working with Apple to integrate openEJB with Apple's WebObjects, openEJB on Apple's WebObjects box, from experience to cash, the concept of isolated containers in openEJB, Dain Sundstrom wrote CMP for JBoss, Rickard Öberg started at openEJB for two weeks, creating Geronimo in 2003 as a competitor to JBoss, announcing Geronimo at theserverside.com, Geronimo was over-engineered, a good idea at a bad time is a bad idea, Convention over Configuration vs. explicit configuration, openEJB's Java Serialization was faster than WebLogic's T3, Geronimo's configuration was not portable, joining gluecode, gluecode was sold to IBM, Jason van Zyl was the creator of Maven, Jason van Zyl created Sonatype, jelly - the executable XML, the Maven 2 rollout was tested with openEJB, switching from Codehaus to Apache, 600 people were working on WebSphere, Dan Allen was working on arquillian, Arquillian used openEJB internally, JBoss 7 became Wildfly, creating TomEE after JavaOne 2010, TomEE stopped consulting, tomitribe provides support for TomEE, Tomcat, ActiveMQ, TomEE 9 starts in 2 seconds, TomEE passes the TCK with 64MB RAM, TomEE lost access to the TCK in 2013 before Java EE 7, TomEE got access in December 2019, TomEE is working on MicroProfile 4.0, TomEE uses Apache Johnzon JSON-P, TomEE uses Apache projects to implement the Jakarta EE and MicroProfile specifications, TomEE uses BeanValidation for JWT validation, using BeanValidation for authorization with custom data in JWT, Tribestream - the API Gateway, David Blevins on twitter: @dblevins and David's company: tomitribe
Show support appreciated: https://donations.cryptovoices.com Show Sponsor: https://hodlhodl.com/join/cryptovoices Matthew and Alec interview Amanda Cavaleri, investor and entrepreneur. We discuss Bitcoin surrounding the current language of the pending infrastructure bill in DC, regulation challenges, many pertinent privacy issues today, and more. Amanda Cavaleri runs APRÉS.TECH, a Wyoming-based Bitcoin startup, fund, and mining advisory. She is also COO of Pearl Snap Capital, an investment management firm currently focused on public structured equities in the technology sector. Ms. Cavaleri has worked with global and domestic Digital Asset Hedge Funds, Venture Equity Funds, and a Bitcoin Mutual Fund. Previously, she was also a startup executive in privacy and cybersecurity alongside Dr. David Chaum, cryptography pioneer and founder of DigiCash, and led global R&D and investor relations for Dr. Marc Fleury, founder of JBoss and open source development pioneer. Ms. Cavaleri holds a Master of Science in Technology Commercialization from the McCombs Business School of the University of Texas at Austin. Previously, she was an entrepreneur in health care, was an AARP Innovation Fellow in Washington, D.C., and held the role of Thought Leader with Carnegie Mellon University & UPMC's Quality of Life Technology Center. Ms. Cavaleri has guest lectured at domestic and international graduate schools about emerging technology as well as taught foreign diplomats about the geopolitical digital currency and privacy landscape via the U.S. Department of State's IVLP Exchange Program. Listen on to learn more. Links for more info: https://twitter.com/Amanda_Cavaleri https://www.apres.tech/ https://pearlsnapcap.com/about Show Sponsor: https://hodlhodl.com/join/cryptovoices Hosts: Matthew Mežinskis, Michel, Alec Harris Music: New Friend Music newfriendmusic.com/ Site: cryptovoices.com/ Podcast & information Bitcoin, privacy, cryptoeconomics & liberty Thanks for listening! Show content is not investment advice in any way.
[link to mp3 file] Hello and welcome to episode 414 of רברס עם פלטפורמה (Reversim) - today's date is July 18th, 2021, and we're in the middle of wave number who-knows-what, and we don't even know if this is the middle of it... It's 21:00 in the evening, Yokneam Illit time, and Yonatan joins as co-host - hi Yonatan! Good to have you here with us again. And today we have with us Liran from Rookout - welcome, how are you?

(Liran) Excellent - a bit late, but it's a great hour for a podcast.

(Ran) For Tel Avivians it's almost morning... So, tell us a bit about yourself and the company - our topic for today, in short, is the interesting challenges behind things that are Cloud-Native, which is also what your company works on. So, in two words - who are you? What do you do? What did you do before this?

(Liran) So I'm Liran Haimovitch, co-founder and CTO at Rookout. Before joining Rookout I spent about ten years at the Prime Minister's Office, a graduate of the ARAM course, for those who follow these things. Basically, five years ago I decided to found a startup - and a few months later Rookout came to life... Rookout is a company that provides tools for developers that let them "dive" into their code, debug it, and understand what it's doing - even when it's far away. And I don't think there's anything more "far away" today than Cloud-Native, and the cloud in general.

(Ran) Yes... so I assume quite a few of our listeners have heard the term "Cloud-Native" [there's a Carburetor episode], there's even an actual organization - the CNCF, the Cloud Native Computing Foundation - and I assume the term is on everyone's lips. And yet - everyone hears it and probably understands something different, means something different. In your view - what does Cloud-Native mean?

(Liran) Wow, "in my view"... It's a bit hard for me to claim it's my own definition, but the way I see it: 10-15 years ago the cloud world began, with AWS's S3 and with Google App Engine and technologies like that. And at first, the pattern was more of a "Lift & Shift" - let's take the applications we wrote for data centers and run them in the cloud. And what we understood, within a few years, is that we weren't getting the most out of what the cloud can give us, the most the cloud has to offer. There are a lot of advantages, which we could talk about for hours - I won't get into that now, because then we'd spend the episode only on the advantages of the cloud... And essentially - Cloud-Native is a collection of technologies, a collection of concepts, a collection of working methods - meant to let us build our applications differently, in a way that better leverages the unique advantages of the cloud, its elasticity, the scale it enables - and so to build bigger, better, more modern applications.

(Ran) So if I look, for example - you talked about Lift & Shift; say I had some backend service running on, say, 50 machines - I can take those same 50 machines and just bring them up on one of the cloud providers - and that's what we call Lift & Shift. It will probably cost me a lot more... because an on-demand machine costs more than a machine that's already mine, if I bought it. The significant advantage of the cloud is that it lets you not hold all 50 at any given moment, for example...

(Liran) The very fact that you look at it as "50 machines" - that's exactly the data-center mindset... You plan ahead - "to meet my targets, I need to define 50 machines" - I know how much CPU, how much RAM, how much hard disk each of them will have, I know what the role of each one will be, and I think in this term of "50 machines". Today, in the cloud, we can often bring up a machine in something like 15 to 60 seconds; sometimes a few minutes, depending on its role. And containers we can sometimes bring up in mere seconds. And that lets us think in a different world - it lets us scale up much faster, scale down much faster. We can plan our capacity at the level of minutes ahead - not years or months.

(Ran) So those are some of the advantages of the cloud - and we did say we wouldn't spend the whole podcast talking about the advantages, though these certainly are some of them. Maybe worth noting that it wasn't always like this - when S3 started, or when EC2 started, things weren't necessarily like this. That is - bringing up a machine could take long minutes, storage prices were different...
But over time that definitely changed, and new technologies were born - Lambda for example, and there are other examples - that essentially enable a paradigm shift, a change of method. But - with every good thing come a few challenges... So there are quite a few challenges in adopting Cloud-Native, and that's really what we want to talk about. Which interesting challenges do you think we should start with?

(Liran) So I'll be a bit selfish and look at it from my own perspective - in the end, most of my career, most of my background was as a software developer, a software engineer, and I'll try to look at it from that perspective - where software developers "suffer" in the cloud world. And one of the things that happens to software developers is that they suddenly lose control - if we used to run our application with some Java Enterprise server or some WSGI server in Python, something like that which is easy to bring up locally, suddenly we're moving to technologies that were built for the cloud. It's super easy and great and amazing to run them in the cloud - but it can suddenly be really hard to run them on our laptop... Whether it's Serverless, which doesn't really exist on our laptop; whether it's Kubernetes, which is very, very big and expensive in resources for local, small-scale work - it's amazing in the cloud and not so great on our machines; whether it's all kinds of dependencies on the cloud; whether it's all kinds of AWS services - SQS and SNS and managed databases. Suddenly, with all these things, when you start working locally - it's challenging. Either you work against the remote cloud every time, with connectivity and latency problems and with less transparency, or you bring up all kinds of simulators on your machine, which are much lower quality and much less faithful... Naturally, the more you work with simulators and less with the "real environment", the more your results will change the moment you move from Dev to Staging or to Production.

(Ran) So we're essentially talking about the developer experience, which has suffered... By the way, worth mentioning that two episodes ago, I think, we talked about Serverless with Yinon from Via, and this topic came up there too - and I think the problem is pretty clear: anyone who has ever developed Lambda functions or their equivalents understands it - it runs in the cloud, but running it on your own computer... maybe you'll manage, but it won't be the same environment. And of course I mentioned all the surrounding services - if you need some SQS, or S3, or something else, you either use the remote service, and then you still have a poor experience because it's slow - or you use some local simulation, but then it's not certain the simulation really behaves the same... Maybe for the basics it does, but permissions and things like that don't always work properly - and then you get your wallop in Production. OK, so the challenge of the development experience... By the way, Yonatan - you have workloads both in the cloud and outside it [episode 382: Carburetor 27 - k8s and multi-cloud]; what approach did you take regarding developer experience?

(Yonatan) So with us, most of the applications and most of the services run in a cloud - but one that is ours, that is, a private cloud rather than a public cloud. As for debugging - maybe we'll talk about it later; there's also a question of approach, I think, of whether you even want to debug in Production and how you do it - but as for the development environment itself, we work remote - meaning you run the service you want to debug locally, and all the other services you depend on, databases and so on, are remote.

(Ran) And if they need you, by the way? Is there some tunnel so they can call you too?

(Yonatan) If they need you - no... But if you need two services that talk to each other, you can bring both up locally, which is relatively simple. But you won't suddenly be able to receive, I don't know... Kafka messages from another service.

(Liran) Kafka messages, or webhooks, say, that push things in... And also - bringing up one service, especially a service you know and work on regularly, is still easy - the moment you add the second service it's already more complex, especially if it's a service you know less well, or a friend's service. And it tends to get exponentially harder as the number goes up...
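(A concrete picture of the "one service local, everything else remote" setup Yonatan describes above, using Telepresence, the open-source tool Liran mentions a little further on, might look roughly like this. This is a sketch assuming Telepresence 2's CLI; the service name is made up:)

    telepresence connect
    # the laptop can now resolve and reach in-cluster services directly

    telepresence intercept orders-service --port 8080:http
    # traffic addressed to orders-service in the cluster is rerouted to
    # localhost:8080, while the local process can still call the remote
    # databases, queues, and neighboring services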
(Liran) When we talk about a "correct" Cloud-Native design, when you think about it... if you do encapsulation properly, you always work on one service, even with unit testing - and everything's wonderful. But the moment that abstraction starts to break, the moment you need two or three microservices, it becomes much more messy [not that one] and "nagging".

(Ran) Also, let's not forget that I assume many customers don't start a product from scratch... Many customers are maybe in slightly better shape than Lift & Shift, but still, they don't start their whole architecture from scratch. So you make a few adaptations for the cloud, to really enjoy its benefits, but still not everything is so clean and clear - and you don't always use the right abstraction, and then it gets more complex. So how do you get out of this mess? ...

(Liran) So indeed, as Yonatan said, there's the... let's call it the "optimal" approach, which says "I bring up my code on my machine, and everything else will be remote, in the cloud". That works part of the time... It mostly works when the scope is well defined, when the "relationship" is simple, when you can test well with unit testing, and mostly when the services in the cloud don't need me - when I don't need to receive things from Kafka, when I don't need to receive things via webhooks, when no tunneling to me needs to be opened. It can work very well - and then I really can work with all my traditional tools. By the way - for those interested, Kubernetes has a tool called Telepresence: it's open source that helps do this. It's still not always the easiest and simplest thing - but it can help a bit with port-forwarding and other "shticks".

(Ran) By the way - it's very interesting how it works at the networking level, but that's for another podcast... There are a lot of tricks and shticks there... Well, all of Kubernetes does shticks at the networking level, but Telepresence specifically as well...

(Liran) One big shtick, Kubernetes... And then you have the option of the two other extremes: one is to really bring up the whole environment locally - which gets harder and harder the more complex the environment is, but if you have, say, two, three, five, maybe ten microservices, you can still manage. I will say it's often a headache - you often find yourself maintaining two sets of deployments, say Kubernetes remotely and Docker-Compose locally. Even if you do Kubernetes locally - the load balancer will still probably differ between local and remote, the databases may differ, operators... all kinds of providers that exist in the cloud environment won't necessarily be available locally, and you'll find yourself maintaining two configurations. And the third option is to take everything to the cloud - to say "I bring up my whole environment in the cloud", and then every change to the environment means some deployment process and CI/CD and build. There are tools here and there, like Skaffold, like Tilt, like Garden, that optimize this and try to make it as easy and fast as possible. But it's still a remote server you're developing on, that you monitor from afar - and you don't have the same ability to "dive into your code" and understand it the way you can locally.

(Ran) I think that... (a) there are all kinds of different combinations, but categorically there's also the option of a dev-container - developing on a remote container, maybe inside the data center, where all you work with locally is some frontend, some IDE that talks to it, while the codebase itself and the compilation and everything are remote. But here, by the way, I have to say that beyond the user experience - we talked about latency, we talked about developer experience - beyond that, there's also this matter of "making a mess"... you're polluting Production, you're using Production data, you can accidentally "drink" Kafka messages you weren't supposed to drink, or write to a database you weren't supposed to write to - and that's not a developer-experience problem, that's a problem of the correctness of Production...

(Yonatan) Or of isolation...
(Ran) Isolation! That's the word I was looking for!

(Liran) In many companies it is simply out of the question for a developer to get anywhere near Production - and then yes, how do you actually do isolation? Do you run in the same cluster? In a separate cluster? On the same account or a different one? Plenty of challenges... This whole world of Remote Development is super interesting - there's a startup called Gitpod, if I remember correctly, working on it, and GitHub has now released a variation of vscode that is purely hosted. But from what I see and read, it's not there yet. It's super cool and super promising, but I wouldn't recommend that anyone build their development environments on it...

(Ran) I already know a few who do...

(Liran) Really?

(Ran) ...not big companies... but yes.

(Yonatan) If I'm not mistaken, even at Intel, 15 years ago, people worked over VNC on remote servers - that's how they worked.

(Ran) Could be - but did the developers like it?

(Yonatan) Good question...

(Liran) There are now all kinds of web-first IDEs designed to be hosted and meant to give a very good experience, but again the problem is how far they can go in building a full development environment. It's not just running the code - it's running it, debugging it, providing the whole envelope you are used to and love from your laptop.

(Ran) And in any case, it doesn't solve the refresh problem we discussed earlier... it may even make it worse, in the sense that it is now very easy to run things inside the Production datacenter - so why not do it all the time?... And there you go: you forgot some service running, and suddenly everything gets stuck at night... So what solutions do you actually see people adopting, de facto, in the field?

(Liran) The truth is we see people adopting a bit of everything, some combination of these things. You could say that none of these solutions is good, or you could say that each of them is good for something specific - but none of them answers everyone all the time. In the end, every company we work with, every company I talk to, finds some combination. Naturally, the more you can work locally the easier it is, and it's something developers get used to. But often it doesn't work - and where it doesn't, you move to the more complex options: running everything containerized locally, running everything in the cloud; it very much depends on the use case. We talked about the incoming-data use case - "I want to bring up a webhook or an API and see what happens when it is called" - there I will probably have to bring it up in the cloud. On the other hand, if I mostly pull data from some database, there is a good chance I can run the code locally as some batch process and debug it as I go - and my life will be much nicer. And when you do run the code remotely, you effectively fall back on Production tools... You can no longer work with the debugger the way you are used to, and you can't edit code on the fly and see the result. You work much more the way you would troubleshoot Production - with observability tools, with logs, with metrics, with tracing - and try to use those to understand what is happening with the code, because the deployments are much slower and much clumsier.
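As one concrete example of the instrumentation-first style Liran is describing - the episode names no specific metrics library, so Micrometer here is an assumption, and the metric, class, and method names are hypothetical - counting a failure case looks roughly like this:

```java
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;

public class CheckoutService {

    private final Counter failedCharges;

    public CheckoutService(MeterRegistry registry) {
        // Registered once at startup; cheap to increment from a hot path.
        this.failedCharges = Counter.builder("checkout.charge.failures")
                .description("Charges rejected by the payment provider")
                .tag("service", "checkout")
                .register(registry);
    }

    public void charge(String orderId) {
        try {
            // ... call the payment provider for orderId (elided) ...
        } catch (RuntimeException e) {
            failedCharges.increment(); // visible on a dashboard, no debugger needed
            throw e;
        }
    }
}
```

The registry would come from whatever metrics setup the application already has (for tests, Micrometer's SimpleMeterRegistry works). The trade-off the speakers keep circling back to is that each such counter has to be decided on, coded, and deployed in advance.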
(Ran) Yes, and that, it seems to me, brings us to the next challenge: complexity. If in the past we talked about the scenario where you had some AppServer, and inside it terribly complicated logic, all of it still lived in a single server, or maybe spread over one or two - say an AppServer and a database, but not much more. Today, logic is spread across multiple servers, maybe multiple functions, queues, databases, hooks, and plenty of other contraptions... some novel, some less so - but sometimes you discover that a single user's HTTP request easily passes through ten or fifteen different things, and not all of them are necessarily yours... Some may belong to the cloud provider, some sit behind some hook - say you write a file and it triggers a hook, and so on. It has become complex... How do you handle this complexity? How do you understand it? How do you understand when there are problems?

(Liran) It has become complex, very complex... In this world, though, the observability tools we have today are actually very, very good. I don't know whether folks in the audience have come across the term "Observability" - it refers to the ability to understand what is happening in a system from the outside: whether it is healthy or not, and maybe a little of the why... There is a whole collection of such tools today, from the traditional logging world we are used to, through the metrics world - Prometheus and friends - up to the APM level: somewhat heavier tools that let you dive deeper and actually follow, at a fairly basic resolution, the requests - HTTP and otherwise - across the system, and see which services take part. All of these give us initial feedback on how many errors the system produces and how long things take - and maybe point us roughly at which component is causing trouble, which component is struggling.

(Ran) But again, I'll push back - we talked about difficulties... If once you could walk over to your JBoss... Yonatan, I know you remember this too...

(Yonatan) WebSphere! [also mentioned in 412 Serverless at Via]

(Ran) WebSphere... you could go to it and set a breakpoint... say "okay, now I'll send a request, and we'll see what happens at the breakpoint". Now - in the best case you may indeed manage to connect and set a breakpoint, and not even always; but either way you won't see the whole stack... It will be very hard to understand what state got you there, and again - technically, creating breakpoints isn't always possible.

(Liran) That really is the big drawback of using observability tools for development purposes. Observability tools are very rigid by nature: you have to define up front what you want, you have to put the logs into the code, put the metrics into the code... Tracing, by the way - the APM tools can produce an initial overview by themselves, out of the box, but beyond that basic level you have to add every point you want monitored yourself. And every time you want to change something, that means changing code and redeploying... Now, when it's your own code, especially a component you are working on right now, that's not so bad - certainly in a Dev environment: build the Java, transpile the JavaScript, build the container, deploy it... five to twenty minutes and you are set. But it is much more painful when it is not your code - whether it's the microservice next door, whose build, processes, and deployment you know less well, or open-source code: opening that open-source code to add a log, figuring out how to rebuild the package and then point your dependencies at the package you built... that can easily turn into half a day or more, and it's pretty frustrating.

(Yonatan) It may be frustrating - but there is another side to it: that kind of work - adding a metric for certain cases, emitting an event - is hard, but it's also an asset. It stays with you. Whereas debugging... you debug, and afterwards it's gone - in the good case it's gone and leaves no state behind - and all the knowledge you gathered is no longer there. I'm not saying debugging isn't needed at all, but for me, reaching for a debugger is a bit of a "last resort": it means something was probably missing beforehand. Sometimes there really are things you can't do without it - but it's a somewhat different approach.

(Liran) I'm very much in favor of observability, I love observability - it's about one of the most important things there is. It's worth investing the time to build excellent observability into a product, all the more so in Production. It's super important that the developers who wrote the code take ownership of it - that they know what's happening with it, that it delivers value to end customers - and the only way to do that is with logs, metrics, and observability. The problem is that observability is largely a game of trial and error - you don't always know up front which log matters most... The number of times in my career I've seen a developer add a log, insisting "this is super, super important!" - except it fires 10,000 times a second and takes the system down, or at least blows up the Elastic bill...

(Ran) It must be a very important system, apparently, if it gets called 10,000 times a second...

(Yonatan) ...or maybe you suddenly discover it wasn't as important as you thought...

(Liran) Or you discover that some metric... you want to send a metric, and you find out you sent it in hours instead of seconds, and now the ingest system can't absorb it... In the end, I think the best observability we added came out of incidents: at Rookout, something didn't work, we investigated and investigated until we understood why - and we also understood how to improve our observability so that next time it wouldn't happen, or we would know about it faster and more easily. It's simply an iterative process...

(Ran) You can probably say that about everything in life - you do your best work only after you've made mistakes [regards to Douglas] - but yes, I fully identify with this phenomenon.

(Liran) And in this context, my complaint about the existing observability tools is simply the slowness... the clunkiness... Once I already know what I want, I'll commit it, deploy it, and all will be well, certainly with a quality CI/CD - that's the easy hour or two. The problem is the trial-and-error process, where it sometimes takes me ten or even twenty attempts to figure out which metric I need, which log I need. And that - this is what we at Rookout believe - is much more fun, easy, and useful to do in quick iterations: point at a line - get a log from it; point at a line - get a metric from it; see that it really is what you wanted to see - and then cement it with other tools so that it is collected permanently and the knowledge is preserved over time.

(Yonatan) So is this mainly tooling for understanding what happens in the system, or also for changing behavior - changing logs, changing if-statements?...

(Liran) Our perspective at Rookout is that we want to make this observability world dynamic: you, as a developer, can point at any line in your code and say "I want to know what happens here, I want to know how I got here" - both at the stack-trace level and at the tracing level: where did this request pass before? What are the values of my variables? And also to use all that a bit more cleverly - "generate a new metric here", "generate a new log here". And we understand there are extra complications on top of Production. For example: "I want to see what happens when this line of code is reached - but only for a specific customer". Or: "show me how my code copes when this customer sends a request here", "show me how my code copes when S3 returns me some error message".

(Ran) Conditional breakpoints, say... a variable where "when its value reaches X - stop".

(Liran) Yes - the technology we like to call "non-breaking breakpoints": you get an experience similar to a breakpoint, you are shown what the breakpoint would have shown - but your code is never actually stopped.
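Rookout injects this kind of probe into a running process through its agent, with no code change or redeploy; that dynamic part cannot be shown in a few lines. What can be sketched is the effect a "non-breaking breakpoint" produces - a conditional, rate-limited snapshot that never suspends the thread. The sketch below is a hand-rolled imitation, not the product's mechanism; SLF4J is assumed, and all names are hypothetical.

```java
import java.util.Arrays;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * Imitation of a "non-breaking breakpoint": when a condition holds,
 * log a snapshot of selected variables and the stack, without ever
 * suspending the thread.
 */
public final class Snapshots {

    private static final Logger log = LoggerFactory.getLogger(Snapshots.class);
    private static final AtomicLong lastEmitMillis = new AtomicLong();
    private static final long MIN_INTERVAL_MILLIS = 1_000; // at most one snapshot per second

    private Snapshots() {
    }

    public static void snapshot(boolean condition, String label, Map<String, Object> variables) {
        if (!condition) {
            return;
        }
        long now = System.currentTimeMillis();
        long last = lastEmitMillis.get();
        // Rate-limit so a hot path cannot flood the log (the "10,000 times a second" problem).
        if (now - last >= MIN_INTERVAL_MILLIS && lastEmitMillis.compareAndSet(last, now)) {
            log.info("snapshot [{}] vars={} stack={}", label, variables,
                    Arrays.toString(Thread.currentThread().getStackTrace()));
        }
    }
}
```

A call site might look like `Snapshots.snapshot("ACME".equals(customerId), "charge-path", Map.of("customerId", customerId))` - the "only for a specific customer" case from the conversation.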
(Yonatan) More and more servers these days - at least those that have to handle serious scale - are asynchronous, meaning things don't necessarily happen in order; the stack trace can look like hell. How do you deal with that? [you mentioned Intel - so, Out-of-Order Execution]

(Liran) We deal with it in several ways - the most significant is, broadly, following requests. We let you follow the distributed-tracing information, part of which we know how to generate ourselves and part of which you can get from the various APM tools you already use - even open-source ones like OpenTracing or OpenTelemetry or OpenCensus, and the whole war they are fighting over standardization...

(Ran) Because it's all the same thing, isn't it?...

(Liran) Roughly... like every good standard. You really have to think about both: seeing the classic stack trace, of "where did your code come from?", but also the logical stack trace of the spans and the traces, of "where did this request come from?", "how did it enter the system?", and "where is it at this stage, right now?"
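Since the conversation lands on the OpenTelemetry family, here is a minimal sketch of that "logical stack trace" idea using the OpenTelemetry Java API: each logical step runs inside a span, and the current span's trace ID is the handle that ties a local stack trace to the request's path across services. The instrumentation and operation names are hypothetical, and a real deployment would also configure an exporter.

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

public class OrderHandler {

    private static final Tracer tracer =
            GlobalOpenTelemetry.getTracer("example.order-handler"); // hypothetical name

    void handleOrder(String orderId) {
        // One span per logical step; parenthood links it into the request's trace.
        Span span = tracer.spanBuilder("handle-order").startSpan();
        try (Scope ignored = span.makeCurrent()) {
            span.setAttribute("order.id", orderId);
            // The trace ID is the "logical stack trace" handle: every service the
            // request crossed exports its spans under the same ID.
            String traceId = Span.current().getSpanContext().getTraceId();
            System.out.println("processing order " + orderId + " in trace " + traceId);
            // ... business logic ...
        } finally {
            span.end();
        }
    }
}
```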
(Ran) Another challenge - and I'm sure we could keep talking about observability, but let's move on - another challenge I can think of: let's go back to the scenario where you had one web server and a database, and you would release versions to it. You know: version 5, then version 6, then version 7... maybe 7.1, maybe 7.2 - but fine, you know, and it's fairly clear to you what is running there. Today in Production - and this may not be unique to the cloud, but Cloud-Native made it easier - you have very many services and very many other components, each on a completely different version. I'm guessing that at Outbrain you release a great many versions a day...

(Yonatan) Right - and the same service doesn't always run a single version in Production either: sometimes you run A/B testing, if you run several flavours...

(Ran) Right... So at any given moment, say each service has one or two versions in Production - plus there are a few hundred different services - plus there are components that are not yours, which also occasionally get updates and other strange things... and it's very hard to get a coherent picture of "wait, so what is in Production right now? What code is deployed on it?"...

(Liran) Horribly hard... In fact, two or three years ago, when we had our first challenges - there was a product, we had started with customers, we had their feedback - we were shocked by how many customers struggled to understand what on earth was running in their Production... They pick a server, start setting breakpoints on it - and the breakpoints don't trigger... So we tell them: "guys, that's not the code version running in your Production", and they answer: "yes it is"... And after two hours of support it's: "guys, this is not the code running in your Production; the breakpoints aren't triggering because you are looking at a new version, and the version in Production is a week old" - or the other way around... One of the most significant things we saw in the product was the need to bring the code to the customers: not to rely on the developer at the other end to figure out which code is now where, but the moment they pick a server, or a service, or a deployment in Kubernetes, or whatever - to show them: "listen, this is what's running there right now". There's a good chance that alone finds the bug, because it isn't the version they thought at all... And if not, then once they start debugging, at least they see with their own eyes how the code that is actually there behaves - not how the code they think is there behaves...

(Ran) So how does it work technically? There's the famous story - I assume many know it - about the algo-trading company that accidentally left a server in their datacenter running the wrong version, lost their shirt, and went bankrupt...

(Liran) 400 million dollars...

(Ran) Yes... I don't remember the company's name [Knight, mentioned in the previous episode], but we'll surely find it for the references [touché...]. So how does it work technically? Does every version carry some signature, and you find that signature and thereby find the codebase?

(Liran) The truth is that we simply built a set of best practices - many of them can be found on our blog - for how to tag versions: at the CI level, pushing the Git commit; in the various Maven, Gradle, and MSBuild configurations, so that the hashes end up inside the artifacts; at the container level - we simply found a few files in that mysterious .git directory you have everywhere - take those three or four files, throw them into the image, and you can tell after the fact how the container was built. It's also important to say that as a company, as a matter of policy, we don't touch customers' source code - we don't transfer source code, we don't move source code around - so it's very important to us to find ways for customers to do this themselves, without that data ever passing through us.

(Ran) I see...
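A minimal sketch of the version-tagging practice Liran describes, under the assumption that the build embeds a git.properties file in the artifact (for example via the git-commit-id Maven plugin, one common way to do it; the property name below follows that plugin's defaults). The running process can then answer "what is actually deployed here?" by itself:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public final class BuildInfo {

    private BuildInfo() {
    }

    /** Returns the commit the running artifact was built from, or "unknown". */
    public static String commitId() {
        Properties props = new Properties();
        // git.properties is assumed to be generated at build time and packaged on the classpath.
        try (InputStream in = BuildInfo.class.getResourceAsStream("/git.properties")) {
            if (in == null) {
                return "unknown (no git.properties in artifact)";
            }
            props.load(in);
            return props.getProperty("git.commit.id.abbrev", "unknown");
        } catch (IOException e) {
            return "unknown (" + e.getMessage() + ")";
        }
    }

    public static void main(String[] args) {
        System.out.println("running commit: " + commitId());
    }
}
```

Logging this line at startup, or exposing it on a health endpoint, is enough to catch the "the version in Production is a week old" surprise before two hours of support.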
(Yonatan) Tell me - your company essentially sells products to engineers, right? You're the CTO, you manage engineers yourself... what do they say about your product?

(Liran) What do our engineers say?...

(Yonatan) As users...

(Liran) I think it's one of the most fun things - both as someone who recruits and manages people, and in general - to build a product where you understand what it does, you know the users, you see the enjoyment in their eyes; it's very satisfying. When the team meets customers and sees the developers on the customer side sitting in the middle of the night debugging, banging their heads against a problem - and then sees how Rookout helps them - it really lights up their eyes.

(Yonatan) Nothing like watching other people debug in the middle of the night...

(Liran) Just last weekend a customer called us: that night they had been woken up at two in the morning to debug something - there was a Production incident - and they solved it with us in 15 minutes. If you've already been woken up at two in the morning because the system broke and has to be fixed, at least let it take 15 minutes rather than keep you up until morning...

(Ran) Of course... lucky they don't pay you by the hour... Alright - so there are challenges, and I'm sure there are more, but we're getting close to the end. First of all, I'm sure anyone who has developed in a Cloud-Native environment identifies with at least some of them. Some are, to a degree, solved by the industry, and some are not - and I'm sure that as we solve them, new problems will appear... we'll have more work. But it's all very interesting, and these are super-relevant challenges for developers. Before our conversation, before we started recording, you told us you've started a podcast [!]

(Liran) Yes - last week we actually recorded the first two episodes of our podcast. It's going to be called The Production-First Mindset. I believe that within a week or two you'll start seeing its episodes on Spotify and Apple and all the other places you like to get your podcasts - you're welcome to listen to us there too.

(Ran) Excellent - good idea. Will it be in English?

(Liran) It will be in English - we interview a lot of folks from Israel, entrepreneurs like Ron Reiter and Ofir Ehrlich, as well as technologists from abroad like Steve Chin - very serious people who will share their own perspective on the challenges of Cloud-Native and, more generally, on what it takes to bring code to Production and what that means.

(Ran) Excellent - so Liran, thank you very much! It was fascinating, it was fun, thanks for coming. Happy listening, and many thanks to Ofer Purer for the transcription!
An airhacks.fm conversation with Mohamed Taman (@_tamanm) about: AMD PC in 1997 with 200 MHz hot AMD, exploring DOS and QuickBasic, drawing sceneries, photography as hobby, assembling PCs from parts, AS-400 and RPG, QBasic and C++ on Windows 3.11 and Windows 95, to shut down Windows you had to push Start, Windows Millennium Edition, equations in QBasic, starting with Java 1.1, the Sun Certified Java Programmer certification was hard to pass, impressed with Java, Java hides the low-level boilerplate for convenience, catching up with J2EE 1.4 and Java EE, building mazes with OpenGL and Java, working for Silicon Experts, starting with Sun Enterprise Server, later BEA WebLogic, recreating Struts from scratch, the problem with early EJB, working on JD Edwards, Oracle and Siebel integration, using ADF at Oracle, Sun Microsystems was acquired by Oracle, starting at eFinance, eFinance is private but founded by the government, starting a United Nations (UN) project for donations management, Java EE 7 with GlassFish was used as the stack, finding bugs in GlassFish, working with the latest versions in mission-critical projects, presenting at the JavaOne keynote, JBoss to Quarkus migration on OpenShift, "Java EE: Future Is Now, But Is Not Evenly Distributed Yet" at JDD, scaling with hardware, Mohamed Taman on twitter: @_tamanm
Veljko Krunic is an independent consultant and trainer specializing in data science, big data, and helping his clients get actionable business results from AI. He holds a PhD in computer science from the University of Colorado at Boulder and an additional MS degree in engineering management from the same institution, focused on applied statistics, strategic planning, and the use of advanced statistical methods to improve organizational efficiency. He is also a Six Sigma Master Black Belt. Veljko has consulted with or taught courses for five of the Fortune 10 companies (as listed in Sept 2019), many of the Fortune 500 companies, and a number of smaller companies, in the areas of enterprise computing, data science, AI, and big data. Before consulting independently, he worked in the PSO organizations of Hortonworks, the SpringSource division of VMware, and the JBoss division of Red Hat. In those positions, he was the main technical consultant on highly visible projects for the top clients of those PSO organizations.
Connect with me here: ✉️ My weekly email newsletter: jousef.substack.com
Zane and Justus give their live reaction to every pick of the NFL Draft. Justus looks into his crystal ball and nails almost every pick! Matt joins at the end for final thoughts! It's a long one! www.paydayparlays.com www.instagram.com/paydayparlays www.twitter.com/paydayparlays www.facebook.com/paydayparlays Beats by Trey: https://linktr.ee/tater003 --- Support this podcast: https://podcasters.spotify.com/pod/show/matt-hart-zane-gregory/support
An airhacks.fm conversation with Graeme Rocher (@graemerocher) about: Playing games with a 286, playing Digger, starting programming with quakec, programming custom explosions for the rocket launcher with "shockman", working for an Apache Cocoon company, JavaScript and Java as second languages, programming learning management systems with Java, publishing motivated by learning, programming over gaming, using JBoss on the backend, extracting content from Word with Apache POI and Groovy into XML, using XSLT to convert XML into HTML, data driven templates with XSLT, data-driven stylesheets is the way to go, starting with Visual Basic, the rise of Ruby on Rails, starting Groovy on Rails--Grails, groovy and the "method missing", "method missing" was heavily used in GORM, working on SpringData, SpringData and GORM are similar, joining Object Computing, staying small and being successful, with reflection you will use more memory at runtime, micronaut was started by Graeme Rocher, micronaut is based on annotation processing, there is no "mobile native" development, on Android reflection is not used, better error messages were one of the design goals, micronaut comes with an annotation-based introspector, micronaut generates a reflection-like API based on annotation processors, micronaut was announced in March 2018 and opensourced in May 2018, CDI was hard to implement without annotations, micronaut is similar to Spring, micronaut supports JSR-330 and is TCK-compliant, the Bean Validation module, micronaut supports micrometer, the micronaut team grows at Oracle, Visual Studio Code ships with the GraalVM Extension Pack and Micronaut support, micronaut and Helidon are developed by multiple teams, Oracle actively supports micronaut, micronaut and GraalVM are a great fit, micronaut is complex at compile time, but simple at runtime, helidon will be able to use Micronaut Data, the JAX-RS with micronaut screencast, Object Computing, Google and Oracle are contributing to micronaut, Graeme Rocher on twitter: @graemerocher
Matt & Zane are joined by former college football player and now drag racer, Justus Houston! Justus gives his background and they discuss college football. Then Matt, Zane, and Justus look at the EPL this weekend, with the Manchester Derby. Next the guys explain the similarities between Arsenal and Virginia Tech football. Justus gives Matt and Zane his thoughts on the upcoming NFL draft and what the Washington Football Team needs. Lastly the guys give their favorite NFL teams' (WFT & ATL) best value pick. http://www.paydayparlays.com http://www.instagram.com/paydayparlays http://www.twitter.com/paydayparlays Beats by Trey http://linktr.ee/tater03 --- Support this podcast: https://podcasters.spotify.com/pod/show/matt-hart-zane-gregory/support
An airhacks.fm conversation with Jan Meznaric (jmezna) about: Windows 98 on a Pentium 1, recording a Windows 98 screen with an old VHS camera, enjoying MS Paint and educational games, starting programming with Visual Basic and "Happy New Year", the Linux fascination, creating PHP based websites, making a barcode scanner work during vacations in .NET, the superstar programmer at high school, starting with Java 2, enjoying Java EE and GlassFish, joining the Java Enterprise research program at the university, JBoss, input validation with Java Server Pages (JSP), Drools and jBPM, business rules are too hard for business users, Drools debugging is a challenge, the University of Ljubljana, the microservice framework for Java Enterprise solutions, optimising Java EE for cloud native architecture, GlassFish, Payara, WildFly vs. KumuluzEE, "java -jar glassfish.jar", KumuluzEE committers at airhacks.com MUC workshops, KumuluzEE ships with the smallest jar, a KumuluzEE JPA / CRUD app starts in a few seconds, exploded JARs, FAT jars and layered JARs are coming, KumuluzEE supports MicroProfile, KumuluzEE supports etcd and consul, KumuluzEE discovers kubernetes services, KumuluzEE comes with useful extensions, ethereum integration, feature flags support, the version export, subscribing to blockchain events, KumuluzEE comes with commercial support, KumuluzEE uses smallrye to implement some MicroProfile APIs, tree vs. flat metrics, configuration change events, peer to peer microservice update strategies, the Java project JXTA, wild pigs, peer to peer and Oktoberfest, creating Kubernetes ingress controllers, Jan Meznaric on github: jmezna
An airhacks.fm conversation with Lukasz Lenart (@lukaszlenart) about: Playing platform games on a Commodore VIC-20, the desire to write a game, starting to program on a Commodore C64 in Basic, the airhacks.fm podcast episode about magic: #106 The Open-Closed Principle and Lots of Magic, a series of if-else statements, learning Pascal then Delphi on a PC, writing network tools in Delphi, starting at ZUS and Delphi Automotive Poland, working as network engineer with Novell Netware, running Java on Novell Netware, Java, Netware Directory Services (NDS) and LDAP, Eric Schmidt was CEO at Novell, the Java San Francisco Framework from IBM, using JBuilder for NDS Java development, learning PHP for production monitoring, using PHP with the Common Gateway Interface (CGI), migrating from PHP to Java, JSP and Struts, discovering robotics as an automotive engineer, the KUKA robots company, combining Struts 1 with Enterprise Java Beans (EJB) for pragmatic reasons, using Struts and Tiles, building production forecasts with Struts 1 for a Manufacturing Execution System (MES), NetBeans Days in Warsaw, Gdansk and Posen, a JBoss project for dial tone discovery, starting at SoftwareMill, SoftwareMill created Hibernate Envers, the first contribution to Struts 2 and NetBeans, WebWork was the beginning of Struts 2, WebWork is used by Jira - a special version of Struts, Sony Europe is using Struts, a basic Struts 2 application, Struts 2 and the MVC implementation, Struts 2 supports CDI Dependency Injection, vuejs vs. struts 2 contributions comparison, using Java backend web frameworks for SSR / Server Side Rendering, disconnecting JSPs from Struts, MicroProfile Training workshop - rewriting the blog engine in a workshop: https://microprofile.training, it doesn't make any sense to run wikipedia as a SPA, the equifax remote code execution and the patch, the OGNL was used to open a port, is there a reason to learn Scala if you have Java 16?, quarkus as the next generation runtime, Lukasz Lenart on twitter: @lukaszlenart, Lukasz' blog
We talked about Emmanuel's start building an Amazon competitor at the French company FNAC. We spoke about making architectural decisions, experiencing freedom, and Conway's Law. Emmanuel described how he discovered Hibernate, fell into OSS, and finally joined JBoss. We then discussed the acquisition of JBoss by Red Hat, the cultural changes, and the problems that ensued. We finally touched on getting into open source.
Here are the links from the show:
https://www.twitter.com/emmanuelbernard
https://emmanuelbernard.com/blog
https://lescastcodeurs.com
https://www.redhat.com
https://quarkus.io
Open Decision Framework: https://github.com/red-hat-people-team/open-decision-framework & https://www.redhat.com/en/explore/the-open-organization-book
Credits: Music "Aye" by Yung Kartz is licensed CC BY-NC-ND 4.0.
Your host is Timothée (Tim) Bourguignon; more about him at timbourguignon.fr.
Give the podcast a rating on one of the major platforms: https://devjourney.info/subscribe.html
Support the podcast on Patreon: https://bit.ly/devjpatreon
Support the show (http://bit.ly/2yBfySB)
An airhacks.fm conversation with Ken Finnigan (@kenfinnigan) about: Commodore 64 in 1984, Commodore 128D in 1986, creating a Star Wars game, approaching the dark star, a Gateway XT with 20 MB hard drive and 640kB RAM, playing with DBase IV, Lotus 1-2-3 and Delphi, implementing software for baseball statistics in 1989, surviving a Giants game in San Francisco, learning C++, Modula 2 and assembly programming at university, the JavaOne session marathon, learning Java in 1999, enjoying Java programming, starting at IBM Global Services Australia, introduction to the enterprise world with PL/I, Job Control Language (JCL), AIX, CICS and CTG, starting to work with Java 1.2 at an insurance company, building a quotation engine in Java, wrapping a JNI layer to reuse legacy C++ code, creating the first web UIs in Java with JSPs and Servlets, PowerBuilder and Borland JBuilder, enjoying the look and feel of Visual Age for Java and JBuilder, Symantec Visual Cafe for Java, Sun Studio Java Workshop had the worst look and feel, writing backend integration logic with XSLT and XML in Dublin, Apache FOP and Apache Cocoon, XSLT transformations in the browser, enjoying the marquee tag, using SeeBeyond eWay integration in London, switching to the Chordiant Java EE CRM solution, using XDoclet to generate EJBs, from XDoclet to annotations, wrapping, abstracting and Aspect Oriented Programming frameworks, it is hard to find business use cases for AOP, J2EE already ships with built-in aspects, enterprise architecture and UML, using IBM Rational Software Modeler for architectures, driving a truck with tapes as migration, the Amazon Snowmobile Truck, never underestimate the bandwidth of a truck full of hard disks, "Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway", Andrew S. Tanenbaum, building a stock trading platform in Sydney with J2EE, Complex Event Processing (CEP) with J2EE and JBoss, attending JBoss World in Florida and meeting Pete Muir, starting with Seam 2 to write a CRM solution for weddings, contributing to Seam 3, creating an annotation-based i18n solution, joining RedHat consulting, migrating from Oracle Application Server to JBoss EAP 5, joining RedHat engineering, leading the portlet bridge from the JBoss Portal project, starting project LiveOak, Apache Sling, starting project WildFly Swarm with Bob McWhirter, WildFly Swarm vs. WildFly, WildFly Swarm and WildFly - the size perspective, WildFly Swarm supported hollow jars, a hollow jar allows docker layering, WildFly Swarm was renamed to Thorntail, Thorntail 4 was a rewrite of the CDI container, the Thorntail 4 codebase was used in Quarkus, Quarkus is the evolutionary leap forward, Quarkus observability and micrometer, working with OpenTelemetry, OpenTelemetry and micrometer, OpenCensus, Eclipse MicroProfile and Metrics, micrometer vs. MicroProfile metrics, GitHub issue regarding custom registry types, airhacks.fm episode with Romain Manni-Bucau #79 Back to Shared Deployments, starting with counters and gauges in MicroProfile, metrics in a Java Message Service (JMS) application, MicroProfile metrics could re-focus on business metrics, service meshes vs. MicroProfile Fault Tolerance, Istio is only able to see the external traffic, implementing business fallbacks with Istio is hard, OpenMetrics and OpenTracing are merging in OpenTelemetry, MicroProfile OpenTracing comes with a single annotation and brings the most added value, Jakarta EE improvements are incremental, Java's project Leyden, the MicroProfile online workshop, Jakarta EE and MicroProfile complement each other, GraalVM and JavaScript, pooling with CDI is challenging, MicroProfile as a layer on top of Jakarta EE, the smallrye-first approach, Ken Finnigan on twitter: @kenfinnigan, Ken's blog: kenfinnigan.me
An airhacks.fm conversation with Simon Martinelli (@simas_ch) about: gaming and BASIC programming with a C64, reading a Markt+Technik book about C64 programming, building a volleyball tournament application with the C64, writing a Visual Basic application for track and field competitions, MS Access applications were maintained by business people, maintaining an application for 30 years, no love for Eclipse RCP, Swiss Railways implemented the train disposition system with Eclipse RCP, a disruptive keynote for Swiss Railways, starting with COBOL on mainframe and IMS, mixing COBOL and assembler for performance, serverless programming with COBOL, the COBOL security mechanism is nice, the mainframe is virtualized and similar to docker, mainframe jobs are like docker containers, database and business logic are not distributed on the AS 400, running as much as possible on a single machine could become a best practice, helping to solve the "year 2000 problem", WebSphere with TopLink, Oracle, MQ Series and Swing, the transition from mainframes to WebSphere, replacing MQ Series with Apache Kafka, from "in-memory" remoting to EJB-remoting, using Eclipse SWT for performance reasons, the Swing Application Framework was never released, SWT's problem was OSGi, GlassFish was introduced as a lightweight alternative to WebSphere, Java EE 5 was a lightweight alternative, working together on QLB, the forgotten NetBeans contribution, teaching at the University of Bern, Eclipse's maven integration is still mediocre, heavy IntelliJ, focusing on JBoss performance and OR-mapping, JBoss vs. GlassFish at the University, killer use cases for Camel, transforming EDI into XML, pointless ESBs, shared deployments on JBoss were problematic, Vaadin Flow with web components, generating the Vaadin frontend on-the-fly, Vaadin generates Web Components / Custom Elements for the frontend, exposing metadata via REST, Simon Martinelli on twitter: @simas_ch, Simon's website: 72.services and blog.
An airhacks.fm conversation with Marc Fleury (@docfleury) about: ZX 81 with the rubber keys at 14, writing the Death Mission game, sneaking out at night to develop games, the great Apple 2, rediscovering computers during the physics study, simulating lasers on a VAX in C, internet over physics at MIT, in the 1990s studying software engineering was a waste of time, interest in quantum entanglement, working with Java, SUN and SAP, JBoss was architected by Rickard Öberg, learning Java in 4 years after physics study, working as support engineer at Sun Microsystems, becoming Java evangelist at Sun Microsystems by accident, nobody wanted to hire a PhD, the birth of JBoss, spending time at SAP research with Hasso Plattner, trying to apply WebLogic to SAP, Sun Microsystems and WebLogic rejected Marc, Marc started an opensource project called EJBOSS, a letter from Sun lawyers, AOP and EJB were invented at the same time, meta programming and aspect oriented approaches are older than Aspect Oriented Programming (AOP), JBoss is an implementation of the AOP architectural ideas, AOP happens also in nature, viruses can program the system without inheritance, EJB 1 was a piece of sh*t, Sun's standards efforts were what the industry needed, crazy Rickard Öberg was an alien, opensource internet is the remedy, internet is from the planet to the planet, entering the École Polytechnique - a "special forces" time, opensource had to be free, JBoss was professional opensource, between IBM, SUN and the opensource fanboys, professional opensource: POS -> Piece of Sh*t, AWS in 1997 - 10 years too early, Scott Stark made a distributable product, "walk the path" mantra, Sascha Labourey wrote the JBoss clustering; JBoss was developed in the first year by 10 people, great software started with small teams, increasing the team size can decrease the motivation and fun, why JBoss was sold, WildFly version 20 came out, studying system biology, learning about finance, how to keep money as investor, studying music and enjoying techno, working with a professor of percussion who worked with Karlheinz Stockhausen, writing Monte Carlo simulations with Java 8 for fun, Java 15 fibers and project Loom, Robert G. Pickel worked for Gemstone, founding twoprime.io, the Two Prime FF1 Token - the product was launched on the worst possible day, working with Alexander S. Blum, coding keeps you young, writing physics simulations with Java, JBoss vs. WildFly, JBoss vs. Quarkus, shared deployments in the microservice and cloud era, invoking the angels and linux diamonds, Marc Fleury on twitter: @docfleury and Marc's company: twoprime.io / @Two_Prime
Red Hat is a company founded in 1993 and the first to have built a viable business model around providing support for open-source software. Starting with a Linux distribution, then with JBoss, OpenStack, Ceph, Keycloak, and more recently OpenShift, Red Hat is today one of the largest contributors to open source.
However, contributing does not only mean writing code; it also means listening to your users, understanding the problems they are trying to solve, and the difficulties they face. To that end, Red Hat needs to maintain a close relationship with developers.
This is why the developer advocate role appeared. A developer advocate is themselves a software engineer, one who takes as much pleasure in writing code as in engaging with their community, whether through blogs, meetups, or conferences.
In this episode, I have the pleasure of welcoming Sébastien Blanc. Sébastien is Director of User Experience at Red Hat, and he explains the ins and outs of a still-young profession that may well inspire new vocations!
Episode notes:
Behind the scenes of organizing a conference, with Pierre-Antoine Grégoire and Gildas Cuisinier: https://electro-monkeys.fr/?p=294
Learning Kubernetes with Jérôme Petazzoni: https://electro-monkeys.fr/?p=217
"Le refactoring le plus difficile de ma carrière" - Jérôme Petazzoni: https://www.youtube.com/watch?v=fu7Tsv5qPGQ&t=118s&index=10&list=PLhuKb8VM9ELFxHghhttrTef-Z4Fxj4ihV
Support the show (https://www.patreon.com/electromonkeys)
An airhacks.fm conversation with Jason Porter (@lightguardjp) about: From an old 8086 in the late '80s to a Pentium, old GW-BASIC games like snake and gorillas, finding game source by accident, learning Java in 21 days - with a book, fascination with Java Applets, learning C++ at middle school, writing C code with Metrowerks CodeWarrior, learning pointers at 14, building OCR in C at high school, Perl and PHP before Neumont University, contributing to FlySpray the bugtracker, building an inventory application with C# and WinForms, building a scrapbook with full-text search in 10 weeks, accessing lucene from C#, first Java project for the State of Utah with JBoss Portal, a JDBC wrapper around LDAP, building a client library to wrap SOAP, curiosity about Java EE 5, creating student portfolios with Java EE 5, EJB 3, JSF and GlassFish, commercial support was available from Sun Microsystems for GlassFish, there was a lag between JBoss and WildFly versions, working with ATG Dynamo for O.C. Tanner, accelerating ETL and data validation with Java EE 5 and JMS, increasing performance with JBoss from a day to one and a half hours, joining the Seam Team at RedHat, Seam Solder became Apache DeltaSpike, DeltaSpike became the groundwork for e.g. MicroProfile Config, Injection, Outjection and Bijection, from Java to Ruby, from Ruby to Drupal, from Drupal back to Java and Quarkus, asciidoc is like markdown, but better, contributing to Quarkus, joining forces with Alex Soto for the Quarkus Cookbook, Kubernetes operators with Quarkus, why lightguard (@lightguardjp)?, Jason Porter on twitter: @lightguardjp and linkedin
An airhacks.fm conversation with Andrew Lee Rubinger (@alrubinger) about: GW-BASIC to reprogram a classic piece of music with the sound command, playing games in a spreadsheet of Lotus 1-2-3, CDs or MP3s, the undeclared student, studying music production in New York, excited about the intentionally difficult programming class in Massachusetts, learning Java in the early 2000s, discovering Java servers, JBoss 2x and Java EE is the coolest thing, programming Monte Carlo simulations to pay for a flight, becoming a global publisher with the web, chatting over speaking, self-study addiction, long coding nights, a music streaming client with a Java EE backend, building an educational online grade-tracking system, JBoss was free and it didn't suck, contributing patches to the EJB container, a hard job interview at JBoss, creating the ShrinkWrap library, creating Arquillian, Arquillian's strength is integration and system tests with the ease of unit tests, with ShrinkWrap you can provide multiple deployments, the use cases for grey box tests, testing transactions is tricky, starting the DevNation conference, from application servers to kubernetes, containers and clouds, reasonable Java EE 6 applications should work in the clouds without any major modifications, 5 mins from nothing to the first DB access, the time to "hello, world", from configuring everything to convention over configuration, Andrew Lee Rubinger on twitter: @alrubinger, linkedin and github
IBM has taken the next step in its ongoing transformation with the $34 billion acquisition of Linux, JBoss, and OpenShift provider Red Hat. With a long history of working together, there is no doubt about the synergy. But can this really boost IBM's status as a true cloud provider (and the go-to team for hybrid cloud migrations)? Plus our Fast Five: Earnings from Apple and Spotify; Browser extensions that steal user data; Apple's new iPad, Mini, and MacBook offerings; Cyber thieves love Microsoft Office; and Apple Maps tops Google Maps (sort of). Our Tech Bites Winner this week: Facebook's attempt to vet political advertisers falls short. Completely short. And our Crystal Ball this week takes a look into the realistic chances of the US government forcing social giants like Facebook, Google, Twitter, and YouTube to fix the #fakenews issue. This episode features: Daniel Newman (@danielnewmanUV), Fred McClimans (@fredmcclimans), and Olivier Blanchard (@oablanchard). If you haven't already, please subscribe to our show on iTunes or SoundCloud. For inquiries or more information on the show you may email the team at info@futurumresearch.com or follow @FuturumXYZ on Twitter, and feel free to direct inquiries through that channel as well. To learn more about Futurum Research please visit www.futurumresearch.com. As a reminder, the Futurum Tech Podcast is intended as an informational newsletter only. No investment advice is offered. While equities are frequently discussed, no investment advice is offered or implied.
The challenges of programming with different languages plus new releases from JBoss, Spring, PrimeFaces, RichFaces, TypeScript, git, WebSphere, and more.
Kito, Ian, and Daniel discuss JavaOne, JSF, web frameworks, mobile development, Internet of Things, and new JBoss, Apache, PrimeFaces, Spring, MySQL, and TypeSafe releases.
Kito, Ian, and Daniel discuss new releases from Spring, JBoss, ICEsoft, PrimeFaces, Apache, IBM, and TypeSafe.
Kito, Ian, and Daniel cover new releases from Spring, MyFaces, TypeSafe, JBoss, Oracle, and Apache.
Kito, Ian, and Daniel cover new releases from Spring, MyFaces, ICEsoft, JBoss, Oracle, and Apache.
Kito, Ian, and Daniel cover new releases from Spring, PrimeFaces, ICEsoft, JBoss, IBM, Oracle, Apache, and TypeSafe.
Kito, Ian, and Daniel cover new releases from SpringSource, PrimeFaces, ICEsoft, JBoss, IBM, Oracle, and TypeSafe.
Kito, Ian, and Daniel cover new releases from Apache, PrimeFaces, SpringSource, ICEsoft, JBoss, IBM, Oracle, Google, and more.
Kito, Ian, and Daniel cover new releases from Oracle, IBM, SpringSource, PrimeFaces, ICEfaces, Apache, JBoss, NetBeans, eXo Platform, and more.
Kito, Ian, and Daniel cover new releases from SpringSource, PrimeFaces, ICEfaces, Apache, JBoss, Eclipse, and TypeSafe.
Kito, Ian, and Daniel cover new releases from SpringSource, PrimeFaces, ICEfaces, Apache, JBoss, and Liferay.
Kito, Ian, and Daniel discuss new releases from JBoss, SpringSource, MyFaces, PrimeFaces, ICEfaces, JSFToolbox, and Oracle, plus highlights from JavaOne and JSF performance improvements.
Kito, Ian, and Daniel discuss JAXConf/JSF Summit 2012, Java 8, WebSphere Liberty Profile, Arquillian, and new releases from MyFaces, Spring, JBoss, ICEfaces, RichFaces, Tomcat, and more.