Open-source in-memory key–value database
Demetrios, Sam Partee, and Rahul Parundekar unpack the chaos of AI agent tools and the evolving world of MCP (Model Context Protocol). With sharp insights and plenty of laughs, they dig into tool permissions, security quirks, agent memory, and the messy path to making agents actually useful. // Bio: Sam Partee is the CTO and Co-Founder of Arcade AI. Previously a Principal Engineer leading the Applied AI team at Redis, Sam led the effort to create the ecosystem around Redis as a vector database. He is a contributor to multiple OSS projects including LangChain, Determined AI, LlamaIndex, and Chapel, among others. While at Cray/HPE he created the SmartSim AI framework, which is now used at national labs around the country to integrate HPC simulations like climate models with AI. Rahul Parundekar is the founder of AI Hero. He graduated with a Master's in Computer Science from USC Los Angeles in 2010 and embarked on a career focused on Artificial Intelligence. From 2010-2017, he worked as a Senior Researcher at Toyota ITC on agent autonomy within vehicles. His journey continued as the Director of Data Science at Figure Eight (later acquired by Appen), where he and his team developed an architecture supporting over 36 ML models and managing over a million predictions daily. Since 2021, he has been working on AI Hero, aiming to democratize AI access, while also consulting on LLMOps (Large Language Model Operations) and AI system scalability. Beyond his full-time role as a founder, he is passionate about community engagement, actively organizes MLOps events in SF, and contributes educational content on RAG and LLMOps at learn.mlops.community. // Related Links: Websites: arcade.dev, aihero.studio ~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~ Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TY Explore MLOps Swag/Merch: https://shop.mlops.community/ Connect with Demetrios on LinkedIn: /dpbrinkm Connect with Rahul on LinkedIn: /rparundekar Connect with Sam on LinkedIn: /sampartee Timestamps: [00:00] Agents & Tools, Explained (Without Melting Your Brain) [09:51] MVP Servers: Why Everything's on Fire (and How to Fix It) [13:18] Can We Actually Trust the Protocol? [18:13] KYC, But Make It AI (and Less Painful) [25:25] Web Automation Tests: The Bugs Strike Back [28:18] MCP Dev: What Went Wrong (and What Saved Us) [33:53] Social Login: One Button to Rule Them All [39:33] What Even Is an AI-Native Developer? [42:21] Betting Big on Smarter Models (High Risk, High Reward) [51:40] Harrison's Bold New Tactic (With Real-Life Magic Tricks) [55:31] Async Task Handoffs: Herding Cats, But Digitally [1:00:37] Getting AI to Actually Help Your Workflow [1:03:53] The Infamous Varma System Error (And How We Dodge It)
video: https://youtu.be/23-FYQvEqZQ In this episode, we discuss community feedback about the challenges of anti-cheat systems regarding privacy and security. Then we talk about Anduin OS and Linux distros that look a lot like Windows. Plus, the implications of Redis' licensing shift back to open source. Jill updates us on the Open Source Lab's successful funding campaign, and we share a lively discussion about "Floorp," a Firefox-based browser focused on customization and tracking protection. Forum Discussion Thread (https://destinationlinux.net/forum) Download as MP3 (https://aphid.fireside.fm/d/1437767933/32f28071-0b08-4ea1-afcc-37af75bd83d6/edb63036-e701-4eff-b9b7-32878acf4c67.mp3) Support the show by becoming a patron at tuxdigital.com/membership (https://tuxdigital.com/membership) or get some swag at tuxdigital.com/store (https://tuxdigital.com/store) Hosted by: Ryan (DasGeek) = dasgeek.net (https://dasgeek.net) Jill Bryant = jilllinuxgirl.com (https://jilllinuxgirl.com) Michael Tunnell = michaeltunnell.com (https://michaeltunnell.com) Chapters: 00:00:00 Intro 00:01:27 Community Feedback: Rootkits, Anti-Cheat, and Apple's Open Source Claims 00:07:20 When Anti-Cheat Starts Looking Like Malware 00:17:54 Sandfly Security 00:20:18 Preview: Red Hat Summit Coverage Plans 00:23:21 Distro Talk: Anduin OS, a Windows 11 Lookalike 00:28:24 Microsoft's Next Move? Nobody Knows, Not Even Microsoft 00:29:47 Imitating Windows 11: Helpful Onboarding or User Trap? 00:34:02 Please Add Clippy 00:35:23 Big Profits, Bigger Cuts: Microsoft's 3% Workforce Reduction 00:40:11 Final Thoughts on Anduin OS 00:41:24 Open Source Lab Reaches Funding Goal – What's Next 00:47:26 Redis Flips Back to Open Source – Too Little, Too Late? 00:54:43 Software Spotlight: Floorp – A Firefox-Based Browser 00:56:59 Judging Browsers by Their Websites—Because We Can 00:58:11 Sea Monkey Website Roast & Browser Tangents 00:58:44 Fun Fact: Floorp Powers Garuda's FireDragon Browser 00:59:04 SeaMonkey Browser: Stuck in Time, Just Like Its Website 00:59:46 Dial-Up Vibes Only: Exploring Netscape's Classic ISP Site 01:01:01 SeaMonkey Nostalgia: Childhood Pets That Didn't Do Much 01:02:05 Making Friends & Sea Monkey Sponsorship Pitch 01:03:13 SeaMonkey Browser: Channeling That Pet Shrimp Energy 01:04:17 Join the TuxDigital Discord—Your Next Linux Friend Awaits! 01:05:51 Membership Perks & Upcoming Merch Tease 01:07:13 Ryan's SeaMonkey Obsession—And He Doesn't Even Know What They Are 01:08:23 Outro 01:10:09 Post Show Links: https://www.anduinos.com/ (https://www.anduinos.com/) https://news.anduinos.com/post/2025/5/6/story-behind-anduinos-a-letter-from-anduin (https://news.anduinos.com/post/2025/5/6/story-behind-anduinos-a-letter-from-anduin) https://osuosl.org/blog/osl-future-update/ (https://osuosl.org/blog/osl-future-update/) https://osuosl.org/donate/ (https://osuosl.org/donate/) https://www.zdnet.com/article/redis-returns-to-open-source-with-agplv3-license-but-not-everyone-is-happy/ (https://www.zdnet.com/article/redis-returns-to-open-source-with-agplv3-license-but-not-everyone-is-happy/) https://floorp.app/en-US (https://floorp.app/en-US) https://www.seamonkey-project.org/ (https://www.seamonkey-project.org/) https://sea-monkeys.com/ (https://sea-monkeys.com/)
Roman Gershman is Co-Founder & CTO of Dragonfly, the drop-in Redis replacement for heavy data workloads that offers significant performance, cost, and scale benefits. Their open source project dragonflydb has 28K stars on GitHub. Dragonfly has raised $21M from investors including Quiet Capital and Redpoint. In this episode, we dig into: the challenges with Redis; the users that have really benefitted from Dragonfly (high scale + real-time needs - gaming, B2C); the benefits of being multi-threaded; and how they won some of their bigger users and customers like Twilio, SoFi, and Spotify.
Redis 8, NATS, MCP, and licensing drama await you in the new Short #72! Frameworks are adding support for the Model Context Protocol, and Redis moves to the viral AGPL license. The team analyzes the NATS vs CNCF controversy. Łukasz criticizes automatically exposing REST endpoints as MCP tools. Szymon presents time-management techniques, from the Eisenhower Matrix to the Pomodoro method. The hosts mercilessly point out the errors in the InfoQ trends report. Wondering how to avoid licensing problems in your own projects? Check whether you are using the viral Redis! Or maybe the 3x3x3 technique is worth applying to your task planning? This episode will help you optimize your time and avoid patho-architecture! Now, no slacking off!
A big episode covering a wide range of topics: Java, Scala, Micronaut, NodeJS, AI and developer skills, sampling in LLMs, DTOs, vibe coding, the changes at Broadcom and Red Hat, and several pieces of news about open source licenses. Recorded on May 7, 2025. Download the episode LesCastCodeurs-Episode-325.mp3 or watch it on YouTube. News Languages To mark JavaOne and the launch of Java 24, Oracle launches a new site with video resources for learning the language https://learn.java/ a site aimed mostly at beginners and teachers it also covers the syntax, including more recent additions such as records and pattern matching it's not the trendiest site in the world. Martin Odersky shares a long article on the state of the Scala ecosystem and the evolution of the language https://www.scala-lang.org/blog/2025/03/24/evolving-scala.html Stability and the need to evolve: Scala holds its position (~14th worldwide) with solid technical foundations, but must evolve in the face of competition to stay relevant. Priority areas: the evolution focuses on improving the safety/friendliness pair, polishing the language (removing the "rough edges"), and simplifying things for beginners. Continuous innovation: freezing features is out of the question; innovation is key to Scala's value. The language must remain general-purpose and not tie itself to a specific framework. Challenges and progress: tooling (IDEs, build tools such as sbt, scala-cli, Mill) and the learnability of the ecosystem remain points of attention, with improvements in progress (teaching partnership, simple platforms). Strings just got even faster! https://inside.java/2025/05/01/strings-just-got-faster/ In JDK 25, the performance of String::hashCode has been improved so that it is mostly constant-foldable. This means that if strings are used as keys in a static, immutable Map, significant performance gains are likely. The improvement relies on the internal @Stable annotation applied to the private String.hash field. This annotation lets the virtual machine read the hash value once and treat it as a constant as long as it is not the default value (zero). As a result, a String::hashCode call can be replaced by the known hash value, optimizing lookups in immutable Maps. One edge case is a string whose hash code is zero, for which the optimization does not work (for example, the empty string ""). Although the @Stable annotation is internal to the JDK, a new JEP (JEP 502: Stable Values (Preview)) is in development to let users benefit indirectly from similar functionality.
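To make that concrete, here is a minimal, hypothetical Java sketch of the pattern the article describes: a static, immutable Map whose keys are String constants. The names are invented for illustration; the point is simply that lookups with constant keys are the case the newly constant-foldable String::hashCode is meant to speed up.

```java
import java.util.Map;

public class StatusCodes {
    // A static, immutable Map keyed by String constants: the pattern that,
    // per the article, benefits from the constant-foldable String::hashCode
    // (backed by the internal @Stable hash field) in JDK 25.
    private static final Map<String, Integer> HTTP = Map.of(
            "OK", 200,
            "NOT_FOUND", 404,
            "TEAPOT", 418);

    public static void main(String[] args) {
        // The key is a compile-time constant, so its hash (and potentially the
        // whole lookup) can be treated as a constant by the JIT.
        System.out.println(HTTP.get("NOT_FOUND")); // prints 404
    }
}
```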
AtomicHash, a Java implementation of a HashMap that is thread-safe, atomic and non-blocking https://github.com/arxila/atomichash implemented as an immutable version of a concurrent hash trie. Libraries Micronaut 4.8.0 released https://micronaut.io/2025/04/01/micronaut-framework-4-8-0-released/ BOM (Bill of Materials) update: version 4.8.0 updates the Micronaut platform BOM. Micronaut Core improvements: integration of Micronaut SourceGen for internal generation of metadata and bytecode expressions. Many improvements in Micronaut SourceGen. Dependency-injection tracing added, to ease debugging at startup and at bean creation time. New definitionType member in the @Client annotation to make it easier to share interfaces between client and server. Support for merging in Bean Mappers via the @Mapping annotation. New liveness probe that detects deadlocked threads via ThreadMXBean. Improved Kubernetes integration: the Kubernetes Java client is updated to version 22.0.1. New Micronaut Kubernetes Client OpenAPI module, offering an alternative to the official client with fewer dependencies, unified configuration, filter support and Native Image compatibility. Introduction of a new server runtime based on Java's built-in HTTP server, making it possible to build applications without external server dependencies. Micronaut Micrometer gains a module to instrument data sources (traces and metrics). New condition member in the @MetricOptions annotation to control whether metrics are enabled via an expression. Support for Consul watches in Micronaut Discovery Client to detect distributed configuration changes. Ability to generate source code from a JSON schema via the build plugins (Gradle and Maven). Web Node v24.0.0 becomes the Current version: https://nodejs.org/en/blog/release/v24.0.0 V8 engine updated to version 13.6: brings new JavaScript features such as Float16Array, explicit resource management (using), RegExp.escape, WebAssembly Memory64 and Error.isError. npm 11 included: improvements in performance, security and compatibility with modern JavaScript packages. Compiler change on Windows: MSVC is dropped in favor of ClangCL for compiling Node.js on Windows. AsyncLocalStorage now uses AsyncContextFrame by default, for more efficient handling of asynchronous context. URLPattern is available globally: no need to import the API explicitly to do URL matching anymore. Permission-model improvements: the experimental --experimental-permission flag becomes --permission, signaling the increased stability of this feature. Test-runner improvements: subtests are now awaited automatically, simplifying test writing and reducing errors related to unhandled promises. Undici 7 integrated: improved HTTP client capabilities with better performance and broader support for modern HTTP features. Deprecations and removals: url.parse() deprecated in favor of the WHATWG URL API. tls.createSecurePair removed. SlowBuffer deprecated. Instantiating REPL without new deprecated. Using the Zlib classes without new deprecated. Passing args to spawn and execFile in child_process deprecated. Node.js 24 is currently the "Current" version and will become an LTS release in October 2025. It is recommended to test this version to evaluate its impact on your applications. Data and Artificial Intelligence Learning to code remains crucial, and AI is there to help: https://kyrylo.org/software/2025/03/27/learn-to-code-ignore-ai-then-use-ai-to-code-even-better.html Learning to code remains essential despite AI. AI can assist with programming. A solid foundation is crucial for understanding and controlling code. This keeps you from becoming dependent on AI.
It also reduces the risk of being replaced by AI tools that are accessible to everyone. AI is a tool, not a substitute for mastering the fundamentals. A great article from Anthropic that tries to understand how the "thinking" of LLMs works https://www.anthropic.com/research/tracing-thoughts-language-model Black-box effect: the internal strategies of AIs (Claude) are opaque to developers and users. Goal: understand the internal "reasoning" to verify capabilities and intentions. Method: inspired by neuroscience, development of an "AI microscope" (looking at which neural circuits activate). Technique: identification of internal concepts ("features") and "circuits". Multilingualism: evidence of a conceptual "language of thought" shared across all languages before translating into a particular one. Planning: the ability to plan ahead (e.g., rhymes in poetry), not just word-by-word (token-by-token) generation. Unfaithful reasoning: it can fabricate plausible-sounding arguments ("bullshitting") to support a given conclusion. Multi-step logic: it combines distinct facts rather than merely recalling memorized answers. Hallucinations: refusal by default; it answers when a "knowledge" feature activates, otherwise there is a risk of hallucination when that check misfires. "Jailbreaks": tension between grammatical coherence (which pushes the model to keep going) and safety (which should make it refuse). Bottom line: the methods are still limited but promising for AI transparency and reliability. The "S" in MCP stands for Security (or not!) https://elenacross7.medium.com/%EF%B8%8F-the-s-in-mcp-stands-for-security-91407b33ed6b The MCP specification, which lets LLMs access various tools and functions, may have been adopted a little too quickly, before it was ready on the security front. The article lists four types of possible attacks: command-injection vulnerabilities, tool-poisoning attacks, silent redefinition of a tool, and cross-server tool shadowing. For now, MCP is not secure: no authentication standard, no context encryption, no verification of tool integrity. Based on the InvariantLabs article https://invariantlabs.ai/blog/mcp-security-notification-tool-poisoning-attacks Infinispan 15.2 released - pre rolling upgrades 16.0 https://infinispan.org/blog/2025/03/27/infinispan-15-2 Support for Redis JSON + Lua scripts. JVM metrics can be disabled. New console (PatternFly 6). Improved docs (metrics + logs). JDK 17 minimum, JDK 24 supported. End of the native server (performance). Guillaume shows how to build an MCP HTTP Server-Sent Events server with the Java reference implementation and LangChain4j https://glaforge.dev/posts/2025/04/04/mcp-client-and-server-with-java-mcp-sdk-and-langchain4j/ Built in Java, with the reference implementation that also underpins the Spring Boot implementation (but is independent of Spring). The MCP server is exposed as a servlet in Jetty. The MCP client, for its part, is built with the LangChain4j MCP module; it is only semi-independent of Spring in the sense that it depends on Reactor and its interfaces. There is a conversation on the Anthropic GitHub to find a solution, but it does not look simple. The fallacies behind the quote "AI won't replace you, but humans using AI will" https://platforms.substack.com/cp/161356485
The automation vs. augmentation fallacy: it focuses on improving existing tasks with AI instead of considering how the value of those tasks changes in a new system. The productivity-gains fallacy: higher productivity does not always translate into more value for workers, because the value created can be captured elsewhere in the system. The static-jobs fallacy: jobs are organizational constructs that can be redefined by AI, making traditional roles obsolete. The "me vs. someone using AI" competition fallacy: competition changes when AI alters the fundamental constraints of an industry, making existing skills less relevant. The workflow-continuity fallacy: AI can lead to a complete reimagining of workflows, eliminating the need for certain skills. The neutral-tools fallacy: AI tools are not neutral and can redistribute organizational power by changing how decisions are made and executed. The stable-salary fallacy: keeping a job does not guarantee a stable salary, because the value of the work can decline as AI capabilities grow. The stable-company fallacy: integrating AI requires restructuring the company and does not happen in an organizational vacuum. Understanding "sampling" in LLMs https://rentry.co/samplers Explains why LLMs use tokens. The different "sampling" methods, that is, how the next token is chosen. Hyperparameters such as temperature and top-p, and how they influence each other. Tokenization algorithms such as Byte Pair Encoding and SentencePiece.
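Since temperature and top-p are easier to grasp in code than in prose, here is a small, self-contained Java sketch of the idea the article describes: temperature-scaled softmax followed by nucleus (top-p) filtering before drawing a token. Everything here (class name, the toy logits array) is hypothetical; a real sampler works over a model's full vocabulary.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Random;

public final class ToySampler {
    private static final Random RNG = new Random(42);

    /** Picks a token index from raw logits using temperature + top-p sampling. */
    static int sample(double[] logits, double temperature, double topP) {
        // 1. Temperature: divide logits before softmax; <1 sharpens, >1 flattens the distribution.
        double max = Arrays.stream(logits).max().orElseThrow();
        double[] probs = new double[logits.length];
        double sum = 0;
        for (int i = 0; i < logits.length; i++) {
            probs[i] = Math.exp((logits[i] - max) / temperature);
            sum += probs[i];
        }
        for (int i = 0; i < probs.length; i++) probs[i] /= sum;

        // 2. Top-p (nucleus): keep the most probable tokens whose cumulative mass reaches topP.
        Integer[] byProb = new Integer[probs.length];
        for (int i = 0; i < byProb.length; i++) byProb[i] = i;
        Arrays.sort(byProb, (a, b) -> Double.compare(probs[b], probs[a]));
        List<Integer> nucleus = new ArrayList<>();
        double kept = 0;
        for (int idx : byProb) {
            nucleus.add(idx);
            kept += probs[idx];
            if (kept >= topP) break;
        }

        // 3. Renormalize over the nucleus and draw one token at random.
        double r = RNG.nextDouble() * kept;
        for (int idx : nucleus) {
            r -= probs[idx];
            if (r <= 0) return idx;
        }
        return nucleus.get(nucleus.size() - 1);
    }

    public static void main(String[] args) {
        double[] logits = {2.0, 1.0, 0.5, -1.0}; // scores for 4 imaginary candidate tokens
        System.out.println("picked token index: " + sample(logits, 0.8, 0.9));
    }
}
```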
One less independent player… OpenAI is going to buy Windsurf for 3 billion dollars https://www.bloomberg.com/news/articles/2025-05-06/openai-reaches-agreement-to-buy-startup-windsurf-for-3-billion The deal is not finalized yet. Windsurf was valued at 1.25 billion last year, and OpenAI recently raised 40 billion, bringing its valuation to 300 billion. The goal for OpenAI is to enter the world of coding assistants, where it is absent today. Docker Desktop gets into AI…? A new feature in Docker Desktop 4.4 on macOS: Docker Model Runner https://dev.to/docker/run-genai-models-locally-with-docker-model-runner-5elb It lets you run models natively and locally ( https://docs.docker.com/model-runner/ ) but also MCP servers ( https://docs.docker.com/ai/mcp-catalog-and-toolkit/ ). Tooling JetBrains defends removing negative reviews of its AI assistant https://devclass.com/2025/04/30/jetbrains-defends-removal-of-negative-reviews-for-unpopular-ai-assistant/?td=rt-3a JetBrains' AI Assistant, launched in July 2023, has been downloaded more than 22 million times but is rated only 2.3 out of 5. Users noticed that some negative reviews were being removed, which triggered a backlash on social media. A JetBrains employee explained that the reviews were removed either because they mentioned problems that had already been fixed, or because they violated the policy on "profanity, etc." The company acknowledged it could have handled the situation better, with a representative saying: "Removing several reviews at once without notice looked suspicious. We should at least have published a notice and given the authors more details." Problems users reported with the AI Assistant include: limited support for third-party model providers, noticeable latency, frequent slowdowns, core features locked to JetBrains' cloud services, an inconsistent user experience and insufficient documentation. A common complaint is that the AI Assistant installs itself without permission. A Reddit user called it an "annoying plugin that repairs/reinstalls itself like a phoenix". JetBrains recently introduced a free tier and a new AI agent called Junie, intended to run alongside the AI Assistant, probably in response to competition between vendors. But it is more expensive to run. The company has committed to exploring new approaches for handling major updates differently, and is considering per-version reviews, or marking reviews as "Resolved" with links to the corresponding issues instead of deleting them. Unlike competitors such as Microsoft, AWS or Google, JetBrains sells only development tools and services and has no separate cloud business to fall back on. Make the images in your README and Markdown files work with GitHub's dark mode: https://github.blog/developer-skills/github/how-to-make-your-images-in-markdown-on-github-adjust-for-dark-mode-and-light-mode/ Only a few lines of plain HTML are needed. Architecture So, DTOs: good or bad? https://codeopinion.com/dtos-mapping-the-good-the-bad-and-the-excessive/ What DTOs are for: DTOs transfer data between the layers of an application, often mapping data between different representations (for example, between the database and the user interface). Frequent overuse: the article points out that DTOs are often overused, notably to build HTTP APIs that merely mirror the database entities, missing the opportunity to compose richer data. Real value: the real value of DTOs lies in managing coupling between layers and in composing data from multiple sources into shapes optimized for specific use cases. Decoupling: it is suggested to use DTOs to decouple internal data models from external contracts (such as APIs), which allows independent evolution and versioning. Example with CQRS: in CQRS (Command Query Responsibility Segregation), query responses act as DTOs tailored specifically to the needs of the user interface, and can include data from various sources. Protecting internal data: DTOs help distinguish and protect internal (private) data models from external (public) changes. Avoiding excess: the author warns against excessive mapping layers (mapping one DTO to another DTO) that add no value. Targeted creation: it is advised to create DTOs only when they solve concrete problems, such as managing coupling or facilitating data composition.
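As a concrete illustration of the "real value" point above (composition and decoupling rather than one-to-one mirroring), here is a minimal Java sketch. All names are hypothetical: a query-side DTO composes data from two internal models for one UI use case and deliberately keeps internal fields out of the external contract.

```java
import java.time.Instant;
import java.util.List;

// Internal persistence models: private, free to evolve independently of any API.
record OrderEntity(long id, long customerId, List<Long> productIds,
                   String internalNotes, Instant createdAt) {}
record CustomerEntity(long id, String displayName, String passwordHash) {}

// External contract: shaped for one read use case (a CQRS-style query response),
// composed from two sources, and never exposing internalNotes or passwordHash.
record OrderSummaryDto(long orderId, String customerName, int itemCount, Instant createdAt) {
    static OrderSummaryDto from(OrderEntity order, CustomerEntity customer) {
        return new OrderSummaryDto(order.id(), customer.displayName(),
                order.productIds().size(), order.createdAt());
    }
}

public class DtoDemo {
    public static void main(String[] args) {
        var order = new OrderEntity(42L, 7L, List.of(1L, 2L, 3L),
                "fragile - call before delivery", Instant.now());
        var customer = new CustomerEntity(7L, "Ada Lovelace", "<hashed>");
        System.out.println(OrderSummaryDto.from(order, customer));
    }
}
```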
Methodologies Even Guillaume is getting into "vibe coding" https://glaforge.dev/posts/2025/05/02/vibe-coding-an-mcp-server-with-micronaut-and-gemini/ According to Andrej Karpathy, vibe coding means POC-ing a prototype, a throwaway weekend app https://x.com/karpathy/status/1886192184808149383 But Simon Willison objects that some people confuse coding with AI assistance with vibe coding https://simonwillison.net/2025/May/1/not-vibe-coding/ Here Guillaume had fun building an MCP server with Micronaut, using Gemini, Google's AI. Unlike Quarkus or Spring Boot, Micronaut does not yet have a module or specific support to make building an MCP server easier. Security A 10/10 security flaw in Tomcat https://www.it-connect.fr/apache-tomcat-cette-faille-activement-exploitee-seulement-30-heures-apres-sa-divulgation-patchez/ A critical security flaw (CVE-2025-24813) affects Apache Tomcat and allows remote code execution. The vulnerability was being actively exploited only 30 hours after its disclosure on March 10, 2025. The attack requires no authentication and is particularly simple to carry out. It uses a PUT request with a base64-encoded serialized Java payload, followed by a GET request. The base64 encoding bypasses most security filters. Vulnerable servers use file-based session storage (a widespread configuration). Affected versions: 11.0.0-M1 to 11.0.2, 10.1.0-M1 to 10.1.34, and 9.0.0.M1 to 9.0.98. Recommended updates: 11.0.3+, 10.1.35+ and 9.0.99+. Experts expect more sophisticated attacks in the next phases of exploitation (uploading a config or a JSP). Hardening an SSH server https://ittavern.com/ssh-server-hardening/ An article listing the key settings for securing an SSH server: for example, disabling password authentication, changing the port, disabling root login, and forcing SSH protocol 2, plus some I did not know, such as MaxStartups, which limits the number of concurrent unauthenticated connections. Port knocking is a useful technique but requires a client-side approach that is aware of the protocol. Oracle admits that its customers' IAM identities leaked https://www.theregister.com/2025/04/08/oracle_cloud_compromised/ Oracle confirmed to some customers that its public cloud was compromised, after previously denying any intrusion. A hacker claimed to have breached two Oracle authentication servers and stolen around six million records, including private security keys, encrypted credentials and LDAP entries. The exploited flaw appears to be the CVE-2021-35587 vulnerability in Oracle Access Manager, which Oracle had not patched on its own systems. In early March the attacker created a text file on login.us2.oraclecloud.com containing their email address to prove the access. According to Oracle, an old server containing eight-year-old data was compromised, but one customer says login data as recent as 2024 was stolen. Oracle faces a lawsuit in Texas over this data breach. The intrusion is distinct from another attack against Oracle Health, on which the company declines to comment. Oracle could face penalties under the European GDPR, which requires notifying affected parties within 72 hours of discovering a data leak.
Oracle's behavior of denying and then quietly admitting the intrusion is unusual in 2025 and could lead to further class-action lawsuits. A very popular GitHub Action compromised https://www.stepsecurity.io/blog/harden-runner-detection-tj-actions-changed-files-action-is-compromised Compromise of the tj-actions/changed-files action: in March 2025, a widely used GitHub Action (tj-actions/changed-files) was compromised. Modified versions of the action exposed CI/CD secrets in build logs. Attack method: a compromised PAT was used to repoint several version tags to a commit containing malicious code. Malicious code details: the injected code executed a base64-encoded Node.js function that downloaded a Python script. That script scanned the GitHub runner's memory for secrets (tokens, keys…) and exposed them in the logs. In some cases, the data was also exfiltrated via a network request. Exposure window: the compromised versions were live between March 12 and 15, 2025. Any repository, particularly public ones, that used the action during that period should be considered potentially exposed. Detection: the malicious activity was spotted through analysis of unusual behavior during workflow execution, such as unexpected network connections. Response: GitHub removed the compromised action, which was subsequently cleaned up. Potential impact: any secrets that appeared in logs must be considered compromised, even in private repositories, and regenerated without delay. Law, society and organization Y Combinator startups are growing faster than at any point in the fund's history https://www.cnbc.com/2025/03/15/y-combinator-startups-are-fastest-growing-in-fund-history-because-of-ai.html Early-stage startups in Silicon Valley are seeing significant growth thanks to artificial intelligence. Y Combinator CEO Garry Tan says the entire latest batch grew 10% week over week for nine months. AI lets developers automate repetitive tasks and generate code with large language models. For about 25% of the current YC startups, 95% of their code was written by AI. This shift lets companies grow with fewer people, some reaching 10 million dollars in revenue with fewer than 10 employees. The "growth at all costs" mindset has been replaced by renewed interest in profitability. About 80% of the companies presented at demo day were AI-centric, with a few startups in robotics and semiconductors. Y Combinator invests 500,000 dollars in each startup in exchange for equity, followed by a three-month program. Red Hat middleware (ex-JBoss) joins IBM https://markclittle.blogspot.com/2025/03/red-hat-middleware-moving-to-ibm.html Red Hat's Middleware activities (including JBoss, Quarkus, etc.) are being transferred to IBM, into the unit dedicated to data security, IAM and runtimes. The change stems from a strategic decision by Red Hat to focus more on hybrid cloud and artificial intelligence. Mark Little explains that the transfer had become inevitable, Red Hat having reduced its investment in Middleware over recent years.
The integration aims to strengthen innovation around Java by combining Red Hat's and IBM's efforts on the subject. The Middleware products will remain open source, and customers will continue to get the usual support without any change. Mark Little states that projects such as Quarkus will continue to be supported and that this evolution is good for the Java community. One year of Commonhaus https://www.commonhaus.org/activity/253.html One year in, having started with the communities they knew well. Now 14 projects, with room to accept more. Trust, lightweight governance, and protecting the future of the projects. Automating the paperwork, stability without complexity, developers at the center of the decision-making process. They need members and (financial) supporters. They want to welcome projects beyond the circle of the Java Champions. Spring Cloud Data Flow becomes a commercial product and will no longer be maintained as open source https://spring.io/blog/2025/04/21/spring-cloud-data-flow-commercial Perhaps under Broadcom's influence, Spring is starting to move components of the Spring portfolio to a proprietary model. They say few people used it in OSS mode and that most usage came from the Tanzu platform. Maintaining it as open source costs them time they would rather not spend on these projects. The CNCF protects the NATS project, in the foundation since 2018, after Synadia, the company that contributes to it, sought to take back control of the project https://www.cncf.io/blog/2025/04/24/protecting-nats-and-the-integrity-of-open-source-cncfs-commitment-to-the-community/ CNCF: protects OSS projects, neutral governance. Synadia vs CNCF: wants to pull NATS out, under a non-OSS license (BUSL). CNCF: accuses Synadia of an illegitimate "claw back". Synadia's claims: the nats.io domain, the GitHub org. NATS trademark: Synadia never transferred it (a promise broken despite CNCF's help). Synadia's counterargument: it considers the CNCF rules "too vague". Internal vote: the Synadia maintainers voted to leave the CNCF (without the community). CNCF support: major investment ($, audits, legal) and community success (>700 orgs). Future of NATS (CNCF): kept under Apache 2.0, open governance. CNCF actions: health check, call for maintainers, cancellation of Synadia's trademark filing, rejection of the demands. But in the end there seems to be a good outcome: https://www.cncf.io/announcements/2025/05/01/cncf-and-synadia-align-on-securing-the-future-of-the-nats-io-project/ Agreement on the future of NATS.io: the Cloud Native Computing Foundation (CNCF) and Synadia have reached an agreement to secure the future of the NATS.io project. Transfer of the NATS trademarks: Synadia will hand over its two NATS trademark registrations to the Linux Foundation to strengthen the project's open governance. Staying within the CNCF: the NATS project's infrastructure and assets will remain under the CNCF's umbrella, guaranteeing its long-term stability and its continued open source development under the Apache-2.0 license. Recognition and commitment: the Linux Foundation, through Todd Moore, acknowledges Synadia's contributions and continued support. Derek Collison, Synadia's CEO, reaffirms his company's commitment to NATS and to collaborating with the Linux Foundation and the CNCF. Adoption and community support: NATS is widely adopted and regarded as critical infrastructure. It enjoys strong community support for its open source nature and Synadia's continued involvement.
Finally, Redis returns to an OSI open source license, the AGPL https://foojay.io/today/redis-is-now-available-under-the-agplv3-open-source-license/ Redis moves to the AGPLv3 open source license to counter exploitation by cloud providers who do not contribute back. The earlier switch to the SSPL license had damaged the relationship with the open source community. Salvatore Sanfilippo (antirez) has returned to Redis. Redis 8 adopts the AGPL license, integrates the Redis Stack features (JSON, Time Series, etc.) and introduces "vector sets" (the vector-computation support developed by Salvatore). These changes aim to reinforce Redis as a platform loved by developers, in line with Salvatore's original vision. Conferences The list of conferences comes from Developers Conferences Agenda/List by Aurélie Vache and contributors: May 6-7, 2025: GOSIM AI Paris - Paris (France) May 7-9, 2025: Devoxx UK - London (UK) May 15, 2025: Cloud Toulouse - Toulouse (France) May 16, 2025: AFUP Day 2025 Lille - Lille (France) May 16, 2025: AFUP Day 2025 Lyon - Lyon (France) May 16, 2025: AFUP Day 2025 Poitiers - Poitiers (France) May 22-23, 2025: Flupa UX Days 2025 - Paris (France) May 24, 2025: Polycloud - Montpellier (France) May 24, 2025: NG Baguette Conf 2025 - Nantes (France) June 3, 2025: TechReady - Nantes (France) June 5-6, 2025: AlpesCraft - Grenoble (France) June 5-6, 2025: Devquest 2025 - Niort (France) June 10-11, 2025: Modern Workplace Conference Paris 2025 - Paris (France) June 11-13, 2025: Devoxx Poland - Krakow (Poland) June 12, 2025: Positive Design Days - Strasbourg (France) June 12-13, 2025: Agile Tour Toulouse - Toulouse (France) June 12-13, 2025: DevLille - Lille (France) June 13, 2025: Tech F'Est 2025 - Nancy (France) June 17, 2025: Mobilis In Mobile - Nantes (France) June 19-21, 2025: Drupal Barcamp Perpignan 2025 - Perpignan (France) June 24, 2025: WAX 2025 - Aix-en-Provence (France) June 25-26, 2025: Agi'Lille 2025 - Lille (France) June 25-27, 2025: BreizhCamp 2025 - Rennes (France) June 26-27, 2025: Sunny Tech - Montpellier (France) July 1-4, 2025: Open edX Conference - 2025 - Palaiseau (France) July 7-9, 2025: Riviera DEV 2025 - Sophia Antipolis (France) September 5, 2025: JUG Summer Camp 2025 - La Rochelle (France) September 12, 2025: Agile Pays Basque 2025 - Bidart (France) September 18-19, 2025: API Platform Conference - Lille (France) & Online September 23, 2025: OWASP AppSec France 2025 - Paris (France) September 25-26, 2025: Paris Web 2025 - Paris (France) October 2-3, 2025: Volcamp - Clermont-Ferrand (France) October 3, 2025: DevFest Perros-Guirec 2025 - Perros-Guirec (France) October 6-10, 2025: Devoxx Belgium - Antwerp (Belgium) October 7, 2025: BSides Mulhouse - Mulhouse (France) October 9-10, 2025: Forum PHP 2025 - Marne-la-Vallée (France) October 9-10, 2025: EuroRust 2025 - Paris (France) October 16, 2025: PlatformCon25 Live Day Paris - Paris (France) October 16-17, 2025: DevFest Nantes - Nantes (France) October 30-31, 2025: Agile Tour Bordeaux 2025 - Bordeaux (France) October 30-31, 2025: Agile Tour Nantais 2025 - Nantes (France) October 30 - November 2, 2025: PyConFR 2025 - Lyon (France) November 4-7, 2025: NewCrafts 2025 - Paris (France) November 6, 2025: dotAI 2025 - Paris (France) November 7, 2025: BDX I/O - Bordeaux (France) November 12-14, 2025: Devoxx Morocco - Marrakech (Morocco) November 13, 2025: DevFest Toulouse - Toulouse (France) November 15-16, 2025: Capitole du Libre - Toulouse (France) November 20, 2025:
OVHcloud Summit - Paris (France) November 21, 2025: DevFest Paris 2025 - Paris (France) November 27, 2025: Devfest Strasbourg 2025 - Strasbourg (France) November 28, 2025: DevFest Lyon - Lyon (France) December 5, 2025: DevFest Dijon 2025 - Dijon (France) December 10-11, 2025: Devops REX - Paris (France) December 10-11, 2025: Open Source Experience - Paris (France) January 28-31, 2026: SnowCamp 2026 - Grenoble (France) February 2-6, 2026: Web Days Convention - Aix-en-Provence (France) April 23-25, 2026: Devoxx Greece - Athens (Greece) June 17, 2026: Devoxx Poland - Krakow (Poland) Contact us To react to this episode, come and discuss it in the Google group https://groups.google.com/group/lescastcodeurs Contact us via X/Twitter https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com Suggest a crowdcast or a crowdquestion Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs All the episodes and all the info at https://lescastcodeurs.com/
In this episode, Lois Houston and Nikita Abraham continue their deep dive into Oracle GoldenGate 23ai, focusing on its evolution and the extensive features it offers. They are joined once again by Nick Wagner, who provides valuable insights into the product's journey. Nick talks about the various iterations of Oracle GoldenGate, highlighting the significant advancements from version 12c to the latest 23ai release. The discussion then shifts to the extensive new features in 23ai, including AI-related capabilities, UI enhancements, and database function integration. Oracle GoldenGate 23ai: Fundamentals: https://mylearn.oracle.com/ou/course/oracle-goldengate-23ai-fundamentals/145884/237273 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ----------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! Last week, we introduced Oracle GoldenGate and its capabilities, and also spoke about GoldenGate 23ai. In today's episode, we'll talk about the various iterations of Oracle GoldenGate since its inception. And we'll also take a look at some new features and the Oracle GoldenGate product family. 00:57 Lois: And we have Nick Wagner back with us. Nick is a Senior Director of Product Management for GoldenGate at Oracle. Hi Nick! I think the last time we had an Oracle University course was when Oracle GoldenGate 12c was out. I'm sure there's been a lot of advancements since then. Can you walk us through those? Nick: GoldenGate 12.3 introduced the microservices architecture. GoldenGate 18c introduced support for Oracle Autonomous Data Warehouse and Autonomous Transaction Processing Databases. In GoldenGate 19c, we added the ability to do cross endian remote capture for Oracle, making it easier to set up the GoldenGate OCI service to capture from environments like Solaris, Spark, and HP-UX and replicate into the Cloud. Also, GoldenGate 19c introduced a simpler process for upgrades and installation of GoldenGate where we released something called a unified build. This means that when you install GoldenGate for a particular database, you don't need to worry about the database version when you install GoldenGate. Prior to this, you would have to install a version-specific and database-specific version of GoldenGate. So this really simplified that whole process. In GoldenGate 23ai, which is where we are now, this really is a huge release. 02:16 Nikita: Yeah, we covered some of the distributed AI features and high availability environments in our last episode. But can you give us an overview of everything that's in the 23ai release? I know there's a lot to get into but maybe you could highlight just the major ones? Nick: Within the AI and streaming environments, we've got interoperability for database vector types, heterogeneous capture and apply as well. 
Again, this is not just replication between Oracle-to-Oracle vector or Postgres to Postgres vector, it is heterogeneous just like the rest of GoldenGate. The entire UI has been redesigned and optimized for high speed. And so we have a lot of customers that have dozens and dozens of extracts and replicats and processes running and it was taking a long time for the UI to refresh those and to show what's going on within those systems. So the UI has been optimized to be able to handle those environments much better. We now have the ability to call database functions directly from call map. And so when you do transformation with GoldenGate, we have about 50 or 60 built-in transformation routines for string conversion, arithmetic operation, date manipulation. But we never had the ability to directly call a database function. 03:28 Lois: And now we do? Nick: So now you can actually call that database function, database stored procedure, database package, return a value and that can be used for transformation within GoldenGate. We have integration with identity providers, being able to use token-based authentication and integrate in with things like Azure Active Directory and your other single sign-on for the GoldenGate product itself. Within Oracle 23ai, there's a number of new features. One of those cool features is something called lock-free reservation columns. So this allows you to have a row, a single row within a table and you can identify a column within that row that's like an inventory column. And you can have multiple different users and multiple different transactions all updating that column within that same exact row at that same time. So you no longer have row-level locking for these reservation columns. And it allows you to do things like shopping carts very easily. If I have 500 widgets to sell, I'm going to let any number of transactions come in and subtract from that inventory column. And then once it gets below a certain point, then I'll start enforcing that row-level locking. 04:43 Lois: That's really cool… Nick: The one key thing that I wanted to mention here is that because of the way that the lock-free reservations work, you can have multiple transactions open on the same row. This is only supported for Oracle to Oracle. You need to have that same lock-free reservation data type and availability on that target system if GoldenGate is going to replicate into it. 05:05 Nikita: Are there any new features related to the diagnosability and observability of GoldenGate? Nick: We've improved the AWR reports in Oracle 23ai. There's now seven sections that are specific to Oracle GoldenGate to allow you to really go in and see exactly what the GoldenGate processes are doing and how they're behaving inside the database itself. And there's a Replication Performance Advisor package inside that database, and that's been integrated into the Web UI as well. So now you can actually get information out of the replication advisor package in Oracle directly from the UI without having to log into the database and try to run any database procedures to get it. We've also added the ability to support a per-PDB Extract. So in the past, when GoldenGate would run on a multitenant database, a multitenant database in Oracle, all the redo data from any pluggable database gets sent to that one redo stream. And so you would have to configure GoldenGate at the container or root level and it would be able to access anything at any PDB. 
Now, there's better security and better performance by doing what we call per-PDB Extract. And this means that for a single pluggable database, I can have an extract that runs at that database level that's going to capture information just from that pluggable database. 06:22 Lois: And what about non-Oracle environments, Nick? Nick: We've also enhanced the non-Oracle environments as well. For example, in Postgres, we've added support for precise instantiation using Postgres snapshots. This eliminates the need to handle collisions when you're doing Postgres to Postgres replication and initial instantiation. On the GoldenGate for big data side, we've renamed that product more aptly to Distributed Applications and Analytics, which is really what it does, and we've added a whole bunch of new features here too. The ability to move data into Databricks, doing Google Pub/Sub delivery. We now have support for XAG within the GoldenGate for distributed applications and analytics. What that means is that now you can follow all of our MAA best practices for GoldenGate for Oracle, but it also works for the DAA product as well, meaning that if it's running on one node of a cluster and that node fails, it'll restart itself on another node in the cluster. We've also added the ability to deliver data to Redis, Google BigQuery, stage and merge functionality for better performance into the BigQuery product. And then we've added a completely new feature, and this is something called streaming data and apps and we're calling it AsyncAPI and CloudEvent data streaming. It's a long name, but what that means is that we now have the ability to publish changes from a GoldenGate trail file out to end users. And so this allows through the Web UI or through the REST API, you can now come into GoldenGate and through the distributed applications and analytics product, actually set up a subscription to a GoldenGate trail file. And so this allows us to push data into messaging environments, or you can simply subscribe to changes and it doesn't have to be the whole trail file, it can just be a subset. You can specify exactly which tables and you can put filters on that. You can also set up your topologies as well. So, it's a really cool feature that we've added here. 08:26 Nikita: Ok, you've given us a lot of updates about what GoldenGate can support. But can we also get some specifics? Nick: So as far as what we have, on the Oracle Database side, there's a ton of different Oracle databases we support, including the Autonomous Databases and all the different flavors of them, your Oracle Database Appliance, your Base Database Service within OCI, your of course, Standard and Enterprise Edition, as well as all the different flavors of Exadata, are all supported with GoldenGate. This is all for capture and delivery. And this is all versions as well. GoldenGate supports Oracle 23ai and below. We also have a ton of non-Oracle databases in different Cloud stores. On the non-Oracle side, we support everything from application-specific databases like FairCom DB, all the way to more advanced applications like Snowflake, which there's a vast user base for that. We also support a lot of different cloud stores and these again, are non-Oracle, nonrelational systems, or they can be relational databases. We also support a lot of big data platforms and this is part of the distributed applications and analytics side of things where you have the ability to replicate to different Apache environments, different Cloudera environments. 
We also support a number of open-source systems, including things like Apache Cassandra, MySQL Community Edition, a lot of different Postgres open source databases along with MariaDB. And then we have a bunch of streaming event products, NoSQL data stores, and even Oracle applications that we support. So there's absolutely a ton of different environments that GoldenGate supports. There are additional Oracle databases that we support and this includes the Oracle Metadata Service, as well as Oracle MySQL, including MySQL HeatWave. Oracle also has Oracle NoSQL, Spatial and Graph, and TimesTen products, which again are all supported by GoldenGate. 10:23 Lois: Wow, that's a lot of information! Nick: One of the things that we didn't really cover was the different SaaS applications, which we've got like Cerner, Fusion Cloud, Hospitality, Retail, MICROS, Oracle Transportation, JD Edwards, Siebel, and on and on and on. And again, because of the nature of GoldenGate, it's heterogeneous. Any source can talk to any target. And so it doesn't have to be, oh, I'm pulling from Oracle Fusion Cloud, that means I have to go to an Oracle Database on the target, not necessarily. 10:51 Lois: So, there's really a massive amount of flexibility built into the system. 11:00 Unlock the power of AI Vector Search with our new course and certification. Get more accurate search results, handle complex datasets easily, and supercharge your data-driven decisions. From now through May 15, 2025, we are waiving the certification exam fee (valued at $245). Visit mylearn.oracle.com to enroll. 11:26 Nikita: Welcome back! Now that we've gone through the base product, what other features or products are in the GoldenGate family itself, Nick? Nick: So we have quite a few. We've kind of touched already on GoldenGate for Oracle databases and non-Oracle databases. We also have something called GoldenGate for Mainframe, which right now is covered under the GoldenGate for non-Oracle, but there is a licensing difference there. So that's something to be aware of. We also have the OCI GoldenGate product. We are announcing and we have announced that OCI GoldenGate will also be made available as part of the Oracle Database@Azure and Oracle Database@Google Cloud partnerships. And then you'll be able to use that vendor's cloud credits to actually pay for the OCI GoldenGate product. One of the cool things about this is it will have full feature parity with OCI GoldenGate running in OCI. So all the same features, all the same sources and targets, all the same topologies, be able to migrate data in and out of those clouds at will, just like you do with OCI GoldenGate today running in OCI. We have Oracle GoldenGate Free. This is a completely free edition of GoldenGate to use. It is limited on the number of platforms that it supports as far as sources and targets and the size of the database. 
We also have OCI GoldenGate Marketplace, which allows you to run essentially the on-premises version of GoldenGate but within OCI. So a little bit more flexibility there. It also has a hub architecture. So if you need that 99.99% availability, you can get it within the OCI Marketplace environment. We have GoldenGate for Oracle Enterprise Manager Cloud Control, which used to be called Oracle Enterprise Manager. And this allows you to use Enterprise Manager Cloud Control to get all the statistics and details about GoldenGate. So all the reporting information, all the analytics, all the statistics, how fast GoldenGate is replicating, what's the lag, what's the performance of each of the processes, how much data am I sending across a network. All that's available within the plug-in. We also have Oracle GoldenGate Veridata. This is a nice utility and tool that allows you to compare two databases, whether or not GoldenGate is running between them and actually tell you, hey, these two systems are out of sync. And if they are out of sync, it actually allows you to repair the data too. 14:25 Nikita: That's really valuable…. Nick: And it does this comparison without locking the source or the target tables. The other really cool thing about Veridata is it does this while there's data in flight. So let's say that the GoldenGate lag is 15 or 20 seconds and I want to compare this table that has 10 million rows in it. The Veridata product will go out, run its comparison once. Once that comparison is done the first time, it's then going to have a list of rows that are potentially out of sync. Well, some of those rows could have been moved over or could have been modified during that 10 to 15 second window. And so the next time you run Veridata, it's actually going to go through. It's going to check just those rows that were potentially out of sync to see if they're really out of sync or not. And if it comes back and says, hey, out of those potential rows, there's two out of sync, it'll actually produce a script that allows you to resynchronize those systems and repair them. So it's a very cool product. 15:19 Nikita: What about GoldenGate Stream Analytics? I know you mentioned it in the last episode, but in the context of this discussion, can you tell us a little more about it? Nick: This is the ability to essentially stream data from a GoldenGate trail file, and they do a real time analytics on it. And also things like geofencing or real-time series analysis of it. 15:40 Lois: Could you give us an example of this? Nick: If I'm working in tracking stock market information and stocks, it's not really that important on how much or how far down a stock goes. What's really important is how quickly did that stock rise or how quickly did that stock fall. And that's something that GoldenGate Stream Analytics product can do. Another thing that it's very valuable for is the geofencing. I can have an application on my phone and I can track where the user is based on that application and all that information goes into a database. I can then use the geofencing tool to say that, hey, if one of those users on that app gets within a certain distance of one of my brick-and-mortar stores, I can actually send them a push notification to say, hey, come on in and you can order your favorite drink just by clicking Yes, and we'll have it ready for you. And so there's a lot of things that you can do there to help upsell your customers and to get more revenue just through GoldenGate itself. 
And then we also have a GoldenGate Migration Utility, which allows customers to migrate from the classic architecture into the microservices architecture. 16:44 Nikita: Thanks Nick for that comprehensive overview. Lois: In our next episode, we'll have Nick back with us to talk about commonly used terminology and the GoldenGate architecture. And if you want to learn more about what we discussed today, visit mylearn.oracle.com and take a look at the Oracle GoldenGate 23ai Fundamentals course. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 17:10 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
In this episode, Edo explores a musical hack tied to Janet Jackson's song 'Rhythm Nation', which could damage hard drives. Also discussed is a curious legal case in which a lawyer was reprimanded for submitting documents with inappropriate watermarks. The episode then looks at the Google monopoly question and its impact on browsers, with particular attention to how Google funds Safari and Firefox. Finally, it announces the return of Redis as an open-source product with the new 8.0 release.
Jonathan reviews the OrangePI RV2, Windows runs Arch btw, and Nvidia is deprecating CUDA for some old video cards. PewDiePie made a Linux video, Proton 10 enters Beta, and OSU's Open Source Lab has a funding crunch. For command line tips, Ken starts a series on the pw-cli, Jeff has some ricing tips with eww, and Jonathan talks about Open Source character recognition with ocrmypdf and pdftotext. You can find the show notes at https://bit.ly/3GxPRbY and enjoy! Host: Jonathan Bennett Co-Hosts: Ken McDonald and Jeff Massie Download or subscribe to Untitled Linux Show at https://twit.tv/shows/untitled-linux-show Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.
video: https://youtu.be/PtP_jOlAIHE Comment on the TWIL Forum (https://thisweekinlinux.com/forum) This week in Linux, we have a lot of cool stuff to talk about. First, we're going to talk about the future of KDE Plasma. Then we're going to go into the future of OpenSUSE because Leap 16 beta has been released and the final version will be coming out this year. Then we'll also have a new version of Mozilla Firefox. And also we have some interesting news from the Oregon State University because there's some potential risk of closure of their Open Source Lab, which would be a shame. And then we're also going to talk about Redis because they're back in the news this week because they want to redo with Open Source. All of this and more on This Week in Linux, the weekly news show that keeps you up to date with what's going on in the Linux and Open Source world. Now let's jump right into Your Source for Linux GNews. Download as MP3 (https://aphid.fireside.fm/d/1437767933/2389be04-5c79-485e-b1ca-3a5b2cebb006/7cb90299-be53-43c6-90d5-019ad4489590.mp3) Support the Show Become a Patron = tuxdigital.com/membership (https://tuxdigital.com/membership) Store = tuxdigital.com/store (https://tuxdigital.com/store) Chapters: 00:00 Intro 00:52 Future of KDE Plasma: LTS, Telementry, & more 07:45 openSUSE Leap 16 Beta Released 13:49 Trinity Desktop R14.1.4 Released 17:17 Sandfly Security, agentless Linux security 19:15 Mozilla Firefox 138 Released 27:09 OSU Open Source Lab At Risk Of Closure 31:46 Redis wants a Redo with Open Source 37:28 Red Hat Summit 2025 39:47 Support the show Links: Future of KDE Plasma: LTS, Telementry, & more https://pointieststick.com/2025/05/01/notes-from-the-graz-plasma-sprint/ (https://pointieststick.com/2025/05/01/notes-from-the-graz-plasma-sprint/) openSUSE Leap 16 Beta Released https://news.opensuse.org/2025/04/30/leap-16-enters-beta/ (https://news.opensuse.org/2025/04/30/leap-16-enters-beta/) https://news.opensuse.org/2025/05/02/tw-monthly-update-april/ (https://news.opensuse.org/2025/05/02/tw-monthly-update-april/) Trinity Desktop R14.1.4 Released https://www.trinitydesktop.org/ (https://www.trinitydesktop.org/) https://www.trinitydesktop.org/newsentry.php?entry=2025.04.27 (https://www.trinitydesktop.org/newsentry.php?entry=2025.04.27) Sandfly Security, agentless Linux security https://thisweekinlinux.com/sandfly (https://thisweekinlinux.com/sandfly) https://destinationlinux.net/409 (https://destinationlinux.net/409) Mozilla Firefox 138 Released https://www.mozilla.org/en-US/firefox/138.0/releasenotes/ (https://www.mozilla.org/en-US/firefox/138.0/releasenotes/) https://blog.mozilla.org/en/firefox/tab-groups-community/ (https://blog.mozilla.org/en/firefox/tab-groups-community/) https://www.howtogeek.com/235670/organize-manage-your-firefox-tabs-like-a-pro-with-the-tab-groups-add-on/ (https://www.howtogeek.com/235670/organize-manage-your-firefox-tabs-like-a-pro-with-the-tab-groups-add-on/) OSU Open Source Lab At Risk Of Closure https://osuosl.org/blog/osl-future/ (https://osuosl.org/blog/osl-future/) Redis wants a Redo with Open Source https://redis.io/blog/agplv3/ (https://redis.io/blog/agplv3/) https://antirez.com/news/151 (https://antirez.com/news/151) https://youtu.be/r67MRruNhow (https://youtu.be/r67MRruNhow) Red Hat Summit 2025 https://www.redhat.com/en/summit (https://www.redhat.com/en/summit) https://www.redhat.com/en/blog/red-hat-summit-ansiblefest-2025-ansible-sessions-you-dont-want-miss 
(https://www.redhat.com/en/blog/red-hat-summit-ansiblefest-2025-ansible-sessions-you-dont-want-miss) Support the show https://tuxdigital.com/membership (https://tuxdigital.com/membership) https://store.tuxdigital.com/ (https://store.tuxdigital.com/)
Wondering how to build your own on-premises PaaS architecture without depending on the cloud? In this episode, the Patoarchitekci analyze open-source alternatives to cloud services. Łukasz and Szymon discuss Kubernetes, Rancher, and other key components of a self-hosted platform. The hosts break down building a PaaS platform piece by piece: from database operators and a Redis cache, through Minio object storage, to monitoring with Grafana. You'll learn when it makes sense to move workloads from the cloud to on-prem and how to avoid the typical pitfalls of building your own infrastructure. Want to regain control over your infrastructure and costs? Listen to this episode and decide whether building your own PaaS is a good idea for your organization. Just remember that an MVP of the platform is only the beginning – the real challenges start when you have to maintain it. No slacking off now!
Today's guest is Salvatore Sanfilippo, also known as Antirez! Salvatore is the creator of Redis, an open source data store used by hundreds of thousands of developers across the world. And he's also a writer. He published the popular sci-fi novel called Wohpe, which anticipated a lot of what is happening today with AI. With Salvatore, we talked about open source — what makes a project successful and how to make it successful for its maintainers. Then we discussed how Salvatore uses AI for extremely complex coding, which makes him 5x faster than without AI. And finally, I asked him about his writing work and what the experience of writing a novel is versus a software project. 02:55 Introduction 06:47 Salvatore's life priorities 12:40 Developing an open source project in 2025 18:14 Being a generalist with AI support 19:25 Making an open source project sustainable 23:54 Complexity of modern front-end development 24:58 Big Tech, complexity and open source 30:19 Escaping the complexity loop 33:40 The rebirth of fullstack through AI 34:53 How Salvatore creates code with AI 41:16 Does AI increase code productivity? 44:14 The importance of the right prompt 47:06 Writing tests in the AI era of development 50:49 Wohpe, Salvatore's sci-fi novel 53:28 Writing a novel and writing code — This episode is brought to you by https://workos.com — Some show notes: • Salvatore's website: https://antirez.com/ • Salvatore's Youtube: / @antirez • Wohpe: / wohpe — You can also find this at:
This week, we discuss the rise of MCP, Google's Agent2Agent protocol, and 20 years of Git. Plus, lazy ways to get rid of your junk. Watch the YouTube Live Recording of Episode (https://www.youtube.com/live/o2bmkzXOzHE?si=bPrbuPlKYODQj88s) 514 (https://www.youtube.com/live/o2bmkzXOzHE?si=bPrbuPlKYODQj88s) Runner-up Titles They like to keep it tight, but I'll distract them Bring some SDT energy Salesforce is where AI goes to struggle I like words Rundown MCP The Strategy Behind MCP (https://fintanr.com/links/2025/03/31/mcp-strategy.html?utm_source=substack&utm_medium=email) Google's Agent2Agent Protocol Helps AI Agents Talk to Each Other (https://thenewstack.io/googles-agent2agent-protocol-helps-ai-agents-talk-to-each-other/) Announcing the Agent2Agent Protocol (A2A)- Google Developers Blog (https://developers.googleblog.com/en/a2a-a-new-era-of-agent-interoperability/) MCP: What It Is and Why It Matters (https://addyo.substack.com/p/mcp-what-it-is-and-why-it-matters) 20 years of Git. Still weird, still wonderful. (https://blog.gitbutler.com/20-years-of-git/) A love letter to the CSV format (https://github.com/medialab/xan/blob/master/docs/LOVE_LETTER.md?ref=labnotes.org) Relevant to your Interests JFrog Survey Surfaces Limited DevSecOps Gains - DevOps.com (https://substack.com/redirect/dc38a19b-484e-47bc-83ec-f0413af42718?j=eyJ1IjoiMmw5In0.XyGUvWHNbIDkkVfjKDkxiDWJVFXc4dKUhxHaMrlgmdI) Raspberry Pi's sliced profits are easier to swallow than its valuation (https://on.ft.com/42d3mol) 'I begin spying for Deel': (https://www.yahoo.com/news/begin-spying-deel-rippling-employee-151407449.html) Bill Gates Publishes Original Microsoft Source Code in a Blog Post (https://www.cnet.com/tech/computing/bill-gates-publishes-original-microsoft-source-code-in-a-blog-post/) WordPress.com owner Automattic is laying off 16 percent of workers (https://www.theverge.com/news/642187/automattic-wordpress-layoffs-matt-mullenweg) Intel, TSMC recently discussed chipmaking joint venture (https://www.reuters.com/technology/intel-tsmc-tentatively-agree-form-chipmaking-joint-venture-information-reports-2025-04-03/) TikTok deal scuttled because of Trump's tariffs on China (https://www.nbcnews.com/politics/politics-news/trump-tiktok-ban-extension-rcna199394) NVIDIA Finally Adds Native Python Support to CUDA (https://thenewstack.io/nvidia-finally-adds-native-python-support-to-cuda/) Cloudflare Acquires Outerbase (https://www.cloudflare.com/press-releases/2025/cloudflare-acquires-outerbase-to-expand-developer-experience/) UK loses bid to keep Apple appeal against demand for iPhone 'backdoor' a secret (https://www.cnbc.com/2025/04/07/uk-loses-bid-to-keep-apple-appeal-against-iphone-backdoor-a-secret.html) Cloud Asteroids | Wiz (https://www.wiz.io/asteroids) Unpacking Google Cloud Platform's Acquisition Of Wiz (https://moorinsightsstrategy.com/unpacking-google-cloud-platforms-acquisition-of-wiz/) Trade, Tariffs, and Tech 
(https://stratechery.com/2025/trade-tariffs-and-tech/?access_token=eyJhbGciOiJSUzI1NiIsImtpZCI6InN0cmF0ZWNoZXJ5LnBhc3Nwb3J0Lm9ubGluZSIsInR5cCI6IkpXVCJ9.eyJhdWQiOiJzdHJhdGVjaGVyeS5wYXNzcG9ydC5vbmxpbmUiLCJhenAiOiJIS0xjUzREd1Nod1AyWURLYmZQV00xIiwiZW50Ijp7InVyaSI6WyJodHRwczovL3N0cmF0ZWNoZXJ5LmNvbS8yMDI1L3RyYWRlLXRhcmlmZnMtYW5kLXRlY2gvIl19LCJleHAiOjE3NDY2MjA4MTAsImlhdCI6MTc0NDAyODgxMCwiaXNzIjoiaHR0cHM6Ly9hcHAucGFzc3BvcnQub25saW5lL29hdXRoIiwic2NvcGUiOiJmZWVkOnJlYWQgYXJ0aWNsZTpyZWFkIGFzc2V0OnJlYWQgY2F0ZWdvcnk6cmVhZCBlbnRpdGxlbWVudHMiLCJzdWIiOiJDS1RtckdldHdmM1lYa3FCYkpKaUgiLCJ1c2UiOiJhY2Nlc3MifQ.pVeppxFZcYy960AbHM--oz5gzQdMEa_mv3ZPrqrZmbw9PhwL3iCEQ7_PtfPEKgInTfvSGWofXW0ZjAN-G_Eug5BlvwlF8T6HhXOCNJlwJJeqkWKvNdjvVz0t6bc5fOjn4Tbt_JobtrwxIEe-4-L7QRMhzFj9ajiiRqU6KNi3qYxWScg3XWfYmuhRdItQsgWINcSyW9iLaTkDLga_m95MMBNAat-CXDhEeKKCrAApZBM_RoNFaQ3s679vslz2IbJuCIAN1jVvZYR2Vg18lDbwubPiddDQAOkjs77PZRX_tCnMSwVXtOq0S1cCn4GZIw1qPY8j0qWWmkUck_izqPAveg) Google Workspace gets automation flows, podcast-style summaries (https://techcrunch.com/2025/04/09/google-workspace-gets-automation-flows-podcast-style-summaries/?guccounter=1&guce_referrer=aHR0cHM6Ly9uZXdzLmdvb2dsZS5jb20v&guce_referrer_sig=AQAAAAm5axmZnaAYjPgnDoqozIFkZHFPG8FHWa9y8pWwoQMN-oJ8MvJjY0IOg7Ej35bBB1Y2Ej192X3dHr5Q8PZ4i8WP_VNeXKj4f1n-KXFgqrpjfjUbiUvE4eGIl1j1VPWIg62ApISVGhYQ-__bXdIteBex8_k5-wxcpSYtfmlAFxsk) Zelle is shutting down its app. Here's how you can still use the service (https://www.cnn.com/2025/04/03/business/zelle-cash-transferring-app-shuts-down/index.html) One year ago Redis changed its license – and lost most of its external contributors (https://devclass.com/2025/04/01/one-year-ago-redis-changed-its-license-and-lost-most-of-its-external-contributors/?ck_subscriber_id=512840665&utm_source=convertkit&utm_medium=email&utm_campaign=[Last%20Week%20in%20AWS]%20Issue%20#417:%20Way%20of%20the%20Weasel,%20RDS%20and%20SageMaker%20Edition%20-%2017192200) Tailscale raises $160 Million (USD) Series C to build the New Internet (https://tailscale.com/blog/series-c) Nonsense NFL announces use of virtual measurement technology for first downs (https://www.nytimes.com/athletic/6247338/2025/04/01/nfl-announces-virtual-first-down-measurement-technology/?source=athletic_scoopcity_newsletter&campaign=13031970&userId=56655) Listener Feedback GitJobs (https://gitjobs.dev/) Freecycle (https://www.freecycle.org) Conferences Tanzu Annual Update AI PARTY! (https://go-vmware.broadcom.com/april-moment-2025?utm_source=cote&utm_campaign=devrel&utm_medium=newsletter), April 16th, Coté speaking DevOps Days Atlanta (https://devopsdays.org/events/2025-atlanta/welcome/), April 29th-30th Cloud Foundry Day US (https://events.linuxfoundation.org/cloud-foundry-day-north-america/), May 14th, Palo Alto, CA, Coté speaking Fr (https://vmwarereg.fig-street.com/051325-tanzu-workshop/)ee AI workshop (https://vmwarereg.fig-street.com/051325-tanzu-workshop/), May 13th. 
day before Cloud Foundry Day (https://events.linuxfoundation.org/cloud-foundry-day-north-america/) NDC Oslo (https://ndcoslo.com/), May 21st-23rd, Coté speaking SDT News & Community Join our Slack community (https://softwaredefinedtalk.slack.com/join/shared_invite/zt-1hn55iv5d-UTfN7mVX1D9D5ExRt3ZJYQ#/shared-invite/email) Email the show: questions@softwaredefinedtalk.com (mailto:questions@softwaredefinedtalk.com) Free stickers: Email your address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) Follow us on social media: Twitter (https://twitter.com/softwaredeftalk), Threads (https://www.threads.net/@softwaredefinedtalk), Mastodon (https://hachyderm.io/@softwaredefinedtalk), LinkedIn (https://www.linkedin.com/company/software-defined-talk/), BlueSky (https://bsky.app/profile/softwaredefinedtalk.com) Watch us on: Twitch (https://www.twitch.tv/sdtpodcast), YouTube (https://www.youtube.com/channel/UCi3OJPV6h9tp-hbsGBLGsDQ/featured), Instagram (https://www.instagram.com/softwaredefinedtalk/), TikTok (https://www.tiktok.com/@softwaredefinedtalk) Book offer: Use code SDT for $20 off "Digital WTF" by Coté (https://leanpub.com/digitalwtf/c/sdt) Sponsor the show (https://www.softwaredefinedtalk.com/ads): ads@softwaredefinedtalk.com (mailto:ads@softwaredefinedtalk.com) Recommendations Brandon: KONNWEI KW208 12V Car Battery Tester (https://www.amazon.com/dp/B08MPXGSGN?ref=ppx_yo2ov_dt_b_fed_asin_title) Matt: Search Engine: The Memecoin Casino (https://www.searchengine.show/planet-money-the-memecoin-casino/) Coté: Knipex Cobra High-Tech Water Pump Pliers (https://www.amazon.com/atramentized-125-self-service-87-01/dp/B098D1HNGY/) Photo Credits Header (https://unsplash.com/photos/a-bicycle-parked-on-the-side-of-a-road-next-to-a-traffic-sign-wPv1QV_i8ek)
In this episode, we cover the programming news and updates that caught our attention from March 29 to April 4.
Arcade, an AI agent infrastructure startup founded by former Okta exec Alex Salazar and former Redis engineer Sam Partee, has raised $12 million from Laude Ventures. Laude is the new fund launched in 2024 by Perplexity co-founder Andy Konwinski, the UC Berkeley computer scientist who also co-founded Databricks. This isn't the only check Laude has cut. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Plant-Based Meats and Puberty, Obesity, and Fracture Risk What are the effects of plant-based meats on premature puberty, childhood obesity, and hip fracture risk? Listen to today's episode to find out more. Written by Dr. Michael Greger at @NutritionFacts.org #vegan #plantbased #plantbasedbriefing #plantbasedmeat #veganmeat #beyondmeat #impossiblemeat #obesity #hipfracture #puberty ============================ Original post: https://nutritionfacts.org/blog/plant-based-meats-and-puberty-obesity-and-fracture-risk/ Related Episodes: 850: Beyond Meat 4.0 Is Healthier and Even Tastier https://plantbasedbriefing.libsyn.com/850-beyond-meat-40-is-healthier-and-even-tastier-by-jane-velez-mitchell-at-unchainedtvcom 787: Red Is the New Green: Why Plant-Based Brands Need to Change Color https://plantbasedbriefing.libsyn.com/787-red-is-the-new-green-why-plant-based-brands-need-to-change-color-by-charlotte-pointing-at-vegnewscom 616: There's Nothing Natural About Modern Meat https://plantbasedbriefing.libsyn.com/616-theres-nothing-natural-about-modern-meat-by-jessica-scott-reid-at-sentientmediaorg 379: The Significant, Unalloyed Goodness of Replacing Animal Agriculture https://plantbasedbriefing.libsyn.com/379-the-significant-unalloyed-goodness-of-replacing-animal-agriculture-by-dr-karthik-sekar-at-aftermeatbookcom 183: Are Beyond Meat and the Impossible Burger Healthy? https://plantbasedbriefing.libsyn.com/183-are-beyond-meat-and-the-impossible-burger-healthy-by-dr-michael-greger-at-nutritionfactsorg ============================ Dr. Michael Greger is a physician, New York Times bestselling author, and internationally recognized speaker on nutrition, food safety, and public health issues. A founding member and Fellow of the American College of Lifestyle Medicine, Dr. Greger is licensed as a general practitioner specializing in clinical nutrition. He is a graduate of the Cornell University School of Agriculture and Tufts University School of Medicine. He founded NUTRITIONFACTS.ORG, a non-profit, non-commercial, science-based public service providing free updates on the latest in nutrition research via bite-sized videos. There are more than a thousand videos on nearly every aspect of healthy eating, with new videos and articles uploaded every day. His latest books — How Not to Age, How Not to Die, the How Not to Die Cookbook, and How Not to Diet — became instant New York Times Best Sellers. His two latest books, How to Survive a Pandemic and the How Not to Diet Cookbook, were released in 2020. 100% of all proceeds he has ever received from his books, DVDs, and speaking engagements have always and will always be donated to charity. ============================== FOLLOW THE SHOW ON: YouTube: https://www.youtube.com/@plantbasedbriefing Spotify: https://open.spotify.com/show/2GONW0q2EDJMzqhuwuxdCF?si=2a20c247461d4ad7 Apple Podcasts: https://podcasts.apple.com/us/podcast/plant-based-briefing/id1562925866 Your podcast app of choice: https://pod.link/1562925866 Facebook: https://www.facebook.com/PlantBasedBriefing LinkedIn: https://www.linkedin.com/company/plant-based-briefing/ Instagram: https://www.instagram.com/plantbasedbriefing/
In this Tactics for Tech Leadership podcast episode, Andy and Mon-Chaio explore SWIFT (Structured What If Technique). While SWIFT is traditionally seen as a technical tool for failure analysis, the hosts consider its potential applications in leadership and organizational contexts. Listeners will learn how SWIFT can help anticipate system failures even before they occur, from technical systems like Redis caches to socio-technical systems like performance reviews and hiring processes. By the end, you'll understand how to adapt this structured method for diagnosing issues and improving both technical and organizational systems. Transcript: https://thettlpodcast.com/2025/03/18/s3e10-swiftly-understanding-failure-modes/ References: SWIFT - https://www.asems.mod.uk/toolkit/swift
The GeekNarrator memberships can be joined here: https://www.youtube.com/channel/UC_mGuY4g0mggeUGM6V1osdA/joinMembership will get you access to member only videos, exclusive notes and monthly 1:1 with me. Here you can see all the member only videos: https://www.youtube.com/playlist?list=UUMO_mGuY4g0mggeUGM6V1osdA------------------------------------------------------------------------------------------------------------------------------------------------------------------About this episode: ------------------------------------------------------------------------------------------------------------------------------------------------------------------In this episode, Kaivalya Apte and Frederic Branczyk talk about observability, focusing on continuous profiling and the role of eBPF. They discuss the evolution of profiling techniques, the importance of systematic data collection, and the challenges faced in maintaining low overhead while gathering detailed performance metrics.Frederic shares insights from his extensive experience with Prometheus and Kubernetes, emphasizing the transformative impact of continuous profiling on software performance optimization. This conversation delves into the intricacies of eBPF (Extended Berkeley Packet Filter) and its applications in profiling and performance analysis. The discussion covers the capabilities of eBPF in extending the kernel safely, the mechanisms of user space profiling, and the handling of process terminations. It also explores memory and network profiling techniques, the challenges of profiling in different programming environments, and the limitations of eBPF in certain use cases. The conversation concludes with valuable resources for those interested in learning more about eBPF and profiling techniques.Chapters:00:00 Introduction to Observability and Profiling01:17 Frederic's Background and Expertise02:11 The Importance of Continuous Profiling06:46 The Value of Continuous Profiling11:20 Understanding Profiling Data19:09 Data Structures and Performance in Profiling32:35 The Role of eBPF in Profiling42:48 Introduction to eBPF and Its Capabilities48:32 User Space Profiling and Memory Management51:39 Handling Process Termination and Agent Recovery55:27 Memory and Network Profiling Techniques01:01:33 Profiling in Different Programming Environments01:11:47 Use Cases and Limitations of eBPF in Profiling01:13:54 Resources for Learning eBPF and Profiling Techniques------------------------------------------------------------------------------------------------------------------------------------------------------------------Like building real stuff?------------------------------------------------------------------------------------------------------------------------------------------------------------------Try out CodeCrafters and build amazing real world systems like Redis, Kafka, Sqlite. Use the link below to signup and get 40% off on paid subscription.https://app.codecrafters.io/join?via=geeknarrator------------------------------------------------------------------------------------------------------------------------------------------------------------------Link to other playlists. 
LIKE, SHARE and SUBSCRIBE ------------------------------------------------------------------------------------------------------------------------------------------------------------------ Database internals series: https://youtu.be/yV_Zp0Mi3xs Popular playlists: Realtime streaming systems: https://www.youtube.com/playlist?list=PLL7QpTxsA4se-mAKKoVOs3VcaP71X_LA- Software Engineering: https://www.youtube.com/playlist?list=PLL7QpTxsA4sf6By03bot5BhKoMgxDUU17 Distributed systems and databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4sfLDUnjBJXJGFhhz94jDd_d Modern databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4scSeZAsCUXijtnfW5ARlrsN Stay Curious! Keep Learning!
The GeekNarrator memberships can be joined here: https://www.youtube.com/channel/UC_mGuY4g0mggeUGM6V1osdA/joinMembership will get you access to member only videos, exclusive notes and monthly 1:1 with me. Here you can see all the member only videos: https://www.youtube.com/playlist?list=UUMO_mGuY4g0mggeUGM6V1osdA------------------------------------------------------------------------------------------------------------------------------------------------------------------About this episode: ------------------------------------------------------------------------------------------------------------------------------------------------------------------In this episode, we talk to Søren Schmidt, Co-Founder and CEO of Prisma, discussing the evolution of Prisma from a backend as a service to a popular ORM and now to Prisma Postgres. He shares insights into the challenges faced during this journey, the importance of user feedback, and the innovative architecture of Prisma Postgres, which leverages micro VMs for performance optimization. The conversation also touches on the complexities of managing data centers and the strategies employed to ensure a seamless user experience. In this conversation, Søren Schmidt discusses the details about Postgres snapshots, their impact on performance, and the mechanisms for fault tolerance. He explains how Pulse change data capture works and how Prisma Postgres simplifies database management for users. Chapters00:00 Introduction to Prisma and Its Evolution03:00 The Journey from ORM to Prisma Postgres06:00 Simplifying Database Management09:01 Understanding Prisma Postgres Architecture12:12 The Role of Accelerate in Query Routing14:51 Optimizing Query Processing with Micro VMs18:12 Maintaining Postgres Integrity in a Micro VM Environment21:07 User Experience and Community Feedback23:57 Challenges of Data Center Management27:09 Cold Starts and Performance Optimization34:30 Understanding Snapshots in Postgres38:55 Snapshot Mechanisms and Fault Tolerance44:09 Change Data Capture with Pulse55:07 Transitioning to Prisma Postgres58:45 Community and Getting Started with Prisma PostgresSome blogs worth checking out:https://www.prisma.io/blog/prisma-postgres-the-future-of-serverless-databaseshttps://www.prisma.io/blog/cloudflare-unikernels-and-bare-metal-life-of-a-prisma-postgres-queryhttps://www.prisma.io/blog/announcing-prisma-postgres-early-accessPrisma Postgres relies heavily on the Unikraft project. There is a good introductory talk here: https://www.youtube.com/watch?v=n4wOyAuNhl0And some very technical papers here: https://unikraft.org/community/papersThe best way to get started with Prisma Postgres is to go straight to https://www.prisma.io/ ------------------------------------------------------------------------------------------------------------------------------------------------------------------Like building real stuff?------------------------------------------------------------------------------------------------------------------------------------------------------------------Try out CodeCrafters and build amazing real world systems like Redis, Kafka, Sqlite. 
Use the link below to signup and get 40% off on paid subscription. https://app.codecrafters.io/join?via=geeknarrator ------------ Database internals series: https://youtu.be/yV_Zp0Mi3xs Popular playlists: Realtime streaming systems: https://www.youtube.com/playlist?list=PLL7QpTxsA4se-mAKKoVOs3VcaP71X_LA- Software Engineering: https://www.youtube.com/playlist?list=PLL7QpTxsA4sf6By03bot5BhKoMgxDUU17 Distributed systems and databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4sfLDUnjBJXJGFhhz94jDd_d Modern databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4scSeZAsCUXijtnfW5ARlrsN
The GeekNarrator memberships can be joined here: https://www.youtube.com/channel/UC_mGuY4g0mggeUGM6V1osdA/joinMembership will get you access to member only videos, exclusive notes and monthly 1:1 with me. Here you can see all the member only videos: https://www.youtube.com/playlist?list=UUMO_mGuY4g0mggeUGM6V1osdA------------------------------------------------------------------------------------------------------------------------------------------------------------------About this episode: ------------------------------------------------------------------------------------------------------------------------------------------------------------------In this conversation, Alex from Red Panda discusses his engineering background, the challenges faced in reliability engineering, and the journey of building a better streaming system. He emphasizes the importance of understanding latency and performance in engineering systems, the market position of Red Panda in relation to Kafka, and the complexities involved in optimizing codebases for better performance. In this conversation, Alex discusses Red Panda's architecture, focusing on its thread architecture, memory allocation mechanics, and the importance of protocol correctness. He highlights how Red Panda stands out in the data systems landscape by eliminating unnecessary complexities and optimizing performance across various latency spectrums. The discussion also touches on the future of data processing, emphasizing the shift towards agentic workloads and the integration of analytical and operational layers.Chapters00:00 Introduction11:07 Building a Better Streaming System19:10 Market Position and Competition25:06 Optimizing Latency and Performance32:38 Understanding Complexity in Codebases33:36 Thread Architecture and Concurrency Models39:39 Memory Allocation Mechanics47:31 Protocol Correctness and Optimization Strategies56:27 Red Panda's Unique Position in Data Systems01:02:05 The Future of Data Processing and Agentic WorkloadsBlogs:TPC buffers: https://www.redpanda.com/blog/tpc-buffershttps://www.redpanda.com/blog/always-on-production-memory-profiling-seastarhttps://www.redpanda.com/blog/end-to-end-data-pipelines-types-benefits-and-process------------------------------------------------------------------------------------------------------------------------------------------------------------------Like building real stuff?------------------------------------------------------------------------------------------------------------------------------------------------------------------Try out CodeCrafters and build amazing real world systems like Redis, Kafka, Sqlite. Use the link below to signup and get 40% off on paid subscription.https://app.codecrafters.io/join?via=geeknarrator------------------------------------------------------------------------------------------------------------------------------------------------------------------Link to other playlists. LIKE, SHARE and SUBSCRIBE------------------------------------------------------------------------------------------------------------------------------------------------------------------If you like this episode, please hit the like button and share it with your network. 
Also please subscribe if you haven't yet. Database internals series: https://youtu.be/yV_Zp0Mi3xs Popular playlists: Realtime streaming systems: https://www.youtube.com/playlist?list=PLL7QpTxsA4se-mAKKoVOs3VcaP71X_LA- Software Engineering: https://www.youtube.com/playlist?list=PLL7QpTxsA4sf6By03bot5BhKoMgxDUU17 Distributed systems and databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4sfLDUnjBJXJGFhhz94jDd_d Modern databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4scSeZAsCUXijtnfW5ARlrsN Stay Curious! Keep Learning! #streaming #kafka #redpanda #c++ #databasesystems #SQL #distributedsystems #memoryallocation #garbagecollection
Guy Royse, dev advocate at Redis, discusses going beyond the cache with Redis and Node.js. He explores its capabilities as a memory-first database, session management, and even fun use cases like the Bigfoot Tracker API. He also shares insights on Redis OM for object mapping and its future in the JavaScript ecosystem. Links http://guyroyse.com http://github.com/guyroyse https://www.twitch.tv/guyroyse https://www.youtube.com/channel/UCNt5SDc6LosO41E77jr59cQ https://x.com/guyroyse https://www.linkedin.com/in/groyse https://2024.connect.tech/session/693665 We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com (mailto:emily.kochanekketner@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surface the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today (https://logrocket.com/signup/?pdr). Special Guest: Guy Royse.
Think engineers are all awkward nerds who can't hold a conversation? In reality, engineers have more trash talk than you can possibly sit through! ------ Join our fan page and interact in the comments! https://www.facebook.com/%E5%B7%A5%E7%A8%8B%E5%B8%AB%E8%81%8A%E4%BB%80%E9%BA%BC-109229084578194 ------ softwaretalkthreesmall@gmail.com -- Hosting provided by SoundOn
Antirez has returned to Redis! Yes, Salvatore Sanfilippo (aka Antirez), the creator of Redis, has returned to Redis, and he joined us to share the backstory on Redis, what's going on with the tech and the company, the possible (likely) move back to open source via the AGPL license, the new possibilities of AI and vector embeddings in Redis, and some good ol' LLM inference discussions.
In this episode of Code with Jason, host Jason Swett interviews Prarthana Shiva, a senior software engineer at NexHealth, who shares how her team is handling massive database scaling challenges. Prarthana explains their PostgreSQL database's growth to 24 terabytes (with projections to triple within a year) and details their innovative solutions including read replicas, Elasticsearch implementation, Redis caching, external write-ahead logs, and optimized vacuuming processes. The conversation also touches on Jason's own database challenges with his CI platform and concludes with Prarthana's upcoming presentation at Sin City Ruby 2025, where she'll discuss their transition from schema-based to row-based multi-tenancy for better scalability. Prarthana Shiva on LinkedIn. Sin City Ruby.
Redis is consistently one of the most beloved pieces of infrastructure for developers. And in the last few years, we've seen a number of new Redis-compatible projects that aim to improve on the core of Redis in some way. One of those projects is DragonflyDB, a multi-threaded version of Redis that allows for significantly higher throughput on a single instance. Roman Gershman is the co-founder and CTO at Dragonfly, and he has a fascinating background. Roman initially worked at Google and then was a frustrated user of Redis while working as an engineer at a fast-growing startup. He did a stint on the ElastiCache team at AWS but struck off on his own to make a new, faster version of Redis. In this episode, we talk through the improvements that Dragonfly makes to Redis and why it matters to high-scale users. We go through the different needs and requirements of high-scale cache applications and what Roman learned at AWS. We also go through the Redis licensing drama and how to attract developer attention in 2025.
The GeekNarrator memberships can be joined here: https://www.youtube.com/channel/UC_mGuY4g0mggeUGM6V1osdA/joinMembership will get you access to member only videos, exclusive notes and monthly 1:1 with me. Here you can see all the member only videos: https://www.youtube.com/playlist?list=UUMO_mGuY4g0mggeUGM6V1osdA------------------------------------------------------------------------------------------------------------------------------------------------------------------About this episode: ------------------------------------------------------------------------------------------------------------------------------------------------------------------In this conversation, Unmesh Joshi discusses the patterns of distributed systems. He emphasizes the importance of understanding the context in which patterns are applied, the need to read code to grasp their implementation, and the common pitfalls that developers face when applying patterns without a clear understanding of the underlying problems. Chapters00:00 Introduction to Distributed Systems and Patterns05:39 Understanding Patterns in Distributed Systems19:23 Bridging Theory and Practice in Distributed Systems28:56 The Role of Developers in Understanding Patterns31:58 Understanding Patterns in Software Development40:58 The Human Aspect of Software Design44:37 Iterative Development and Real-World Applications49:03 The Future of Patterns in Cloud-Native Systems55:07 Common Misunderstandings of Distributed PatternsInteresting quotes:"Patterns capture wisdom of generations.""Reading code is the best way to understand.""Patterns help you see beyond abstractions.""Understanding patterns helps bridge the gap.""Expert generalists can operate across verticals.""There are no simple systems in the cloud era.""Patterns can add complexity if misunderstood.""Patterns are always useful within a context.""Design and development are human activities.""The deconstruction of databases is happening.""Paxos is the most misunderstood pattern."Unmesh Joshi :https://in.linkedin.com/in/unmesh-joshi-9487635Catalog of Patterns: https://martinfowler.com/articles/patterns-of-distributed-systems/I hope you liked the episode, if you did please like, share and subscribe. ------------------------------------------------------------------------------------------------------------------------------------------------------------------Like building real stuff?------------------------------------------------------------------------------------------------------------------------------------------------------------------Try out CodeCrafters and build amazing real world systems like Redis, Kafka, Sqlite. Use the link below to signup and get 40% off on paid subscription.https://app.codecrafters.io/join?via=geeknarrator------------------------------------------------------------------------------------------------------------------------------------------------------------------Link to other playlists. LIKE, SHARE and SUBSCRIBE------------------------------------------------------------------------------------------------------------------------------------------------------------------If you like this episode, please hit the like button and share it with your network. 
Also please subscribe if you haven't yet. Database internals series: https://youtu.be/yV_Zp0Mi3xs Popular playlists: Realtime streaming systems: https://www.youtube.com/playlist?list=PLL7QpTxsA4se-mAKKoVOs3VcaP71X_LA- Software Engineering: https://www.youtube.com/playlist?list=PLL7QpTxsA4sf6By03bot5BhKoMgxDUU17 Distributed systems and databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4sfLDUnjBJXJGFhhz94jDd_d Modern databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4scSeZAsCUXijtnfW5ARlrsN Stay Curious! Keep Learning! #distributedsystems #patterns #softwarearchitecture #consensus #algorithms #coding #patterns #softwaredevelopment #ThoughtWorks #softwareengineering #cloud #computing #software
Is your app feeling sluggish? Scott and Wes break down the biggest performance bottlenecks—like bloated assets, slow databases, and waterfall requests—and share easy wins to make your site feel lightning fast. From smarter caching to preloading tricks, these tips will have your app zipping along in no time! Show Notes 00:00 Welcome to Syntax! 00:58 Brought to you by Sentry.io. 02:01 What makes apps slow? 02:10 Loading too much. 03:26 Slow database work. 04:04 Slow server. 04:54 Waterfall requests. 06:34 How do I know what is slow? 06:45 Web vitals. 12:50 Streaming. 14:05 Network tab. 18:18 Performance tab. 22:53 Caching. 22:59 Client-side caching. 23:38 Server-side caching. Valkey.io. Redis.io. 25:40 Local data. 26:11 Gzip. 29:23 CDN. 30:57 Images. Cloudinary. Cloudflare Images. Imgix. Vercel Images. 31:08 Serving. 34:16 Compressing. 35:06 Ship fewer images. 35:50 Loading JS. Async vs Defer Attributes. 37:00 CSS. 38:28 Preloading & Prefetch. 39:40 Preloading on hover. 41:44 Ship less code. 43:49 Icons Nucleo App. 47:01 Fonts Tolin.ski. 51:13 Sick Picks + Shameless Plugs. Sick Picks Scott: Skywalkers on Netflix. Wes: Oxo Swivel Peeler. Shameless Plugs Scott: Syntax on YouTube. Hit us up on Socials! Syntax: X Instagram Tiktok LinkedIn Threads Wes: X Instagram Tiktok LinkedIn Threads Scott: X Instagram Tiktok LinkedIn Threads Randy: X Instagram YouTube Threads
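One of the easy wins in the show notes above is server-side caching with Redis or Valkey. As a rough illustration only (not code from the episode), here is a cache-aside sketch in Python using redis-py; the key name, the TTL, and the fetch_product_from_db() helper are invented for the example.

import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def fetch_product_from_db(product_id):
    # Placeholder for a slow database query.
    return {"id": product_id, "name": "Peeler", "price": 12.99}

def get_product(product_id, ttl_seconds=60):
    key = f"product:{product_id}"
    cached = r.get(key)
    if cached is not None:                           # cache hit: skip the slow query
        return json.loads(cached)

    product = fetch_product_from_db(product_id)      # cache miss: do the slow work...
    r.set(key, json.dumps(product), ex=ttl_seconds)  # ...and remember the result briefly
    return product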
The Learn System Design podcast delves into the intricacies of designing a parking lot system, a topic often encountered in technical interviews, especially at large tech companies. The host, Ben Kitchell, begins by providing context from previous discussions, particularly regarding the importance of atomicity and redundancy in system design. He emphasizes the need for a reliable and scalable architecture that can handle real-time reservations and payments, illustrating the challenges of maintaining consistency in a distributed environment. The episode outlines critical functional requirements such as user authentication, reservation capabilities, and payment processing, while also addressing non-functional requirements like security and latency. Throughout the discussion, Ben explores the CAP theorem, highlighting the trade-offs between consistency and availability. He advocates for prioritizing consistency in this specific use case—parking reservations—because allowing multiple users to occupy the same spot would lead to significant user dissatisfaction. The episode also covers capacity estimates, proposing a realistic user base and discussing storage needs, which ultimately lead to considerations for database modeling. Ben suggests utilizing a relational database for its inherent relationships between users, vehicles, and reservations, ensuring data integrity and efficient querying. Furthermore, the podcast dives into the technical architecture of the system, advocating for a modular approach with dedicated services for user management, vehicle handling, parking spot management, and payment processing. Ben proposes the use of Redis for distributed locking to manage concurrency effectively, ensuring that users cannot double-book parking spots. He concludes with a discussion on scaling strategies and the importance of designing systems that can evolve with changing demands. This episode serves as a comprehensive guide for engineers looking to deepen their understanding of system design while preparing for real-world application and technical interviews.Takeaways: Building a parking lot system requires a focus on core functional requirements like user reservation and payment. Using a distributed locking mechanism, such as Redis, can help maintain consistency in concurrent transactions. Non-functional requirements such as security and low latency are critical for user satisfaction. Estimating capacity for the system is important; 100,000 users a month is a realistic start. A structured database model with tables for users, vehicles, reservations, and spots is essential for functionality. Designing for scalability involves separating services and using load balancers to manage traffic effectively. Companies mentioned in this episode: Amazon AWS DynamoDB Zookeeper Stripe Raising Cane's Chick Fil A Support the showDedicated to the memory of Crystal Rose.Email me at LearnSystemDesignPod@gmail.comJoin the free Discord Consider supporting us on PatreonSpecial thanks to Aimless Orbiter for the wonderful music.Please consider giving us a rating on ITunes or wherever you listen to new episodes.
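The Redis-based locking idea described above can be sketched briefly. This is not the podcast's actual design, just a minimal illustration with redis-py of taking a short-lived lock on a parking spot so two users cannot reserve it at the same moment; the key names and the reserve flow are hypothetical, and a production system would want something sturdier (a Lua release script or the Redlock algorithm).

import uuid
import redis

r = redis.Redis(decode_responses=True)

def reserve_spot(spot_id, user_id, lock_ttl_ms=5000):
    lock_key = f"lock:spot:{spot_id}"
    token = str(uuid.uuid4())

    # SET with NX and PX: succeeds only if nobody else holds the lock, and auto-expires.
    if not r.set(lock_key, token, nx=True, px=lock_ttl_ms):
        return False  # someone else is reserving this spot right now

    try:
        # ... check availability and write the reservation row for user_id here ...
        return True
    finally:
        # Best-effort release: only delete the lock if we still own it.
        if r.get(lock_key) == token:
            r.delete(lock_key)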
The GeekNarrator memberships can be joined here: https://www.youtube.com/channel/UC_mGuY4g0mggeUGM6V1osdA/join Membership will get you access to member only videos, exclusive notes and monthly 1:1 with me. Here you can see all the member only videos: https://www.youtube.com/playlist?list=UUMO_mGuY4g0mggeUGM6V1osdA ------------------------------------------------------------------------------------------------------------------------------------------------------------------ About this episode: ------------------------------------------------------------------------------------------------------------------------------------------------------------------ In this episode of the Geek Narrator podcast, host Kaivalya Apte interviews Marc Brooker, a distinguished engineer at AWS, about Aurora D-SQL. They discuss Marc's journey at AWS, the evolution of Aurora D-SQL, and the customer-centric approach that led to its development. Marc explains the choice of PostgreSQL as the foundation for DSQL, the architecture of the database, and the importance of snapshot isolation and concurrency control. The conversation goes into the technical aspects of DSQL, including the write process and how atomicity is maintained, providing listeners with a comprehensive understanding of this innovative database solution. This conversation also goes deep into the intricacies of database design, focusing on fault tolerance, replication strategies, and the role of Firecracker VMs in enhancing scalability. Marc Brooker discusses the architecture of Aurora D-SQL, emphasizing the importance of transaction management, the challenges of active-active deployments, and the trade-offs involved in database design. The discussion also highlights various use cases for Aurora DSQL, including its suitability for micro-services and serverless architectures, while addressing scenarios where it may not be the best fit. Chapters 00:00 Introduction to Aurora DSQL and Marc Brooker's Journey 03:38 The Evolution of Aurora DSQL at AWS 09:24 Customer-Centric Development and Technological Enablers 12:50 Why PostgreSQL? The Choice Behind DSQL 16:39 High-Level Architecture of DSQL 22:07 Understanding Snapshot Isolation and Concurrency Control 28:45 The Write Process and Atomicity in DSQL 38:50 Designing Fault Tolerance in Databases 47:38 Replication and Transaction Commit Strategies 54:35 Active-Active Deployment and Fault Tolerance 01:00:14 Role of Firecracker VM in Scalability 01:09:27 Use Cases and Trade-offs of Aurora D-SQL Marc's Blog: https://brooker.co.za/blog/ Marc on Aurora DSQL : https://brooker.co.za/blog/2024/12/03/aurora-dsql.html AWS's documentation on Aurora DSQL : https://aws.amazon.com/rds/aurora/dsql/features/ ------------------------------------------------------------------------------------------------------------------------------------------------------------------ Like building real stuff? ------------------------------------------------------------------------------------------------------------------------------------------------------------------ Try out CodeCrafters and build amazing real world systems like Redis, Kafka, Sqlite. Use the link below to signup and get 40% off on paid subscription. https://app.codecrafters.io/join?via=geeknarrator ------------------------------------------------------------------------------------------------------------------------------------------------------------------ Link to other playlists. 
LIKE, SHARE and SUBSCRIBE ------------------------------------------------------------------------------------------------------------------------------------------------------------------ If you like this episode, please hit the like button and share it with your network. Also please subscribe if you haven't yet. Database internals series: https://youtu.be/yV_Zp0Mi3xs Popular playlists: Realtime streaming systems: https://www.youtube.com/playlist?list=PLL7QpTxsA4se-mAKKoVOs3VcaP71X_LA- Software Engineering: https://www.youtube.com/playlist?list=PLL7QpTxsA4sf6By03bot5BhKoMgxDUU17 Distributed systems and databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4sfLDUnjBJXJGFhhz94jDd_d Modern databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4scSeZAsCUXijtnfW5ARlrsN Stay Curious! Keep Learning! #sql #postgres #databasesystems #aws #awsdevelopers #spanner #google #cockroachdb #yugabytedb #cap #scalability #WAL #DistributedSystems #Cloud #aurora
The GeekNarrator memberships can be joined here: https://www.youtube.com/channel/UC_mGuY4g0mggeUGM6V1osdA/join Membership will get you access to member only videos, exclusive notes and monthly 1:1 with me. Here you can see all the member only videos: https://www.youtube.com/playlist?list=UUMO_mGuY4g0mggeUGM6V1osdA ------------------------------------------------------------------------------------------------------------------------------------------------------------------ About this episode: ------------------------------------------------------------------------------------------------------------------------------------------------------------------ Hey folks - In this episode we have Jelte with us, who is the main contributor to the pg_duckdb project, which is a postgres extension to add the #duckdb power to our beloved #postgresql. We will try to understand how it works? Why is it needed and what's the future of pg_duckdb? If you love #Postgres or #Duckdb or just understanding #database internals then this episode will give you pretty solid insights into Postgres query processing, Duckdb analytics, Postgres extension ecosystem and so on. Basics: pg_duckdb is a Postgres extension that embeds DuckDB's columnar-vectorized analytics engine and features into Postgres. We recommend using pg_duckdb to build high performance analytics and data-intensive applications. Chapters: 00:00 Introduction to PG-DuckDB 03:40 Understanding the Integration of DuckDB with Postgres 06:23 Architecture of PG-DuckDB: Query Processing Explained 10:02 Configuring DuckDB for Analytics Queries 15:37 Managing Workloads: Transactional vs. Analytical 21:02 Observability and Debugging in DuckDB 25:58 Data Deletion and GDPR Compliance 30:46 Schema Management and Migration Challenges 33:14 Managing Schema Changes in Databases 35:21 Upgrading Database Extensions 36:33 Enhancing Data Reading Methods 38:33 Future Features and Improvements 45:54 Use Cases for PGDuckDB 50:03 Challenges in Building the Extension 55:25 Getting Involved with PGDuckDB Important links: The duckdb discord server, which has a pg_duckdb channel inside it: https://discord.duckdb.org/ repo: https://github.com/duckdb/pg_duckdb good-first-issue issues: https://github.com/duckdb/pg_duckdb/issues?q=sort%3Aupdated-desc+is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22 ------------------------------------------------------------------------------------------------------------------------------------------------------------------ Like building real stuff? ------------------------------------------------------------------------------------------------------------------------------------------------------------------ Try out CodeCrafters and build amazing real world systems like Redis, Kafka, Sqlite. Use the link below to signup and get 40% off on paid subscription. https://app.codecrafters.io/join?via=geeknarrator ------------------------------------------------------------------------------------------------------------------------------------------------------------------ Link to other playlists. LIKE, SHARE and SUBSCRIBE ------------------------------------------------------------------------------------------------------------------------------------------------------------------ If you like this episode, please hit the like button and share it with your network. Also please subscribe if you haven't yet. 
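For readers who want to try what Jelte describes, a rough sketch of using pg_duckdb from Python follows. It assumes psycopg and a Postgres instance where the pg_duckdb extension is already installed and you have the privileges to enable it; the connection string and the orders table are placeholders, and whether this particular query actually runs on DuckDB's engine depends on how the extension is configured (see the pg_duckdb repo linked above for the real knobs).

import psycopg

# Connection string is an assumption; point it at a Postgres with pg_duckdb available.
with psycopg.connect("postgresql://localhost/mydb") as conn:
    with conn.cursor() as cur:
        # Enable the extension in this database (requires appropriate privileges).
        cur.execute("CREATE EXTENSION IF NOT EXISTS pg_duckdb;")

        # A typical analytical aggregation; with pg_duckdb enabled, this is the kind of
        # query that can be executed by DuckDB's columnar-vectorized engine.
        cur.execute("""
            SELECT customer_id, count(*) AS orders, sum(amount) AS total
            FROM orders
            GROUP BY customer_id
            ORDER BY total DESC
            LIMIT 10;
        """)
        for row in cur.fetchall():
            print(row)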
Database internals series: https://youtu.be/yV_Zp0Mi3xs Popular playlists: Realtime streaming systems: https://www.youtube.com/playlist?list=PLL7QpTxsA4se-mAKKoVOs3VcaP71X_LA- Software Engineering: https://www.youtube.com/playlist?list=PLL7QpTxsA4sf6By03bot5BhKoMgxDUU17 Distributed systems and databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4sfLDUnjBJXJGFhhz94jDd_d Modern databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4scSeZAsCUXijtnfW5ARlrsN Stay Curious! Keep Learning! #sql #postgres #databasesystems
Paul Frazee is the CTO of Bluesky. He previously worked on the Beaker browser and the peer-to-peer social media protocol Secure Scuttlebutt. Paul discusses how Bluesky and ATProto got started, scaling up a social media site, what makes ATProto decentralized, lessons ATProto learned from previous peer-to-peer projects, and the challenges of content moderation. Episode transcript available here. My Bluesky profile. -- Related Links Bluesky ATProtocol ATProto for distributed systems engineers Bluesky and the AT Protocol: Usable Decentralized Social Media Decentralized Identifiers (DIDs) ActivityPub Webfinger Beaker web browser Secure Scuttlebutt -- Transcript You can help correct transcripts on GitHub. [00:00:00] Jeremy: Today I am talking to Paul Frazee. He's the current CTO of bluesky, and he previously worked on other decentralized applications like Beaker and Secure Scuttlebutt. [00:00:15] Paul: Thanks for having me. What's bluesky [00:00:16] Jeremy: For people who aren't familiar with bluesky, what is it? [00:00:20] Paul: So bluesky is an open social network, simplest way to put it, designed in particular for high scale. That's kind of one of the big requirements that we had when we were moving into it. and it is really geared towards making sure that the operation of the social network is open amongst multiple different organizations. [00:00:44] So we're one of the operators, but other folks can come in, spin up the software, all the open source software, and essentially have a full node with a full copy of the network active users and have their users join into our network. And they all work functionally as one shared application. [00:01:03] Jeremy: So it, it sounds like it's similar to Twitter but instead of there being one Twitter, there could be any number and there is part of the underlying protocol that allows them to all connect to one another and act as one system. [00:01:21] Paul: That's exactly right. And there's a metaphor we use a lot, which is comparing to the web and search engines, which actually kind of matches really well. Like when you use Bing or Google, you're searching the same web. So on the AT protocol on bluesky, you use bluesky, you use some alternative client or application, all the same, what we're we call it, the atmosphere, all one shared network, [00:01:41] Jeremy: And more than just the, the client. 'cause I think sometimes when people think of a client, they'll think of, I use a web browser. I could use Chrome or Firefox, but ultimately I'm connecting to the same thing. But it's not just people running alternate clients, right? [00:01:57] Paul: Their own full backend to it. That's right. Yeah. Yeah. The anchoring point on that being the fire hose of data that runs the entire thing is open as well. And so you start up your own application, you spin up a service that just pipes into that fire hose and taps into all the activity. History of AT Protocol [00:02:18] Jeremy: Talking about this underlying protocol maybe we could start where this all began so people get some context for where this all came from. [00:02:28] Paul: For sure. All right, so let's wind the clock back here in my brain. We started out 2022, right at the beginning of the year. We were formed as a, essentially a consulting company outside of Twitter with a contract with Twitter. And, uh, our goal was to build a protocol that could run, uh, Twitter, much like the way that we just described, which set us up with a couple of pretty specific requirements. 
[00:02:55] For one, we had to make sure that it could scale. And so that ended up being a really important first requirement. and we wanted to make sure that there was a strong kind of guarantees that the network doesn't ever get captured by any one operator. The idea was that Twitter would become the first, uh, adopter of the technology. [00:03:19] Other applications, other services would begin to take advantage of it and users would be able to smoothly migrate their accounts in between one or the other at any time. Um, and it's really, really anchored in a particular goal of just deconstructing monopolies. Getting rid of those moats that make it so that there's a kind of a lack of competition, uh, between these things. [00:03:44] And making sure that, if there was some kind of reason that you decided you're just not happy with what direction this service has been going, you move over to another one. You're still in touch with all the folks you were in touch with before. You don't lose your data. You don't lose your, your your follows. Those were the kind of initial requirements that we set out with. The team by and large came from, the decentralized web, movement, which is actually a pretty, large community that's been around since, I wanna say around 2012 is when we first kind of started to form. It got really made more specifically into a community somewhere around 2015 or 16, I wanna say. [00:04:23] When the internet archives started to host conferences for us. And so that gave us kind of a meeting point where all started to meet up there's kind of three schools of thought within that movement. There was the blockchain community, the, federation community, and the peer-to-peer community. [00:04:43] And so blockchain, you don't need to explain that one. You got Federation, which was largely ActivityPub Mastodon. And then peer-to-peer was IPFS, DAT protocol, um, secure scuttlebutt. But, those kinds of BitTorrent style of technologies really they were all kind of inspired by that. [00:05:02] So these three different kind of sub communities we're all working, independently on different ways to attack how to make these open applications. How do you get something that's a high scale web application without one corporation being the only operator? When this team came together in 2022, we largely sourced from the peer-to-peer group of the decentralized community. Scaling limitations of peer-to-peer [00:05:30] Paul: Personally, I've been working in the space and on those kinds of technologies for about 10 years at that stage. And, the other folks that were in there, you know, 5-10 each respectively. So we all had a fair amount of time working on that. And we had really kind of hit some of the limitations of doing things entirely using client devices. We were running into challenges about reliability of connections. Punching holes to the individual device is very hard. Synchronizing keys between the devices is very hard. Maintaining strong availability of the data because people's devices are going off and on, things like that. Even when you're using the kind of BitTorrent style of shared distribution, that becomes a challenge. [00:06:15] But probably the worst challenge was quite simply scale. You need to be able to create aggregations of a lot of behavior even when you're trying to model your application as largely peer wise interactions like messaging. You might need an aggregation of accounts that even exist, how do you do notifications reliably? [00:06:37] Things like that. 
Really challenging. And what I was starting to say to myself by the end of that kind of pure peer-to-peer stint was that it can't be rocket science to do a comment section. You know, like at some point you just ask yourself like, how, how hard are we willing to work to, to make these ideas work? [00:06:56] But, there were some pretty good pieces of tech that did come out of the peer-to-peer world. A lot of it had to do with what I might call a cryptographic structure: things like Merkle trees and advances within Merkle trees. Ways to take data sets and reduce them down to hashes so that you can then create nice signatures and have signed data sets at rest at larger scales. [00:07:22] And so our basic thought was, well, all right, we got some pretty good tech out of this, but let's drop that requirement that it all run off of devices. And let's get some servers in there. And instead think of the entire network as a peer-to-peer mesh of servers. That's gonna solve your scale problem. [00:07:38] 'cause you can throw big databases at it. It's gonna solve your availability problems, it's gonna solve your device sync problems. But you get a lot of the same properties of being able to move data sets between services. Much like you could move them between devices in the peer-to-peer network without losing their identifiers, because you're doing this indirection of cryptographic identifiers to the current host. [00:08:02] That's what peer-to-peer is always doing. You're taking like a public key or hash and then you're asking the network, Hey, who has this? Well, if you just move that into the server, you get the same thing, that dynamic resolution of who's your active host. So you're getting that portability that we wanted real bad. [00:08:17] And then you're also getting that kind of enmeshing of the different services where each of them is producing these data sets that they can sync from each other. So take peer-to-peer and apply it to the server stack. And that was our kind of initial thought of like, Hey, you know what? This might work. [00:08:31] This might solve the problems that we have. And a lot of the design fell out from that basic mentality. Cryptographic identifiers and domain names [00:08:37] Jeremy: When you talk about these cryptographic identifiers, is the idea that anybody could have data about a person, like a message or a comment, and that could be hosted in different places, but you would still know which person that originally came from. Is that, is that the goal there? [00:08:57] Paul: That's exactly it. Yeah. Yeah. You wanna create identification that supersedes servers, right? So when you think about like, if I'm using Twitter and I wanna know what your posts are, I go to twitter.com/jeremy, right? I'm asking Twitter and your ID is consequently always bound to Twitter. You're always kind of a second class identifier. [00:09:21] We wanted to boost up the user identifier to be kind of a thing freestanding on its own. I wanna just know what Jeremy's posts are. And then once you get into the technical system it'll be designed to figure out, okay, who knows that, who can answer that for you? And we use cryptographic identifiers internally. [00:09:41] So like all the data sets use these kind of long URLs to identify things. But in the application, the user facing part, we used domain names for people. Which I think gives the picture of how this all operates. It really moves the user accounts up into a free standing first class identifier within the system.
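(A toy illustration of the signed-data-at-rest idea Paul describes above: reduce a set of records to one digest and sign it, so any host can serve the data and a reader can still verify it came from the author's key. This is only a sketch using Node's built-in crypto; the real AT Protocol repository format uses Merkle Search Trees, CBOR, and CIDs, and the record shape below is invented for the example.)

```typescript
// Toy example only: hash a set of records down to one digest and sign it with an
// ed25519 key, so the records can sit "at rest" on any host and a reader can still
// verify them against the author's public key. Not the real AT Proto repo format.
import { createHash, generateKeyPairSync, sign, verify } from "node:crypto";

type RepoRecord = { collection: string; rkey: string; value: unknown };

// Reduce a list of records to a single digest (a stand-in for a Merkle root).
function digestRecords(records: RepoRecord[]): Buffer {
  const leaves = records
    .map((r) => createHash("sha256").update(JSON.stringify(r)).digest("hex"))
    .sort(); // order-independent for this toy example
  return createHash("sha256").update(leaves.join("")).digest();
}

// In AT Proto the hosting server holds the signing key on the user's behalf.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

const records: RepoRecord[] = [
  {
    collection: "app.bsky.feed.post",
    rkey: "3k2a-example", // placeholder record key
    value: { text: "hello", createdAt: "2024-01-01T00:00:00Z" },
  },
];

const signature = sign(null, digestRecords(records), privateKey); // ed25519 signs the digest directly

// Anyone who later fetches these records from *any* host can re-derive the digest
// and check it against the author's published public key.
console.log("verified:", verify(null, digestRecords(records), publicKey, signature));
```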
[00:10:04] And then consequently, any application, whatever application you're using, it's really about whatever data is getting put into your account. And then that just exchanges between any application that anybody else is using. [00:10:14] Jeremy: So in this case, it sounds like the identifier is some long string that, I'm not sure if it's necessarily human readable or not. You're shaking your head no. [00:10:25] Paul: No. [00:10:26] Jeremy: But if you have that string, you know it's for a specific person. And since it's not really human readable, what you do is you put a layer on top of it which in this case is a domain that somebody can use to look up and find the identifier. [00:10:45] Paul: Yeah, yeah, yeah. So we just use DNS. Put a TXT record in there, map into that long string, or you could do a .well-known file on a web server if that's more convenient for you. And then the ID that's behind that, the non-human readable one, those are called DIDs which is actually a W3C spec. Those then map to a kind of a certificate. What you call a DID document that kind of confirms the binding by declaring what that domain name should be. So you get this bi-directional binding. And then that certificate also includes signing keys and active servers. So you pull down that certificate and that's how the discovery of the active server happens is through the DID system. What's stored on a PDS [00:11:29] Jeremy: So when you refer to an active server what is that server and what is that server storing? [00:11:35] Paul: It's kinda like a web server, but instead of hosting HTML, it's hosting a bunch of JSON records. Every user has their own document store of JSON documents. It's bucketed into collections. Whenever you're looking up somebody on the network you're gonna get access to that repository of data, jump into a collection. [00:11:58] This collection is their post collection. Get the rkey (Record Key), and then you're pulling out JSON at the end of it, which is just a structured piece of stuff saying here's the CreatedAt, here's the text, here's the type, things like that. One way you could look at the whole system is it's a giant, giant database network. Servers can change, signing keys change, but not DID [00:12:18] Jeremy: So if someone's going to look up someone's identifier, let's say they have the user's domain they have to go to some source, right? To find the user's data. You've mentioned, I think before, the idea that this is decentralized and by default I would, I would picture some kind of centralized resource where I send somebody a domain and then they give me back the identifier and the links to the servers. [00:12:46] So, so how does that work in practice where it actually can be decentralized? [00:12:51] Paul: I mentioned that your DID that non-human readable identifier, and that has that certificate attached to it that lists servers and signing keys and things like that. [00:13:00] So you're just gonna look up inside that DID document what that server is your data repository host. And then you contact that guy and say, all right, I'm told you're hosting this thing. Here's the person I'm looking for, hand over the hand over the data. It's really, you know, pretty straightforward. [00:13:18] The way that gets decentralized is by then to the fact that I could swap out that active server that's in my certificate and probably wanna rotate the signing keys 'cause I've just changed the, you know. 
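(A rough sketch of the resolution flow just described: handle to DID, DID to DID document, DID document to PDS, PDS to record. It assumes the endpoints the public Bluesky network exposes today, handle resolution via a .well-known file, did:plc documents served by plc.directory, and the com.atproto.repo.getRecord XRPC route, so treat the exact URLs and field names as assumptions rather than a spec.)

```typescript
// Sketch of the resolution flow: handle -> DID -> DID document -> PDS -> record.
// Uses global fetch (Node 18+). Endpoint names reflect the public Bluesky network
// as deployed today; treat them as assumptions.

async function resolveHandleToDid(handle: string): Promise<string> {
  // HTTPS fallback for handle resolution (the other option is a DNS TXT record
  // at _atproto.<handle> whose value looks like "did=did:plc:...").
  const res = await fetch(`https://${handle}/.well-known/atproto-did`);
  if (!res.ok) throw new Error(`could not resolve handle ${handle}`);
  return (await res.text()).trim(); // e.g. "did:plc:abc123..."
}

async function getPdsEndpoint(did: string): Promise<string> {
  // did:plc documents are served by the PLC directory; did:web resolves differently.
  const doc = await (await fetch(`https://plc.directory/${did}`)).json();
  const pds = (doc.service ?? []).find((s: any) => s.id === "#atproto_pds");
  if (!pds) throw new Error(`no PDS listed in DID document for ${did}`);
  return pds.serviceEndpoint; // the user's current data repository host
}

async function getRecord(pds: string, did: string, collection: string, rkey: string) {
  const url = `${pds}/xrpc/com.atproto.repo.getRecord?repo=${did}&collection=${collection}&rkey=${rkey}`;
  return (await fetch(url)).json(); // { uri, cid, value: { text, createdAt, ... } }
}

// Usage (handle and rkey are placeholders):
//   const did = await resolveHandleToDid("example.bsky.social");
//   const pds = await getPdsEndpoint(did);
//   const post = await getRecord(pds, did, "app.bsky.feed.post", "<rkey>");
```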
I don't want to keep using the same signing keys as I was using previously because I just changed the authority. [00:13:36] So that's the migration change, change the hosting server, change out the signing keys. Somebody that's looking for me now, they're gonna load up my document, my DID document. They're gonna say, okay, new server, new keys. Pull down the data. Looks good, right? Matches up with the DID doc. [00:13:50] So that's how you get that level of portability. But when those changes happen, the DID doesn't change, right? The DID document changes. So there's the level of indirection there and that's pretty important because if you don't have a persistent identifier whenever you're trying to change out servers, all those backlinks are gonna break. [00:14:09] That's the kind of stuff that stops you from being able to do clean migrations on things like web-based services. the only real option is to go out and ask everybody to update their data. And when you're talking about like interactions on the social network, like people replying to each other, there's no chance, right? [00:14:25] Every time somebody moves you're gonna go back and modify all those records. You don't even control all the records from the top down 'cause they're hosted all over the web. So it's just, you can't do it. Generally we call this account portability, that you're kinda like phone number portability that you can change your host, but, so that part's portable, but the ID stays the same. [00:14:45] And keeping that ID the same is the real key to making sure that this can happen without breaking the whole system. [00:14:52] Jeremy: And so it, it sounds like there's the decentralized id, then there's the decentralized ID document that's associated with that points you to where the actual location of your, your data, your posts, your pictures and whatnot. but then you also mentioned that they could change servers. [00:15:13] So let's say somebody changes where their data is, is stored, that would change the servers, I guess, in their document. But [00:15:23] then how do all of these systems. Know okay. I need to change all these references to your old server, to these new servers, [00:15:32] Paul: Yeah. Well, the good news is that you only have to, you, you got the public data set of all the user's activity, and then you have like internal caches of where the current server is. You just gotta update those internal caches when you're trying to contact their server. Um, so it's actually a pretty minimal thing to just like update like, oh, they moved, just start talking to update my, my table, my Redis, that's holding onto that kind of temporary information, put it on ttl, that sort of thing. Most communication won't be between servers, it will be from event streams [00:16:01] Paul: And, honestly, in practice, a fair amount of the system for scalability reasons doesn't necessarily work by servers directly contacting each other. It's actually a little bit more like how, I told you before, I'm gonna use this metaphor a lot, the search engines with the web, right? What we do is we actually end up crawling the repositories that are out in the world and funneling them into event streams like a Kafka. And that allows the entire system to act like a data processing pipeline where you're just tapping into these event streams and then pushing those logs into databases that produce these large scale aggregations. [00:16:47] So a lot of the application behavior ends up working off of these event logs. 
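(An aside on the host-cache point above: a minimal sketch of caching the DID-to-PDS mapping with a TTL, here using Redis via the node-redis client. getPdsEndpoint is the hypothetical resolver from the earlier sketch, not a real library call.)

```typescript
// Cache the DID -> current PDS host mapping in Redis with a TTL, so the DID
// document only gets re-read occasionally. Assumes the node-redis v4 client.
import { createClient } from "redis";

// Hypothetical resolver from the earlier sketch.
declare function getPdsEndpoint(did: string): Promise<string>;

const redis = createClient({ url: "redis://localhost:6379" });
await redis.connect();

const HOST_TTL_SECONDS = 3600; // if a user migrates, we notice within an hour at worst

async function currentPdsFor(did: string): Promise<string> {
  const cached = await redis.get(`pds:${did}`);
  if (cached) return cached;

  const pds = await getPdsEndpoint(did); // re-read the DID document
  await redis.set(`pds:${did}`, pds, { EX: HOST_TTL_SECONDS });
  return pds;
}

// On a failed request you would also drop the key and re-resolve immediately,
// since a stale cache is exactly what happens right after a migration:
//   await redis.del(`pds:${did}`);
```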
If I reply to somebody, for instance, I don't necessarily, it's not, my server has to like talk to your server and say, Hey, I'm replying to you. What I do is I just publish a reply in my repository that gets shot out into the event logs, and then these aggregators pick up that the reply got created and just update their database with it. [00:17:11] So it's not that our hosting servers are constantly having to send messages with each other, you actually use these aggregators to pull together the picture of what's happening on the network. [00:17:22] Jeremy: Okay, so like you were saying, it's an event stream model where everybody publishes the events the things that they're doing, whether that's making a new post, making a reply, that's all being posted to this event stream. And then everybody who provides, I'm not sure if instances is the right term, but an implementation of the atmosphere protocol (Authenticated Transfer protocol). [00:17:53] They are listening for all those changes and they don't necessarily have to know that you moved servers because they're just listening for the events and you still have the same identifier. [00:18:10] Paul: Generally speaking. Yeah. 'cause like if you're listening to one of these event streams what you end up looking for is just the signature on it and making sure that the signature matches up. Because you're not actually having to talk to their live server. You're just listening to this relay that's doing this aggregation for you. [00:18:27] But I think actually to kind of give a little more clarity to what you're talking about, it might be a good idea to refocus how we're talking about the system here. I mentioned before that our goal was to make a high scale system, right? We need to handle a lot of data. If you're thinking about this in the way that Mastodon does it, the ActivityPub model, that's actually gonna give you the wrong intuition. Designing the protocol to match distributed systems practices (Event sourcing / Stream processing) [00:18:45] Paul: 'cause we chose a dramatically different system. What we did instead was we picked up, essentially the same practices you're gonna use for a data center, a high scale application data center, and said, all right, how do you tend to build these sorts of things? Well, what you're gonna do is you're gonna have, multiple different services running different purposes. [00:19:04] It gets pretty close to a microservices approach. You're gonna have a set of databases, and then you're going to, generally speaking for high scale, you're gonna have some kind of a kafka, some kind of a event log that you are tossing changes about the state of these databases into. And then you have a bunch of secondary systems that are tapping into the event log and processing that into, the large scale, databases like your search index, your, nice postgres of user profiles. [00:19:35] And that makes sure that you can get each of these different systems to perform really well at their particular task, and then you can detach them in their design. for instance, your primary storage can be just a key value store that scales horizontally. And then on the event log, you, you're using a Kafka that's designed to handle. [00:19:58] Particular semantics of making sure that the messages don't get dropped, that they come through at a particular throughput. And then you're using, for us, we're using like ScyllaDB for the big scale indexes that scales horizontally really well. 
So it's just different kind of profiles for different pieces. [00:20:13] If you read Martin Kleppmann's book, Designing Data-Intensive Applications I think it's called, or yeah. A lot of it gets captured there. He talks a lot about this kind of thing and it's sometimes called a kappa architecture is one way this is described, event sourcing is a similar term for it as well. [00:20:30] Stream processing. That's pretty standard practices for how you would build a traditional high scale service. so if you take, take this, this kind of microservice architecture and essentially say, okay, now imagine that each of the services that are a part of your data center could be hosted by anybody, not just within our data center, but outside of our data center as well and should be able to all work together. [00:20:57] Basically how the AT Proto is designed. We were talking about the data repository hosts. Those are just the primary data stores that they hold onto the user keys and they hold onto those JSON records. And then we have another service category we call Relay that just crawls those data repositories and sucks in that fire hose of data we were talking about, that event log. App views pull data from relay and produce indexes and threads [00:21:21] Paul: And then we have what we call app views that sit there and tail the index and tail the log, excuse me, and produce indexes off of it, they're listening to those events and then like, making threads like okay, that guy posted, that guy replied, that guy replied. [00:21:37] That's a thread. They assemble it into that form. So when you're running an application, you're talking to the AppView to read the network, and you're talking to the hosts to write to the network, and each of these different pieces sync up together in this open mesh. So we really took a traditional sort of data center model and just turned it inside out where each piece is a part of the protocol and communicating with each other and therefore anybody can join into that mesh. [00:22:07] Jeremy: And to just make sure I am tracking, the data repository is the data about the user. So it has your decentralized identifier, it has your replies, your posts. And then you have a relay, which is, its responsibility, is to somehow find all of those data repositories and collect them as they happen so that it can publish them to some kind of event stream. [00:22:41] And then you have the AppView, which is receiving messages from the relay as they happen, and then it can have its own store and index that for search. It can collect them in a way so that it can present them onto a UI. That's sort of thing that's the user facing part I suppose.
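(To make the app-view idea concrete, here is a toy, self-contained sketch of the indexing step: consume an event log of posts and replies and assemble threads on read. The real firehose is a CBOR-encoded WebSocket stream, com.atproto.sync.subscribeRepos, so the event shape below is simplified and invented for the example.)

```typescript
// Toy app view: consume an event log of post/reply creations and keep a thread
// index, the way the app views described above tail the relay.

type RepoEvent = {
  did: string;      // author
  uri: string;      // e.g. at://<did>/app.bsky.feed.post/<rkey>
  text: string;
  replyTo?: string; // uri of the parent post, if this is a reply
};

const posts = new Map<string, RepoEvent>();   // derived index: post by uri
const children = new Map<string, string[]>(); // derived index: parent uri -> reply uris

function applyEvent(ev: RepoEvent): void {
  posts.set(ev.uri, ev);
  if (ev.replyTo) {
    const siblings = children.get(ev.replyTo) ?? [];
    siblings.push(ev.uri);
    children.set(ev.replyTo, siblings);
  }
}

// Assemble a thread on read, purely from the derived indexes.
function thread(uri: string): unknown {
  const post = posts.get(uri);
  if (!post) return null;
  return { post, replies: (children.get(uri) ?? []).map(thread) };
}

// Feed a few events through, as if they had arrived from the relay:
applyEvent({ did: "did:plc:alice", uri: "at://alice/post/1", text: "hello" });
applyEvent({ did: "did:plc:bob", uri: "at://bob/post/9", text: "hi back", replyTo: "at://alice/post/1" });
console.log(JSON.stringify(thread("at://alice/post/1"), null, 2));
```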
[00:23:41] So it's actually a pretty bog standard way of doing, that's one of the models that we've seen work for large scale decentralized systems. And so we're just transposing it onto something that kind of is more focused towards social applications. [00:23:58] Jeremy: So I think I'm tracking that the data repository itself, since it has your decentralized identifier and because the data is cryptographically signed, you know, it's from a specific user. I think the part that I am still not quite sure about is the relays. I, I understand if you run all the data repositories, you know where they are, so you know how to collect the data from them. [00:24:22] But if someone's running another system outside of your organization, how do they find, your data repositories? Or do they have to connect to your relay? What's the intention for that? Data hosts request relays to pull their data [00:24:35] Paul: That logic runs, again, really similar to how search engines find out about websites. So there is actually a way for, one of these, data hosts to contact Relay and say, Hey, I exist. You know, go ahead and get my stuff. And then it'll be up to the relay to decide like if they want it or not. [00:24:52] Right now, generally we're just like, yeah, you know, we, we want it. But as you can imagine, as the thing matures and gets to higher scale, there might be some trust kind of things to worry about, you know? So that's kind of the naive operation that currently exists. But over time as the network gets bigger and bigger, it'll probably involve some more traditional kind of spidering behaviors because as more relays come into the system, each of these hosts, they're not gonna know who to talk to. Relays can bootstrap who they know about by talking to other relays [00:25:22] Paul: You're trying to start a new relay. What they're gonna do is they're going to discover all of the different users that exist in the system by looking at what data they have to start with. Probably involve a little bit of a manual feeding in at first, whenever I'm starting up a relay, like, okay, there's bluesky's relay. [00:25:39] Lemme just pull what they know. And then I go from there. And then anytime you discover a new user you don't have, you're like, oh, I wanna look them up. Pull them into the relay too. Right. So there's a, pretty straightforward, discovery process that you'll just have to bake into a relay to, to make sure you're crawling as much of the network as possible. ActivityPub federation vs AT Proto [00:25:57] Jeremy: And so I don't think we've defined the term federation, but maybe you could explain what that is and if that is what this is. [00:26:07] Paul: We are so unsure. [00:26:10] Jeremy: Okay. [00:26:11] Paul: Yeah. This has jammed us up pretty bad. Um, because I think everybody can, everybody pretty strongly agrees that ActivityPub is federation, right? and ActivityPub kind of models itself pretty similarly to email in a way, like the metaphors they use is that there's inboxes and outboxes and, and every ActivityPub server they're standing up the full vertical stack. [00:26:37] They set up, the primary hosting, the views of the data that's happening there. the interface for the application, all of it, pretty traditional, like a closed service, but then they're kind of using the perimeter. they're making that permeable by sending, exchanging, essentially mailing records to each other, right? [00:26:54] That's their kind of logic of how that works.
And that's pretty much in line with, I think, what most people think of with Federation. Whereas what we're doing isn't like that we've cut, instead of having a bunch of vertical stacks communicating horizontally with each other, we kind of sliced in the other direction. [00:27:09] We sliced horizontally into, this microservices mesh and have all the different, like a total mix and match of different microservices between different operators. Is that federation? I don't know. Right. we tried to invent a term, didn't really work, you know, At the moment, we just kind of don't worry about it that much, see what happens, see what the world sort of has to say to us about it. [00:27:36] and beyond that, I don't know. [00:27:42] Jeremy: I think people probably are thinking of something like, say, a Mastodon instance when you're, when you're talking about everything being included, The webpage where you view the posts, the Postgres database that's keeping the messages. [00:28:00] And that same instance it's responsible for basically everything. [00:28:06] Paul: mm-Hmm [00:28:06] Jeremy: And I believe what you're saying is that the difference with, the authenticated transfer protocol, is that the [00:28:15] Paul: AT Protocol, Yep. [00:28:17] Jeremy: And the difference there is that you've, at the protocol level, you've split it up into the data itself, which can be validated completely separately from other parts of the system. [00:28:31] You could have the JSON files on your hard drive and somebody else can have that same JSON file and they would know that who the user is and that these are real things that user posted. That's like a separate part. And then the relay component that looks for all these different repositories that has people's data, that can also be its own independent thing where its job is just to output events. [00:29:04] And that can exist just by itself. It doesn't need the application part, the, the user facing part, it can just be this event stream on itself. and that's the part where it sounds like you can make decisions on who to, um, collect data from. I guess you have to agree that somebody can connect to you and get the users from your data repositories. [00:29:32] And likewise, other people that run relays, they also have to agree to let you pull the users from theirs. [00:29:38] Paul: Yeah, that's right. Yeah. [00:29:41] Jeremy: And so I think the Mastodon example makes sense. And, but I wonder if the underlying ActivityPub protocol forces you to use it in that way, in like a whole full application that talks to another full application. [00:29:55] Or is it more like that's just how people tend to use it and it's not necessarily a characteristic of the protocol. [00:30:02] Paul: Yeah, that's a good question actually. so, you know, generally what I would say is pretty core to the protocol is the expectations about how the services interact with each other. So the mailbox metaphor that's used in ActivityPub, that design, if I reply to you, I'll update my, local database with what I did, and then I'll send a message over to your server saying, Hey, by the way, add this reply. [00:30:34] I did this. And that's how they find out about things. That's how they find out about activity outside of their network. that's the part that as long as you're doing ActivityPub, I suspect you're gonna see reliably happening. That's that, I can say for sure that's a pretty tight requirement. [00:30:50] That's ActivityPub. 
If you wanted to split it up the way we're talking about, you could, I don't know, I don't know if you necessarily would want to. Because I don't know. That's actually, I think I'd have to dig into their stack a little bit more to see how meaningful that would be. I do know that there's some talk of introducing a similar kind of an aggregation method into the ActivityPub world which I believe they're also calling a relay and to make things even more complicated. [00:31:23] And NOSTR has a concept of a relay. So these are three different protocols that are using this term. I think you could do essentially what a search engine does on any of these things. You could go crawling around for the data, pull them into a fire hose, and then, tap into that aggregation to produce, bigger views of the network. [00:31:41] So that principle can certainly apply anywhere. AT Protocol, I think it's a little bit, we, we focused in so hard from that on that from the get go, we focus really hard on making sure that this, the data is, signed at rest. That's why it's called the authenticated transfer protocol. And that's a nice advantage to have when you're running a relay like this because it means that you don't have to trust the relay. [00:32:08] Like generally speaking, when I look at results from Google, you know, I'm trusting pretty well that they're accurately reflecting what's on the website, which is fine. You know, there's, that's not actually a huge risk or anything. But whenever you're trying to build entire applications and you're using somebody else's relay, you could really run into things where they say like, oh, you know what Paul tweeted the other day, you know, I hate dogs. [00:32:28] They're like, no, I didn't. That's a lie, right? You just sneak in Little lies like that over a while, it becomes a problem. So having the signatures on the data is pretty important. You know, if you're gonna be trying to get people to cooperate, uh, you gotta manage the trust model. I know that ActivityPub does have mechanisms for signed records. Issuers with ActivityPub identifiers [00:32:44] Paul: I don't know how deep they go if they could fully replace that, that utility. and then Mastodon ActivityPub, they also use a different identifier system that they're actually taking a look at DIDs um, right now, I don't know what's gonna happen there. We're, we're totally on board to, you know, give any kind of insight that we got working on 'em. [00:33:06] But at, at the moment, they use I think it's WebFinger based identifiers they look like emails. So you got host names in there and those identifiers are being used in the data records. So you don't get that continuous identifier. They actually do have to do that hey, I moved update your records sort of thing. [00:33:28] And that causes it to, I mean, it works like decently well, but not as well as it could. They got us to the point where it moves your profile over and you update all the folks that were following you so they can update their follow records, but your posts, they're not coming right, because that's too far into that mesh of interlinking records. [00:33:48] There's just no chance. So that's kind of the upper limit on that, it's a different set of choices and trade-offs. You're always kind of asking like, how important is the migration? Does that work out? Anyway, now I'm just kind of digging into differences between the two here. 
Issues with an identifier that changes and updating old records [00:34:07] Jeremy: So you were saying that with ActivityPub, all of the instances need to be notified that you've changed your identifier but then all of the messages that they had already received. They don't point to the new identifier somehow. [00:34:24] Paul: Yeah. You run into basically just the practicalities of actual engineering with that is what happens, right? Because if you imagine you got a multimillion user social network. They got all their posts. Maybe the user has like, let's say a thousand posts and 10,000 likes. And that, activity can range back three years. [00:34:48] Let's say they changed their identifier, and now you need to change the identifier of all those records. If you're in a traditional system that's already a tall order, you're going back and rewriting a ton of indexes, Anytime somebody replied to you, they have these links to your posts, they're now, you've gotta update the identifiers on all of those things. [00:35:11] You could end up with a pretty significant explosion of rewrites that would have to occur. Now that's, that's tough. If you're in a centralized model. If you're in a decentralized one, it's pretty much impossible because you're now, when you notify all the other servers like, Hey, this, this changed. How successful are all of them at actually updating that, that those, those pointers, it's a good chance that there's things are gonna fall out of correctness. that's just a reality of it. And if, so, if you've got a, if you've got a mutable identifier, you're in trouble for migrations. So the DID is meant to keep it permanent and that ends up being the anchoring point. If you lose control of your DID well, that's it. Managing signing keys by server, paper key reset [00:35:52] Paul: Your, your account's done. We took some pretty traditional approaches to that, uh, where the signing keys get managed by your hosting server instead of like trying to, this may seem like really obvious, but if you're from the decentralization community, we spend a lot of time with blockchains, like, Hey, how do we have the users hold onto their keys? [00:36:15] You know, and the tooling on that is getting better for what it's worth. We're starting to see a lot better key pair management in like Apple's ecosystem and Google's ecosystem, but it's still in the range of like, nah, people lose their keys, you know? So having the servers manage those is important. [00:36:33] Then we have ways of exporting paper keys so that you could kind of adversarially migrate if you wanted to. That was in the early spec we wanted to make sure that this portability idea works, that you can always migrate your accounts so you can export a paper key that can override. [00:36:48] And that was how we figured that out. Like, okay, yeah, we don't have to have everything getting signed by keys that are on the user's devices. We just need these master backup keys that can say, you know what? I'm done with that host. No matter what they say, I'm overriding what they, what they think. and that's how we squared that one. [00:37:06] Jeremy: So it seems like one of the big differences with account migration is that with ActivityPub, when you move to another instance, you have to actually change your identifier. [00:37:20] And with the AT protocol you're actually not allowed to ever change that identifier. 
And maybe what you're changing is just you have say, some kind of a lookup, like you were saying, you could use a domain name to look that up, get a reference to your decentralized identifier, but your decentralized identifier it can never change. [00:37:47] Paul: It, it, it can't change. Yeah. And it shouldn't need to, you know what I mean? It's really a total disaster kind of situation if that happens. So, you know that it's designed to make sure that doesn't happen in the applications. We use these domain name handles to, to identify folks. And you can change those anytime you want because that's really just a user facing thing. [00:38:09] You know, then in practice what you see pretty often is that you may, if you change hosts, if you're using, we, we give some domains to folks, you know, 'cause like not everybody has their own domain. A lot of people do actually, to our surprise, people actually kind of enjoy doing that. But, a lot of folks are just using like paul.bsky.social as their handle. [00:38:29] And so if you migrated off of that, you probably lose that. Like your, so your handle's gonna change, but you're not losing the followers and stuff. 'cause the internal system isn't using paul.bsky.social, it's using that DID and that DID stays the same. Benefits of domain names, trust signal [00:38:42] Jeremy: Yeah. I thought that was interesting about using the domain names, because when you like you have a lot of users, everybody's got their own sub-domain. You could have however many millions of users. Does that become, does that become an issue at some point? [00:39:00] Paul: Well, it's a funny thing. I mean like the number of users, like that's not really a problem 'cause you run into the same kind of namespace crowding problem that any service is gonna have, right? Like if you just take the subdomain part of it, like the name Paul, like yeah, only, you only get to have one paul.bsky.social. [00:39:15] so that part of like, in terms of the number of users, that part's fine I guess. Uh, as fine as ever. where gets more interesting, of course is like, really kind of around the usability questions. For one, it's, it's not exactly the prettiest to always have that B sky.social in there. If we, if we thought we, if we had some kind of solution to that, we would use it. [00:39:35] But like the reality is that, you know, now we're, we've committed to the domain name approach and some folks, you know, they kind of like, ah, that's a little bit ugly. And we're like, yeah that's life. I guess the plus side though is that you can actually use like TLD the domain. It's like on pfrazee.com. [00:39:53] that starts to get more fun. it can actually act as a pretty good trust signal in certain scenarios. for instance, well-known domain names like nytimes.com, strong authentication right there, we don't even need a blue check for it. Uh, similarly the .gov, domain name space is tightly regulated. [00:40:14] So you actually get a really strong signal out of that. Senator Wyden is one of our users and so he's, I think it's wyden.senate.gov and same thing, strong, you know, strong identity signal right there. So that's actually a really nice upside. So that's like positives, negatives. [00:40:32] That trust signal only works so far. If somebody were to make pfrazee.net, then that can be a bit confusing. People may not be paying attention to .com vs .net, so it's not, I don't wanna give the impression that, ah, we've solved blue checks. 
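(A small sketch of why the domain handle carries a trust signal: the binding is checked in both directions, the handle resolves to the DID and the DID document claims the handle back in its alsoKnownAs list. Endpoints and helper names are the same assumptions as in the earlier resolution sketch.)

```typescript
// Bidirectional handle verification: handle -> DID, and DID document -> handle.
// Same endpoint assumptions as the earlier sketch; resolveHandleToDid is that
// hypothetical helper, declared here so the snippet stands alone.
declare function resolveHandleToDid(handle: string): Promise<string>;

async function handleVerifies(handle: string): Promise<boolean> {
  // Direction 1: handle -> DID (DNS TXT or .well-known).
  const did = await resolveHandleToDid(handle);

  // Direction 2: DID document -> handle.
  const doc = await (await fetch(`https://plc.directory/${did}`)).json();
  const claimed: string[] = doc.alsoKnownAs ?? [];
  return claimed.includes(`at://${handle}`);
}

// Note the limit Paul mentions: a lookalike domain (pfrazee.net vs pfrazee.com) can
// pass this check for its *own* DID, so it proves control of the name, not that the
// name is the one you meant.
```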
It's a complicated and multifaceted situation, but, it's got some juice. [00:40:54] It's also kinda nice too, 'cause a lot of folks that are doing social, they're, they've got other stuff that they're trying to promote, you know? I'm pretty sure that, uh, nytimes would love it if you went to their website. And so tying it to their online presence so directly like that is a really nice kind of feature of it. [00:41:15] And tells a I think a good story about what we're trying to do with an open internet where, yeah, everybody has their space on the internet where they can do whatever they want on that. And that's, and then thethese social profiles, it's that presence showing up in a shared space. It's all kind of part of the same thing. [00:41:34] And that that feels like a nice kind of thing to be chasing, you know? And it also kind of speaks well to the naming worked out for us. We chose AT Protocol as a name. You know, we back acronymed our way into that one. 'cause it was a @ simple sort of thing. But like, it actually ended up really reflecting the biggest part of it, which is that it's about putting people's identities at the front, you know, and make kind of promoting everybody from a second class identity that's underneath Twitter or Facebook or something like that. [00:42:03] Up into. Nope, you're freestanding. You exist as a person independently. Which is what a lot of it's about. [00:42:12] Jeremy: Yeah, I think just in general, not necessarily just for bluesky, if people had more of an interest in getting their own domain, that would be pretty cool if people could tie more of that to something you basically own, right? [00:42:29] I mean, I guess you're leasing it from ICANN or whatever, but, [00:42:33] yeah, rather than everybody having an @Gmail, Outlook or whatever they could actually have something unique that they control more or less. [00:42:43] Paul: Yeah. And we, we actually have a little experimental service for registering domain names that we haven't integrated into the app yet because we just kind of wanted to test it out and, and kind of see what that appetite is for folks to register domain names way higher than you'd think we did that early on. [00:43:01] You know, it's funny when you're coming from decentralization is like an activist space, right? Like it's a group of people trying to change how this tech works. And sometimes you're trying to parse between what might come off as a fascination of technologists compared to what people actually care about. [00:43:20] And it varies, you know, the domain name thing to a surprising degree, folks really got into that. We saw people picking that up almost straight away. More so than certainly we ever predicted. And I think that's just 'cause I guess it speaks to something that people really get about the internet at this point. [00:43:39] Which is great. We did a couple of other things that are similar and we saw varied levels of adoption on them. We had similar kinds of user facing, opening up of the system with algorithms and with moderation. And those have both been pretty interesting in and of themselves. Custom feed algorithms [00:43:58] Paul: So with algorithms, what we did was we set that up so that anybody can create a new feed algorithm. And this was kind of one of the big things that you run into whenever you use the app. If you wanted to create a new kind of for you feed you can set up a service somewhere that's gonna tap into that fire hose, right? 
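(A minimal sketch of the kind of feed service being described here, with the explanation continuing below: an HTTP endpoint that returns a skeleton, just a list of post URIs, which the Bluesky app then hydrates into full posts. The route name and response shape follow the published feed-generator convention, app.bsky.feed.getFeedSkeleton; the post list is hard-coded and the URIs are placeholders, so treat the details as assumptions.)

```typescript
// Minimal feed-generator sketch: serve a "skeleton" of post URIs; the app hydrates
// the actual post content itself. Hard-coded list, placeholder URIs.
import { createServer } from "node:http";

const catPosts = [
  "at://did:plc:example/app.bsky.feed.post/3kabc111",
  "at://did:plc:example/app.bsky.feed.post/3kabc222",
];

createServer((req, res) => {
  if (req.url?.startsWith("/xrpc/app.bsky.feed.getFeedSkeleton")) {
    res.setHeader("content-type", "application/json");
    res.end(JSON.stringify({ feed: catPosts.map((post) => ({ post })) }));
    return;
  }
  res.statusCode = 404;
  res.end();
}).listen(3000);

// In practice the service would also consume the firehose to decide which URIs
// belong in the feed, and publish a record describing the feed so the app can
// discover and subscribe to it.
```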
[00:44:18] And then all it needs to do is serve a JSON endpoint. That's just a list of URLs, but like, here's what should be in that feed. And then the bluesky app will pick that up and, and send that, hydrate in the content of the posts and show that to folks. I wanna say this is a bit of a misleading number and I'll explain why but I think there's about 35,000 of these feeds that have been created. [00:44:42] Now, the reason it's little misleading is that, I mean, not significantly, but it's not everybody went, sat down in their IDE and wrote these things. Essentially one of our users created, actually multiple of our users made little platforms for building these feeds, which is awesome. That's the kinda thing you wanna see because we haven't gotten around to it. [00:44:57] Our app still doesn't give you a way to make these things. But they did. And so lots of, you know, there it is. Cool. Like, one, one person made a kind of a combinatorial logic thing that's like visual almost like scratch, it's like, so if it has this hashtag and includes these users, but not those users, and you're kind of arranging these blocks and that constructs the feed and then probably publish it on your profile and then folks can use it, you know? [00:45:18] And um, so that has been I would say fairly successful. Except, we had one group of hackers do put in a real effort to make a replacement for you feed, like magic algorithmic feed kind of thing. And then they kind of kept up going for a while and then ended up giving up on it. Most of what we see are actually kind of weird niche use cases for feeds. [00:45:44] You get straightforward ones, like content oriented ones like a cat feed, politics feed, things like that. It's great, some of those are using ML detection, so like the cat feed is ML detection, so sometimes you get like a beaver in there, but most of the time it's a cat. And then we got some ones that are kind of a funny, like change in the dynamic of freshness. [00:46:05] So, uh, or or selection criteria, things that you wouldn't normally see. Um, but because they can do whatever they want, you know, they try it out. So like the quiet posters ended up being a pretty successful one. And that one just shows people you're following that don't post that often when they do just those folks. [00:46:21] It ended up being, I use that one all the time because yeah, like they get lost in the noise. So it's like a way to keep up with them. Custom moderation and labeling [00:46:29] Paul: The moderation one, that one's a a real interesting situation. What we did there essentially we wanted to make sure that the moderation system was capable of operating across different apps so that they can share their work, so to speak. [00:46:43] And so we created what we call labeling. And labeling is a metadata layer that exists over the network. Doesn't actually live in the normal data repositories. It uses a completely different synchronization because a lot of these labels are getting produced. It's just one of those things where the engineering characteristics of the labels is just too different from the rest of the system. [00:47:02] So we created a separate synchronization for this, and it's really kind of straightforward. It's, here's a URL and here's a string saying something like NSFW or Gore, or you know, whatever. then those get merged onto the records brought down by the client and then the client, you know, based on the user's preferences. 
[00:47:21] We'll put like warning screens up, hide it, stuff like that. So yeah, these label streams can then, you know, anybody that's running a moderation service can, you know, are publishing these things and so anybody can subscribe to 'em. And you get that kind of collaborative thing we're always trying to do with this. [00:47:34] And we had some users set up moderation services and so then as an end user you find it, it looks like a profile in the app and you subscribe to it and you configure it and off you go. That one has had probably the least amount of adoption throughout all of 'em. It's you know, moderation. [00:47:53] It's a sticky topic as you can imagine, challenging for folks. These moderation services, they do receive reports, you know, like whenever I'm reporting a post, I choose from all my moderation services who I wanna report this to. what has ended up happening more than being used to actually filter out like subjective stuff is more kind of like either algorithmic systems or what you might call informational. [00:48:21] So the algorithmic ones are like, one of the more popular ones is a thing that's looking for, posts from other social networks. Like this screenshot of a Reddit post or a Twitter post or a Facebook post. Because, which you're kinda like, why, you know, but the thing is some folks just get really tired of seeing screenshots from the other networks. [00:48:40] 'cause often it's like, look what this person said. Can you believe it? You know, it's like, ah. Okay, I've had enough. So one of our users aendra made a moderation service that just runs an ML that detects it, labels it, and then folks that are tired of it, they subscribe to it and they just hide it, you know? [00:48:57] And so it's like a smart filter kind of thing that they're doing. you know, hypothetically you could do that for things like spiders, you know, like you've got arachnophobia, things like that. that's like a pretty straightforward, kind of automated way of doing it. Which takes a lot of the spice, you know, out of running moderation. [00:49:15] So that users have been like, yeah, yeah, okay, we can do that. [00:49:20] Those are user facing ways that we tried to surface the decentralized principle, right? And take advantage of how this whole architecture can have this kind of a pluggability into it. Users can self host now [00:49:33] Paul: But then really at the end of the day, kind of the important core part of it is those pieces we were talking about before, the hosting, the relay and the, the applications themselves, having those be completely swappable. so we tend to think of those as kind of ranges of infrastructure into application and then into particular client side stuff. [00:49:56] So a lot of folks right now, for instance, they're making their own clients to the application and those clients are able to do customizations, add features, things like that, as you might expect, [00:50:05] but most of them are not running their own backend. They're just using our backend. But at any point, it's right there for you. You know, you can go ahead and, and clone that software and start running the backend. If you wanted to run your own relay, you could go ahead and go all the way to that point. [00:50:19] You know, if you wanna do your own hosting, you can go ahead and do that. Um, it's all there. It's really just kind of a how much effort your project really wants to take. That's the kind of systemically important part.
That's the part that makes sure that the overall mission of de-monopolizing social media online, that's where that really gets enforced. [00:50:40] Jeremy: And so someone has their own data repository with their own users and their own relay. they can request that your relay collect the information from their own data repositories. And that's, that's how these connections get made. [00:50:58] Paul: Yeah. And, and we have a fair number of those already. Fair number of, we call those the self hosters right? And we got I wanna say 75 self hosters going right now, which is, you know, love to see that be more, but it's, really the folks that if you're running a service, you probably would end up doing that. [00:51:20] But the folks that are just doing it for themselves, it's kind of the, the nerdiest of the nerds over there doing that. 'cause it doesn't end up showing itself in the, in the application at all. Right? It's totally abstracted away. So it, that, that one's really about like, uh, measure your paranoia kind of thing. [00:51:36] Or if you're just proud of the self-hosting or, or curious, you know, that that's kind of where that sits at the moment. AT Protocol beyond bluesky [00:51:42] Jeremy: We haven't really touched on the fact that there's this underlying protocol and everything we've been discussing has been centered around the bluesky social network where you run your own, instance of the relay and the data repositories with the purpose of talking to bluesky, but the protocol itself is also intended to be used for other uses, right? [00:52:06] Paul: Yeah. It's generic. The data types are set up in a way that anybody can build new data types in the system. there's a couple that have already begun, uh, Frontpage, which is kind of a hacker news clone. There's Smoke Signals, which is an events app. There's Blue Cast, which is like a Twitter spaces, clubhouse kind of thing. [00:52:29] Those are the folks that are kind of willing to trudge into the bleeding edge and deal with some of the rough edges there for pretty I think, obvious reasons. A lot of our work gets focused in on making sure that the bluesky app and that use case is working correctly. [00:52:43] But we are starting to round the corner on getting to a full kind of how to make alternative applications state. If you go to atproto.com, there's a kind of a introductory tutorial where that actually shows that whole stack and how it's done. So it's getting pretty close. There's still a couple of things that we wanna finish up. [00:53:04] Jeremy: So in a way you can almost think of it as having an eventually consistent data store on the network. You can make a traditional web application with a relational database, and the source of truth can actually be wherever that data repository is stored on the network. [00:53:24] Paul: Yeah, that's exactly it. It is an eventually consistent system. That's exactly right. The source of truth is there, is their data repo. And that relational database that you might be using, I think the best way to think about it is like secondary indexes or computed indexes, right? They, reflect the source of truth. [00:53:43] Paul: This is getting kind of grandiose. I don't tend to pose it in these terms, but it is almost like we're trying to have an OS layer at a protocol level.
It's like having your own [00:53:54] Network wide database or network-wide file system, you know, these are the kind of facilities you expect out of a platform like an os And so the hope would be that this ends up getting that usage outside of just the initial social, uh, app, like what we're doing here. [00:54:12] If it doesn't end up working out that way, if this ends up, you know, good for the Twitter style use case, the other one's not so much, and that's fine too. You know, that's, that's our initial goal, but we, we wanted to make sure to build it in a way that like, yeah, there's evolve ability to, it keeps, it, keeps it, make sure that you're getting kinda the most utility you can out of it. Peer-to-peer and the difficulty of federated queries [00:54:30] Jeremy: Yeah, I can see some of the parallels to some of the decentralized stuff that I, I suppose people are still working on, but more on the peer-to-peer side, where the idea was that I can have a network host this data. but, and in this case it's a network of maybe larger providers where they could host a bunch of people's data versus just straight peer to peer where everybody has to have a piece of it. [00:54:57] And it seems like your angle there was really the scalability part. [00:55:02] Paul: It was the scalability part. And there's great work happening in peer-to-peer. There's a lot of advances on it that are still happening. I think really the limiter that you run into is running queries against aggregations of data. Because you can get the network, you know, BitTorrent sort of proved that you can do distributed open horizontal scaling of hosting. [00:55:29] You know, that basic idea of, hey, everybody's got a piece and you sync it from all these different places. We know you can do things like that. What nobody's been able to really get into a good place is running, queries across large data sets. In the model like that, there's been some research in what is, what's called federated queries, which is where you're sending a query to multiple different nodes and asking them to fulfill as much of it as they can and then collating the results back. But it didn't work that well. That's still kind of an open question and until that is in a place where it can like reliably work and at very large scales, you're just gonna need a big database somewhere that does give the properties that you need. You need these big indexes. And once we were pretty sure of that requirement, then from there you start asking, all right, what else about the system [00:56:29] Could we make easier if we just apply some more traditional techniques and merge that in with the peer-to-peer ideas? And so key hosting, that's an obvious one. You know, availability, let's just have a server. It's no big deal. But you're trying to, you're trying to make as much of them dumb as possible. [00:56:47] So that they have that easy replaceability. Moderation challenges [00:56:51] Jeremy: Earlier you were talking a a little bit about the moderation tools that people could build themselves. There was some process where people could label posts and then build their own software to determine what a feed should show per a person. [00:57:07] Paul: Mm-Hmm [00:57:07] Jeremy: But, but I think before that layer for the platform itself, there's a base level of moderation that has to happen. [00:57:19] Paul: yeah. [00:57:20] Jeremy: And I wonder if you could speak to, as the app has grown, how that's handled. [00:57:26] Paul: Yeah. 
You gotta take some requirements in moderation pretty seriously to start. And with decentralization, sometimes that gets a little bit dropped. You need to have systems that can deal with questions about CSAM. So you got those big questions you gotta answer and then you got stuff that's more in the line of like, alright, what makes a good platform? [00:57:54] What kind of guarantees are we trying to give there? So not just legal concerns, but you know, good product experience concerns. That's where we're in the realm of like spam and abusive behavior and things like that. And then you get into even more fine grained stuff, like what is a person's subjective preference and how can they kind of make their thing better? [00:58:15] And so you get a kind of a telescoping level of concerns from the really big, the legal sort of concerns. And then the really small subjective preference kind of concerns. And that actually that telescoping maps really closely to the design of the system as well. Where the further you get up in the kind of the, in that legal concern territory, you're now in core infrastructure. [00:58:39] And then you go from infrastructure, which is the relay, down into the application, which is kind of a platform, and then down into the client. And that's where we're having those labelers apply. And each of them, as you kind of move closer to infrastructure, the importance of the decision gets bigger too. [00:58:56] So you're trying to do just legal concerns with the relay right? Stuff that you objectively can, everybody's in agreement like Yeah, yeah, yeah. You know, no bigs, don't include that. The reason is that at the relay level, anybody that's using your relay, they depend on the decisions you're making, that sort of selection you're doing, any filtering you're doing, they don't get a choice after that. [00:59:19] So you wanna try to keep that focus really on legal concerns and doing that well. so that applications that are downstream of it can, can make their choices. The applications themselves, you know, somebody can run a parallel, I guess you could call it like a parallel platform, so we got bluesky doing the microblogging use case, other people can make an application doing the microblogging use case. So there's, there's choice that users can easily switch, easily enough switch between, it's still a big choice. [00:59:50] So we're operating that in many ways. Like any other app nowadays might do it. You've got policies, you know, for what's acceptable on the network. you're still trying to keep that to be as, you know, objective as possible, make it fair, things like that. You want folks to trust your T&S team. Uh, but from the kind of systemic decentralization question, you get to be a little bit more opinionated. [01:00:13] Down all the way into the client with that labeling system where you can, you know, this is individuals turning on and off preferences. You can be as opinionated as you want at that layer. And that's how we have basically approached this. And in a lot of ways, it really just comes down to, in the day to day, you're doing the moderation, and the volume of moderation tasks is huge. [01:00:40] You don't actually have high stakes moderation decisions most of the time. Most of 'em are you know pretty straightforward. Shouldn't have done that. That's gotta go. You get a couple every once in a while that are a little spicier or a policy that's a little spicier.
And it probably feels pretty common to end users, but that just speaks to the volume of reports and problems that come through. [01:01:12] And we don't want the system to seize up trying to decentralize itself; it needs to be able to operate day to day. What you want is back pressure, checks on that power, so that if an application or a platform really does start to go down the wrong direction on moderation, people have a credible exit. [01:01:36] A way of saying, you know what, that's a problem, we're moving from here. And somebody else can come in with different policies that better fit people's expectations about what should be done at these levels. So yeah, it's not about taking away authority, it's about checking authority, kind of a checks-and-balances mentality.

[01:01:56] Jeremy: And at a high level, because you're saying there's such a high volume of things where you know what it is, you know you want to remove it, but there's just so much of it: do you have automated tools to label these things? Do you have a team of moderators? Do they have to understand all the different languages that are coming through your network?

[01:02:20] Paul: Yes, yes, yes, and yes. You use every tool at your disposal to stay on top of it, because you're trying to move as fast as you can. Problems show up, and the slower you are to respond, the more irritating it is to folks. Likewise if you make a missed call, or if somebody misunderstands what's happening, and believe me, sometimes just figuring out what the heck is going on is hard. [01:02:52] People's beefs definitely surface up to the moderation team as misunderstandings or wrong application of policy. Moderators make mistakes, so you're trying to maintain a pretty quick turnaround on this stuff. That's tough, especially when you have to move fast on some really upsetting content that can make its way through, illegal stuff, for instance, or videos, stuff like that. It's a real problem. [01:03:20] So yeah, you've got to be using some automated systems as well, clamping down on bot rings and spam. You can imagine that's gotten a lot harder thanks to LLMs; just doing text analysis by dumb statistics of what they're talking about doesn't even work anymore, [01:03:41] because the LLMs are capable of producing consistently varied responses while still achieving the same goal of plugging an online betting site of some kind. So we do use kind of dumb heuristic systems where they work, but boy, that won't work for much longer. [01:04:03] And we've already got cases where it's, oh boy. So the moderation's in a dynamic place, to say the least, right now with LLMs coming in. It was tough before and
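A rough sketch of the federated-query pattern Paul describes above: fan one query out to several nodes that each answer what they can, then collate the partial results, versus the single big index Bluesky opted for. The node layout and helper names are made up for illustration; this is not AT Protocol code.

```python
# Illustrative only: federated query fan-out/collate vs. one central index.
from concurrent.futures import ThreadPoolExecutor

# Each "node" holds only its own slice of the data.
NODES = [
    {"alice": ["post-1", "post-4"]},
    {"bob": ["post-2"]},
    {"alice": ["post-7"], "carol": ["post-3"]},
]

def query_node(node, author):
    # A node fulfills as much of the query as it can from local data.
    return node.get(author, [])

def federated_query(author):
    # Fan the query out to every node, then collate the partial results.
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(lambda n: query_node(n, author), NODES))
    return sorted(p for partial in partials for p in partial)

def central_index_query(author):
    # The alternative: one big index that already aggregates everything.
    index = {}
    for node in NODES:
        for a, posts in node.items():
            index.setdefault(a, []).extend(posts)
    return sorted(index.get(author, []))

if __name__ == "__main__":
    print(federated_query("alice"))      # ['post-1', 'post-4', 'post-7']
    print(central_index_query("alice"))  # same answer, one lookup, no fan-out
```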
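The telescoping moderation design from the same conversation, relay-level legal filtering, then application-level policy, then per-user labels at the client, can be pictured as a pipeline of progressively more opinionated filters. This is a conceptual sketch with hypothetical names (Post, relay_filter, and so on), not how the actual labeler system is implemented.

```python
# Conceptual sketch of telescoping moderation layers, not AT Protocol code.
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    labels: set = field(default_factory=set)  # labels attached by labeler services

def relay_filter(posts):
    # Infrastructure layer: remove only what everyone agrees must go (legal concerns).
    return [p for p in posts if "illegal" not in p.labels]

def app_policy(posts):
    # Application/platform layer: enforce network policy (spam, abuse).
    return [p for p in posts if "spam" not in p.labels]

def client_preferences(posts, hidden_labels):
    # Client layer: each user turns individual labels/preferences on or off.
    return [p for p in posts if not (p.labels & hidden_labels)]

if __name__ == "__main__":
    timeline = [
        Post("a", "hello world"),
        Post("b", "buy pills now", labels={"spam"}),
        Post("c", "graphic image", labels={"graphic-media"}),
    ]
    visible = client_preferences(app_policy(relay_filter(timeline)),
                                 hidden_labels={"graphic-media"})
    print([p.author for p in visible])  # -> ['a']
```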
Although both founders of the monitoring and debugging platform Better Stack come from Slovakia, their Prague-based startup is regarded by many experts as one of the most promising Czech companies of all. They have already raised two huge investments totaling $28,600,000 and are aiming much higher.
Deep Dive into Databases with Peter Zaitsev | The GeekNarrator Podcast Join host Kaivalya Apte and special guest Peter Zaitsev from Percona on this episode of the GeekNarrator Podcast. They discuss Peter's fascinating journey into the world of databases, founding Percona, and the evolution of open source database solutions. Topics include the rise of PostgreSQL, the comparison between MySQL and PostgreSQL, database observability, the impact of cloud and Kubernetes on database management, licensing changes in popular databases like Redis, and career advice for database administrators and developers. Stay tuned for insights on the future of databases, observability strategies, and the role of AI in database management. 00:00 Introduction and Guest Welcome 00:14 Peter's Journey into Databases 04:15 The Rise of PostgreSQL vs MySQL 18:17 Challenges in Managing Database Clusters 24:36 Common Developer Mistakes with Databases 30:59 MongoDB's Success and Future 34:53 Redis and Licensing Changes 37:07 Elastic's License Change and Its Impact 38:25 Redis Fork and Industry Collaboration 40:27 Kubernetes and Cloud-Native Databases 47:47 Challenges in Database Upgrades and Migrations 54:58 Load Testing and Observability 01:09:02 Future of Database Administration and Development 01:15:13 Conclusion and Final Thoughts Become a member of The GeekNarrator to get access to member-only videos, notes and monthly 1:1 with me. Like building stuff? Try out CodeCrafters and build amazing real-world systems like Redis, Kafka, SQLite. Use the link below to sign up and get 40% off on paid subscription. https://app.codecrafters.io/join?via=geeknarrator If you like this episode, please hit the like button and share it with your network. Also please subscribe if you haven't yet. Database internals series: https://youtu.be/yV_Zp0Mi3xs Popular playlists: Realtime streaming systems: https://www.youtube.com/playlist?list=PLL7QpTxsA4se-mAKKoVOs3VcaP71X_LA- Software Engineering: https://www.youtube.com/playlist?list=PLL7QpTxsA4sf6By03bot5BhKoMgxDUU17 Distributed systems and databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4sfLDUnjBJXJGFhhz94jDd_d Modern databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4scSeZAsCUXijtnfW5ARlrsN Stay Curious! Keep Learning!
The GeekNarrator memberships can be joined here: https://www.youtube.com/channel/UC_mGuY4g0mggeUGM6V1osdA/join Membership will get you access to member-only videos, exclusive notes and monthly 1:1 with me. Here you can see all the member-only videos: https://www.youtube.com/playlist?list=UUMO_mGuY4g0mggeUGM6V1osdA ------------------------------------------------------------------------------------------------ About this episode: ------------------------------------------------------------------------------------------------ In this episode we are talking to Peter and Qian, co-founders of DBOS. The conversation covers the challenges of creating fault-tolerant applications, the architecture of DBOS, and how it addresses reliability at multiple layers. Chapters: 00:00 Introduction to the GeekNarrator Podcast 00:29 Meet the Co-Founders of DBOS 01:25 The Core Problem: Building Reliable Systems 02:05 How DBOS Solves Reliability Issues 04:29 Understanding DBOS Architecture 06:09 Deep Dive into DBOS Library 08:36 Postgres and State Management 18:31 Handling Parallel Steps and Performance Concerns 26:00 Observability and Version Control 30:18 Running Multiple Code Versions 30:58 Managing Workflow Versions 32:03 Surgery on Workflow States 33:15 Library Annotations and Durable Execution 34:24 Migrating to the Cloud Version 37:23 Handling Email Workflows 42:41 Transactional Guarantees with Postgres 48:44 Technical Challenges and Multi-Tenancy 54:12 Real-World Use Cases and Benefits 59:45 Conclusion and Final Thoughts Some important links: - Main website: https://www.dbos.dev/ - DBOS docs: https://docs.dbos.dev/ - Open-source DBOS Transact libraries: - Python: https://github.com/dbos-inc/dbos-transact-py - TypeScript: https://github.com/dbos-inc/dbos-transact-ts ------------------------------------------------------------------------------------------------ Like building real stuff? ------------------------------------------------------------------------------------------------ Try out CodeCrafters and build amazing real-world systems like Redis, Kafka, SQLite. Use the link below to sign up and get 40% off on paid subscription. https://app.codecrafters.io/join?via=geeknarrator ------------------------------------------------------------------------------------------------ Link to other playlists. LIKE, SHARE and SUBSCRIBE ------------------------------------------------------------------------------------------------ If you like this episode, please hit the like button and share it with your network. Also please subscribe if you haven't yet. Database internals series: https://youtu.be/yV_Zp0Mi3xs Popular playlists: Realtime streaming systems: https://www.youtube.com/playlist?list=PLL7QpTxsA4se-mAKKoVOs3VcaP71X_LA- Software Engineering: https://www.youtube.com/playlist?list=PLL7QpTxsA4sf6By03bot5BhKoMgxDUU17 Distributed systems and databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4sfLDUnjBJXJGFhhz94jDd_d Modern databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4scSeZAsCUXijtnfW5ARlrsN Stay Curious! Keep Learning!
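The core idea the DBOS founders describe, durable execution backed by a database, can be sketched without the DBOS Transact API at all: record each completed step in a table, and on restart skip steps that are already recorded. The snippet below is only that generic pattern, with sqlite3 from the standard library standing in for Postgres so it runs anywhere; the table and step names are made up for the example.

```python
# Generic durable-execution sketch (not the DBOS API): completed steps are
# checkpointed in a database so a crashed process can resume where it left off.
import sqlite3

db = sqlite3.connect("workflow_state.db")
db.execute("CREATE TABLE IF NOT EXISTS steps (workflow_id TEXT, step TEXT, "
           "PRIMARY KEY (workflow_id, step))")

def run_step(workflow_id, name, fn):
    done = db.execute("SELECT 1 FROM steps WHERE workflow_id=? AND step=?",
                      (workflow_id, name)).fetchone()
    if done:
        print(f"skipping {name}, already completed")
        return
    fn()  # do the real work (send email, call an API, ...)
    db.execute("INSERT INTO steps VALUES (?, ?)", (workflow_id, name))
    db.commit()  # the commit is the durability point

def checkout_workflow(workflow_id):
    run_step(workflow_id, "reserve_inventory", lambda: print("reserving inventory"))
    run_step(workflow_id, "charge_card", lambda: print("charging card"))
    run_step(workflow_id, "send_confirmation", lambda: print("sending confirmation"))

if __name__ == "__main__":
    # Running this a second time (e.g. after a crash) replays the workflow
    # but skips every step whose completion was already recorded.
    checkout_workflow("order-42")
```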
Music from: Kindred Spirits, Painted Trillium, Old Goat Skiffle Band, Minstrel_rav'n, Merry Measure, LandLoch'd, Joni Minstrel, Vana Mazi, Haggis Rampant, Brobdignagian Bards, Dan The Bard, Rambling Sailors, Sea Dog Slams Poems, Black Oak Shillelagh, Flying Fish Sailors, Gallows Humor, Chuckawalla Rhythm Kings, Empty Hats, Ceann, Darren Raleigh, New Minstrel Revue, Sam Kesler, Abby Green, and 2 Merry Men VISIT OUR SPONSORS The Louisiana Renaissance Festival https://www.larf.org/ The Ren List http://www.therenlist.com/ Happy To Be Coloring Pages https://happytobecoloring.justonemore.website/ RESCU https://rescu.org/ The Patrons of the Podcast https://www.patreon.com/RenFestPodcast SONGS Daphne[1] performed by Kindred Spirits from the album Dispelling All Woes www.thekindredspirits.com The Fox[7] performed by Painted Trillium from the album Painted Trillium www.paintedtrillium.com Sailors Prayer performed by Old Goat Skiffle Band from the album Just Kidding www.facebook.com/OldGoatSkiffleBand Sea Elf performed by Minstrel_rav'n from the album Tales from the Sea www.facebook.com/TheMinstrelRavn/ I Wish They'd Do It Now[2] performed by Merry Measure from the album Faire Warning Mist-Covered Mountains[4] performed by LandLoch'd from the album Whiskey'd Out to Sea www.landlochd.com Whiskey and Beer performed by Joni Minstrel from the album Joni Minstrel Kicks the King www.store.cdbaby.com/Artist/JoniMinstrel Sandansko performed by Vana Mazi from the album Izam Anav www.vanamazi.com Gravel Path, First Slip, Poisoned Dwarf, and Back in Black performed by Haggis Rampant from the album Trì www.haggisrampant.com Lish Young Buy-a-Broom performed by Brobdignagian Bards from the album Songs of Ireland www.thebards.net You Know Nothing, Jahn Snough performed by Dan The Bard from the album Cross-Genre References! 
www.danthebard.com One More Day[3] performed by Rambling Sailors from the album Wanderlust www.facebook.com/RamblingSailors Raising Black Flags[2] performed by Sea Dog Slams Poems from the album The Call www.facebook.com/seadogslam/ Jennie Mccall performed by Black Oak Shillelagh from the album Och, The Places We've Been www.blackoakshillelagh.com Ringa Dinga Da[2] performed by Flying Fish Sailors from the album Remnant Stew www.flyingfishsailors.coM Anne Boleyn[1] performed by Gallows Humor from the album Necrodancer www.gallowshumorband.com/ The King Of Ballyhooley performed by Chuckawalla Rhythm Kings from the album No Bedrolls Or Backpacks www.chuckawallark.com Raggle Taggle Gypsies[6] performed by Empty Hats from the album Released www.emptyhats.com Raggle Taggle Gypsies[1] performed by Ceann from the album Last Ones Standing www.reverbnation.com/ceann The Summer Before the War performed by Darren Raleigh from the album Silverwheel www.darrenraleigh.com/ Tarantella[2] performed by New Minstrel Revue from the album Many Hands Humors of Whiskey[5] performed by Sam Kesler from the album The Toasted Clover www.samkesler.bandcamp.com/ Red Is the Rose[1] performed by Abby Green from the album Why Should I AbbyGreen.com Mingulay Boat Song[1] performed by 2 Merry Men from the album Bawdy Drunken Song-Filled Merriment www.facebook.com/2MerryMen/ HOW TO CONTACT US Please post it on Facebook https://www.facebook.com/renfestmusic Please email us at renfestpodcast@gmail.com HOW TO LISTEN Patreon https://www.patreon.com/RenFestPodcast Apple https://podcasts.apple.com/us/podcast/renaissance-festival-podcast/id74073024 Spotify https://open.spotify.com/show/76uzuG0lRulhdjDCeufK15?si=obnUk_sUQnyzvvs3E_MV1g Listennotes http://www.listennotes.com/podcasts/renaissance-festival-podcast-minions-1Xd3YjQ7fWx/
I am excited to introduce my conversation with Ash Vijay, SVP of Cloud Sales and BD Partnerships at Redis, live from Google Cloud's Marketplace Exchange! Redis, a favorite among developers, plays a crucial role in the data ecosystem, powering applications with high-performance caching and database solutions integral to GenAI. In this episode, Ash shares how Redis is leveraging Google Cloud Marketplace to meet customer demand for faster procurement and scaling, especially in light of large cloud commitments. Redis's marketplace strategy not only retains customers but also expands deal sizes and accelerates closing times. Ash discusses Redis's comprehensive partner program, enabling ISVs, OEMs, and SIs to build on the Redis platform, offering them both growth opportunities and collaborative engagement with Redis's ecosystem. Ash's advice to partners? Invest in the marketplace and empower your sales teams to maximize its potential—a strategy that's driving Redis's top-down engagements and delivering significant business value. Tune into The Ultimate Guide to Partnering to learn how Redis is redefining co-selling and scaling with Google Cloud!
In this episode, Ann Schlemmer, CEO of Percona, discusses the company's 18-year journey rooted in open source principles, customer-centric approaches, and performance enhancements. She describes Percona's solutions for major databases like MySQL, MongoDB, Postgres, and their recent venture into the Redis space with the Valkey project. Ann emphasizes the importance of offering open source alternatives to proprietary licenses, maintaining trust and value with customers, and the need for businesses to contribute to and sustain the open source community. She also touches on the future of open source business models, the impact of security considerations, and how the open source ecosystem can adapt and evolve in the coming years. 00:00 Introduction and Welcome 00:14 Overview of Percona 01:33 Importance of Open Source Alternatives 02:45 Challenges in the Open Source Database Landscape 06:43 Percona's Community Contributions 09:01 Sustainability and Governance in Open Source 15:36 Future of Open Source Business Models 19:55 Unexpected Uses and Innovations
Valkey, an open-source fork of Redis launched in March, introduced its multithreaded Version 8.0 in September, now available through AWS ElastiCache. At All Things Open 2024 in Raleigh, AWS's Kyle Davis explains that Valkey was developed after Redis changed to a restrictive license, drawing contributors from companies like AWS, Google, Alibaba, and Oracle. Notably, some contributors emerged independently, including a significant contributor from Vietnam. Version 8.0 differentiates itself from Redis by leveraging multithreaded CPUs, addressing the efficiency of I/O operations in modern hardware. Additionally, data structure refinements were made to improve memory efficiency by up to 20%, particularly benefiting large-key databases.
Looking ahead, Valkey plans two annual updates, with the next release expected in 2025. New modules are anticipated, including a JSON module for efficient data manipulation and a Bloom filter for probabilistic data presence checks. Version 9.0 may bring substantial changes to clustering, updating it to better leverage modern technologies. The Valkey project aims to continue evolving its capabilities to meet the demands of advanced data storage needs.
Learn more from The New Stack about Valkey:
Valkey Is a Different Kind of Fork
AWS Adds Support, Drops Prices, for Redis-Forked Valkey
Valkey: A Redis Fork With a Future
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
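The episode mentions a planned Bloom filter module for "probabilistic data presence checks." As a reminder of what that means, here is a tiny generic Bloom filter, not Valkey code: lookups may return false positives but never false negatives, in exchange for very small memory use.

```python
# Minimal Bloom filter sketch: probabilistic set membership.
import hashlib

class BloomFilter:
    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        # Derive several bit positions per item from independent hashes.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):
        # True may be a false positive; False is always definitive.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

if __name__ == "__main__":
    seen = BloomFilter()
    seen.add("user:1001")
    print(seen.might_contain("user:1001"))  # True
    print(seen.might_contain("user:9999"))  # almost certainly False
```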
From Nextcloud Breakup to Blissful Reunion: Chris's journey back to a smarter setup. Plus, Jellyfin's game-changing features and a beloved self-hosted app get the upgrade we've all been waiting for.
In this episode we discuss a unique project to run Linux on the Intel 4004 microprocessor, some strange devices, and a comparison of Redis and SQLite. [00:04:38] What we learned: The First Gaming Keyboard; How to steal a game idea [00:22:52] How to boot Linux in 4.75 days [01:05:57] Unexpectedly, disks are faster than memory, aka Redis vs SQLite; Rearchitecting: Redis to SQLite |…
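The "Rearchitecting: Redis to SQLite" discussion boils down to replacing a networked in-memory key-value store with a local SQLite file behind the same get/set shape. A minimal sketch of that swap follows; the table layout and helper names are illustrative, not taken from the episode.

```python
# SQLite standing in as a simple key-value store (the "Redis to SQLite" idea).
import sqlite3

db = sqlite3.connect("kv.db")
db.execute("CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, value TEXT)")
db.execute("PRAGMA journal_mode=WAL")  # WAL keeps reads fast alongside writes

def kv_set(key, value):
    # Roughly the role of Redis SET, but persisted on local disk.
    db.execute(
        "INSERT INTO kv VALUES (?, ?) ON CONFLICT(key) DO UPDATE SET value=excluded.value",
        (key, value))
    db.commit()

def kv_get(key):
    # Roughly the role of Redis GET; hot keys are served from the OS page cache.
    row = db.execute("SELECT value FROM kv WHERE key=?", (key,)).fetchone()
    return row[0] if row else None

if __name__ == "__main__":
    kv_set("session:42", "alice")
    print(kv_get("session:42"))  # alice
```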
In this episode of the Modern Web Podcast, Danny Thompson, Director of Technology at This Dot Labs, hosts a conversation with Theresa Mammarella, JVM engineer at IBM, and Brian Sam-Bodden, Applied AI Engineer at Redis. They explore their talks at JCONF in Dallas, Texas, covering topics like GenAI architectures in the Java community and OpenJDK's Project Valhalla. Their conversation covers Java's evolution, AI applications, semantic caching, and how these technologies are impacting development workflows and performance optimization. Chapters 00:00 - Introduction 01:00 - Brian on GenAI in the Java Community 01:47 - Java's Safe Evolution Path 02:17 - Theresa on Project Valhalla 03:54 - Value Classes and Performance 04:33 - Brian on Semantic Caching 06:54 - Challenges of Rewording Prompts 09:15 - What is RAG Architecture? 11:34 - Java's Role in AI 13:57 - Cost of LLMs and Caching Strategies 15:57 - Theresa on Java's Future 18:22 - Learning Resources for Java Developers 20:44 - Addressing Misconceptions About Java 22:39 - Final Thoughts Follow Theresa Mammarella & Brian Sam-Bodden on Social Media Theresa Mammarella Twitter: https://x.com/t_mammarella?lang=en Brian Sam-Bodden Twitter: https://x.com/bsbodden Theresa Mammarella Linkedin: https://www.linkedin.com/in/tmammarella/ Brian Sam-Bodden Linkedin: https://www.linkedin.com/in/sambodden/
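Semantic caching, which Brian discusses, means caching LLM responses by meaning rather than by exact prompt text: embed the prompt, look for a cached prompt whose embedding is close enough, and reuse its answer. The sketch below uses a toy bag-of-words "embedding" and an arbitrary 0.85 threshold purely to show the control flow; a real setup would use an embedding model and a vector index such as the one Redis provides.

```python
# Conceptual semantic-cache sketch: reworded prompts can hit the same cache entry.
import math
from collections import Counter

def embed(text):
    return Counter(text.lower().split())  # toy stand-in for a real embedding model

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

cache = []  # list of (embedding, cached_response)

def answer(prompt, llm, threshold=0.85):
    query_vec = embed(prompt)
    for vec, response in cache:
        if cosine(query_vec, vec) >= threshold:
            return response  # semantic cache hit: skip the expensive LLM call
    response = llm(prompt)
    cache.append((query_vec, response))
    return response

if __name__ == "__main__":
    calls = []
    fake_llm = lambda p: calls.append(p) or f"answer to: {p}"
    print(answer("what is the capital of France", fake_llm))
    print(answer("what is the capital of France ?", fake_llm))  # reworded, cache hit
    print(f"LLM called {len(calls)} time(s)")
```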
Link to blog post This week's Cyber Security Headlines – Week in Review is hosted by Rich Stroffolino with guest Quincy Castro, CISO, Redis. Thanks to our show sponsor, Vanta As third-party breaches continue to rise, companies are increasingly vigilant, which means more time spent on manual security reviews. With Vanta Questionnaire Automation, security & compliance teams can complete security reviews up to 5 times faster, giving you time back to focus on running your security & compliance programs. Over 8,000 global companies like ZoomInfo, SmartRecruiters and Noibu use Vanta to save time on security reviews. Visit vanta.com to learn more about Questionnaire Automation. All links and the video of this episode can be found on CISO Series.com
2024-09-24 Weekly News — Episode 219
Watch the video version on YouTube at https://youtube.com/live/DBqxto5X7iE?feature=share
Hosts: Gavin Pickin - Senior Developer at Ortus Solutions
Daniel Garcia - Senior Developer at Ortus Solutions
Thanks to our Sponsor - Ortus Solutions
The makers of ColdBox, CommandBox, ForgeBox, TestBox and all your favorite box-es out there including BoxLang.
A few ways to say thanks back to Ortus Solutions:
Buy Tickets to Into the Box 2025 in Washington DC https://t.co/cFLDUJZEyM
April 30, 2025 - May 2, 2025 - Washington, DC
Like and subscribe to our videos on YouTube.
Help ORTUS reach for the Stars - Star and Fork our Repos
Star all of your Github Box Dependencies from CommandBox with https://www.forgebox.io/view/commandbox-github
Subscribe to our Podcast on your Podcast Apps and leave us a review
Sign up for a free or paid account on CFCasts, which is releasing new content regularly
BOXLife store: https://www.ortussolutions.com/about-us/shop
Buy Ortus's Books
102 ColdBox HMVC Quick Tips and Tricks on GumRoad (http://gum.co/coldbox-tips)
Now on Amazon! In hardcover too!!! https://www.amazon.com/dp/B0CJHB712M
Learn Modern ColdFusion (CFML) in 100+ Minutes - Free online https://modern-cfml.ortusbooks.com/ or buy an EBook or Paper copy https://www.ortussolutions.com/learn/books/coldfusion-in-100-minutes
Patreon Support (supercali)
We have 58 patreons: https://www.patreon.com/ortussolutions
News and Announcements
ColdFusion 2023 Security Update 10, ColdFusion 2021 Security Update 16
Release Date: September 10, 2024
Adobe Product Security Bulletin APSB24-71 fixes one critical vulnerability.
Vulnerabilities Fixed
CVE-2024-41874 - critical (9.8) Deserialization of Untrusted Data vulnerability allowing for arbitrary code execution
Links & Resources
- APSB24-71 - Adobe Product Security Bulletin https://helpx.adobe.com/security/products/coldfusion/apsb24-71.html
- CF2023 Update 10 - Adobe KB article for ColdFusion 2023 Update 10 https://helpx.adobe.com/coldfusion/kb/coldfusion-2023-update-10.html
- CF2021 Update 16 - Adobe KB article for ColdFusion 2021 Update 16 https://helpx.adobe.com/coldfusion/kb/coldfusion-2021-update-16.html
- Forum Thread - Adobe ColdFusion forum thread discussing ColdFusion 2023 Update 10 and CF 2021 Update 16. https://community.adobe.com/t5/coldfusion-discussions/now-live-adobe-coldfusion-2023-and-2021-september-2024-security-updates/td-p/14852233
Notes / Issues
No updates to connector or packages in this release. Fixed bug CF-4223435 caused by previous update. https://tracker.adobe.com/#/view/CF-4223435
CFCamp - Save the Date
May 22, 23rd - 2025
Atomis Hotel Munich Airport
Into the Box 2025 - Call for Speakers Still Open - Closing soon Nov 1st
We're excited to invite you to share your knowledge and insights at Into the Box 2025: The Future is Dynamic for Modern Web Developers!
We're seeking speakers eager to discuss their latest projects, share best practices, or explore new trends in web development. If you have valuable insights, we'd love to hear from you!
Submit your topics using the form below. The deadline is midnight on November 1, 2024. We encourage everyone to submit up-to-date and trendy topics like Modern CFML tools, BoxLang, AI, Frameworks, Open Source Libraries, Modern Web Development Trends, Cybersecurity, etc.
Feel free to share all your ideas!
https://www.ortussolutions.com/blog/call-for-speakers-into-the-box-2025
New Releases and Updates
FusionReactor 12.1: Expanding Horizons with Java 21 Support and Enhanced Cloud Integration
We're thrilled to announce the release of FusionReactor 12.1, our latest update that brings exciting new features to enhance your application performance monitoring experience. This release focuses on expanding language support, improving cloud integration, and streamlining user access. Let's dive into the key features that make FusionReactor 12.1 a game-changer for developers and operations teams alike.
https://fusion-reactor.com/blog/fusionreactor-12-1-expanding-horizons-with-java-21-support-and-enhanced-cloud-integration/
BoxLang Betas — Beta 11-15
8/23/24 - BoxLang 1.0.0 Beta 11 Launched - 9 New Features, 9 Improvements, 8 Bugs Fixed
https://www.ortussolutions.com/blog/boxlang-100-beta-11-launched
8/30/24 - BoxLang 1.0.0 Beta 12 Launched - 5 New Features, 8 Improvements, 5 Bugs Fixed
https://www.ortussolutions.com/blog/boxlang-100-beta-12-launched
9/6/24 - Blog - Ortus Solutions - BoxLang 1.0.0 Beta 13 Launched - 4 New Features, 5 Improvements, 20 Bugs Fixed
https://www.ortussolutions.com/blog/boxlang-100-beta-13-launched
9/13/24 - Blog - Ortus Solutions - BoxLang 1.0.0 Beta 14 Launched - 6 New Features, 4 Improvements, 16 Bugs Fixed
https://www.ortussolutions.com/blog/boxlang-100-beta-14-launched
9/19/24 - Blog - Ortus Solutions - BoxLang 1.0.0 Beta 15 Launched - 5 New Features, 10 Improvements, 22 Bugs Fixed
https://www.ortussolutions.com/blog/boxlang-100-beta-15-launched
Webinars, Meetups and Workshops
ICYMI - Online ColdFusion Meetup - "Options for running Redis (Valkey/etc), locally or as-a-service", w/ Charlie Arehart
You may be considering use of Redis (or alternatives like Valkey), whether for caching or as a document store/db or as an alternative repository for sessions (such as is an option for ColdFusion sessions since CF2016). But before you can USE Redis you need to have some IMPLEMENTATION of it. In this session, veteran server troubleshooter Charlie Arehart will review different approaches for implementing Redis (and/or Valkey, and other plug-compatible replacements): with most approaches being free, while some come at a...
Mahmoud Mousa releases Sidekick, a tool for hosting side projects on a cheap VPS, Ryan Dahl has had enough of Oracle bogarting "JavaScript" but not even using it, Thomas Rampelberg's kty is a sweet terminal for Kubernetes, Redis users are considering alternatives after their relicense & a bunch of smart JS folks wrote up nine Node.js pillars.
David Neal, developer advocate and Asana content creator, discusses his talk, The Illustrated Guide to Node.js. David shares insights from his 10-year journey with Node.js, discussing its origins, use cases, and why it remains a vital tool for developers, offering a look at JavaScript's evolution and practical tips for navigating the Node.js ecosystem. Links https://reverentgeek.com https://twitter.com/reverentgeek https://techhub.social/@reverentgeek https://staging.bsky.app/profile/reverentgeek.com https://www.threads.net/@reverentgeek https://github.com/reverentgeek https://www.youtube.com/ReverentGeek https://www.linkedin.com/in/davidneal We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com (mailto:emily.kochanekketner@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today. (https://logrocket.com/signup/?pdr) Special Guest: David Neal.
Redis is an in-memory database that can be used for caching, vector search, and as a message broker. Brian Sam-Bodden is a Senior Applied AI Engineer at Redis. He joins the show to talk about his work and AI at the company. Sean's been an academic, startup founder, and Googler. He has published works covering…
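For readers who have not used Redis in the two roles the description mentions, caching and message brokering, here is a minimal sketch with the redis-py client. It assumes the redis package is installed (pip install redis) and a local server on the default port; the key and channel names are invented for the example.

```python
# Minimal Redis sketch: a TTL cache entry plus fire-and-forget pub/sub messaging.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Caching: store a computed value with a TTL so repeated reads skip the expensive work.
r.set("cache:user:42:profile", '{"name": "Ada"}', ex=300)  # expire after 5 minutes
print(r.get("cache:user:42:profile"))

# Message broker: publish to a channel and read it back on a subscriber connection.
sub = r.pubsub()
sub.subscribe("orders")
r.publish("orders", "order-1001 created")
for message in sub.listen():
    if message["type"] == "message":  # skip the subscribe confirmation
        print("received:", message["data"])
        break
```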
The US scrutinizes Chinese telecoms. Indonesia's national datacenter is hit with ransomware. RedJulliett targets organizations in Taiwan. Researchers can tell where you are going by how fast you get there. A previously dormant botnet targeting Redis servers becomes active. Thousands of customers may have had info compromised in an attack on Levi's. A new industry alliance hopes to prevent memory-based cyberattacks. Guest Seeyew Mo, Assistant National Cyber Director, Office of the National Cyber Director at the White House, shares the nuances of the White House's skills-based approach with N2K President Simone Petrella. Assange agrees to a plea deal. Our 2024 N2K CyberWire Audience Survey is underway, make your voice heard and get in the running for a $100 Amazon gift card. Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign-up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn. CyberWire Guest On our Solution Spotlight, guest Seeyew Mo, Assistant National Cyber Director, Office of the National Cyber Director at the White House, shares the nuances of the White House's skills-based approach (and how it's not only about hiring) with N2K President Simone Petrella. Seeyew shares a progress report on the National Cyber Workforce and Education Strategy nearly one year out. For more information, you can visit the press release: National Cyber Director Encourages Adoption of Skill-Based Hiring to Connect Americans to Good-Paying Cyber Jobs. The progress report Seeyew and Simone discuss can be found here: National Cyber Workforce and Education Strategy: Initial Stages of Implementation. Selected Reading Exclusive: US probing China Telecom, China Mobile over internet, cloud risks (Reuters) Indonesian government datacenter locked down in $8M ransomware rumble (The Register) Taiwanese tech firms, universities, religious groups among targets in cyber-espionage campaign (The Record) New security loophole allows spying on internet users' online activity (HelpNet Security) P2PInfect botnet targets REdis servers with new ransomware module (Bleeping Computer) Credential Stuffing Attack Hits 72,000 Levi's Accounts (Infosecurity Magazine) CHERI Backers Form Alliance to Promote Memory Safety Chip (GovInfo Security) Julian Assange, WikiLeaks Founder, Agrees to Plead Guilty in Deal With U.S. (The New York Times) Share your feedback. We want to ensure that you are getting the most out of the podcast. Please take a few minutes to share your thoughts with us by completing our brief listener survey as we continually work to improve the show. Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here's our media kit. Contact us at cyberwire@n2k.com to request more info. The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices