Podcasts about Crud

  • 306 PODCASTS
  • 496 EPISODES
  • 52m AVG DURATION
  • 1 EPISODE EVERY OTHER WEEK
  • Sep 2, 2025 LATEST

POPULARITY

(popularity trend chart, 2017-2024)


Best podcasts about Crud

Latest podcast episodes about Crud

Kodsnack in English
Kodsnack 658 - Failure of ergonomics, with Taylor Troesh

Kodsnack in English

Play Episode Listen Later Sep 2, 2025 46:14


Fredrik talks to Taylor Troesh about packaging things, generating code, and database evolution. Why is it so hard to package and build things? Is it a failure of ergonomics? Is there hope for change? We also discuss generating code using LLMs, and Taylor presents his workflow of using them to generate projects from scratch, starting over if more fundamental changes are needed. After that, we dig into databases and SQL, and Taylor has many thoughts and opinions about how they can be used and might evolve. Finally, we discuss other interesting projects, keeping track of ideas, what the OPTC is, and why you should cut down a palm tree. Recorded during Øredev 2024.

The episode is sponsored by Ellipsis - let us edit your podcast and make it sound just as good as Kodsnack! With more than ten years and 1200 episodes of experience, Ellipsis gets your podcast edited, chapterized, and described with all related links in a prompt and professional manner. Thank you Cloudnet for sponsoring our VPS!

Comments, questions or tips? We are @kodsnack, @tobiashieta, @oferlund and @bjoreman on Twitter, have a page on Facebook and can be emailed at info@kodsnack.se if you want to write longer. We read everything we receive. If you enjoy Kodsnack we would love a review in iTunes! You can also support the podcast by buying us a coffee (or two!) through Ko-fi.
Links: Taylor Taylor's keyboard-rich desk setup Taylor's Øredev 2024 talk: How to flatpack programs The IKEA hacking community (or one of them) James Mickens Redux The flux architecture Jquery Toki pona APL Zig SNOBOL Actor model Jq Lisp Scrapscript - Taylor's own language HTMX CRUD Elm Support us on Ko-fi Cursor Neovim Avante - a Cursor alternative for Neovim Sam Altman Sam Colt Sam Morse Postgresql Connecting directly to the database - Svante Richter's talk Supabase SQL Some of Taylor's writings about SQL PRQL - Pipelined relational query language FQL Regex Foundationdb Ellipsis - sponsor of the week: we edit Kodsnack, and we can edit your podcast too! Offensive horticulture A history of microwave ovens Scrapsheets Game of life Trailer buses Follow-up links, thanks to unvisual: Bruck - "a type of bus or coach built to combine goods and passenger transport" Skvader - a Swedish bruck The timeless way of software - Taylor talks about Christopher Alexander, just like we did in episode 657!

Titles: Nothing besides IKEA I did not besmirch the reputation How strange we package things I don't think I have any advice Failure of ergonomics I do have hope Drinking from the well Brainless CRUD-stuff (I have) No qualms with Elm During the binges Fifteen math professors Tilting against palmtrees OPTC

Igreja Missionária Evangélica Maranata
O céu é para todos! (Heaven is for everyone!) - Patrícia Crud

Igreja Missionária Evangélica Maranata

Play Episode Listen Later Aug 24, 2025 52:50


O céu é para todos! (Heaven is for everyone!) - Patrícia Crud, by Igreja Missionária Evangélica Maranata do Recreio. To learn more about Maranata: Instagram: https://www.instagram.com/imemaranata/ Facebook: https://www.facebook.com/imemaranata Site: https://www.igrejamaranata.com.br/ YouTube channel: https://www.youtube.com/channel/UCa1jcJx-DIDqu_gknjlWOrQ God bless you.

The Group Chat
#122 - holy crud i found the password...

The Group Chat

Play Episode Listen Later Aug 15, 2025 159:30


Did we cover everything? Maybe. Was it all in order? No. Were we all over the place? Yes! Awesome. | VISUAL PODCAST - "THE GROUP CHAT"

Les Cast Codeurs Podcast
LCC 329 - L'IA, ce super stagiaire qui nous fait travailler plus (AI, the super intern that makes us work more)

Les Cast Codeurs Podcast

Play Episode Listen Later Aug 14, 2025 120:24


Arnaud and Guillaume explore the evolution of the Java ecosystem with Java 25, Spring Boot and Quarkus, as well as the latest trends in artificial intelligence with new models such as Grok 4 and Claude Code. The hosts also take stock of cloud infrastructure and the MCP and CLI debates, while discussing the impact of AI on developer productivity and the management of technical debt. Recorded on August 8, 2025. Download the episode LesCastCodeurs-Episode-329.mp3 or watch it on YouTube.

News

Languages

Java 25: JEP 515: Ahead-of-Time method profiling https://openjdk.org/jeps/515
JEP 515 aims to improve the startup and warmup time of Java applications. The idea is to collect method execution profiles during a prior run and make them immediately available when the virtual machine starts, so the JIT compiler can generate native code from the very beginning instead of waiting for the application to warm up. The change requires no modification to application, library, or framework code; integration happens through the existing AOT cache creation commands. See also https://openjdk.org/jeps/483 and https://openjdk.org/jeps/514

Java 25: JEP 518: JFR cooperative sampling https://openjdk.org/jeps/518
JEP 518 aims to improve the stability and scalability of the JDK Flight Recorder (JFR) execution profiling feature. The sampling mechanism for Java thread call stacks is reworked to run only at safepoints, which reduces the risk of instability. The new model enables safer stack walking, notably with the ZGC garbage collector, and more efficient sampling with support for concurrent stack walking.
The JEP adds a new event, SafepointLatency, which records the time a thread takes to reach a safepoint. The approach makes sampling lighter and faster, because the work of building stack traces is delegated to the target thread itself.

Libraries

Spring Boot 4 M1 https://spring.io/blog/2025/07/24/spring-boot-4-0-0-M1-available-now
Spring Boot 4.0.0-M1 updates many internal and external dependencies to improve stability and compatibility. Types annotated with @ConfigurationProperties can now reference types located in external modules thanks to @ConfigurationPropertiesSource. SSL certificate validity information has been simplified, removing the WILL_EXPIRE_SOON state in favor of VALID. Micrometer metrics auto-configuration now supports the @MeterTag annotation on methods annotated with @Counted and @Timed, evaluated via SpEL. @ServiceConnection support for MongoDB now includes integration with Testcontainers' MongoDBAtlasLocalContainer. Some features and APIs have been deprecated, with recommendations to migrate custom endpoints away from the Spring Boot 2-era versions. Milestone and release candidate builds are now published to Maven Central, in addition to the traditional Spring repository. A migration guide has been published to ease the transition from Spring Boot 3.5 to 4.0.0-M1.

From Spring Boot to Quarkus: a migration story https://blog.stackademic.com/we-switched-from-spring-boot-to-quarkus-heres-the-ugly-truth-c8a91c2b8c53
A team migrated a Java application from Spring Boot to Quarkus to gain performance and reduce memory consumption. The goal was also to optimize the application for cloud native deployment.
The migration was more complex than expected, notably because of incompatibilities with certain libraries and a less mature Quarkus ecosystem. Code had to be reworked and some Spring Boot-specific features abandoned. The performance and memory gains are real, but the migration demands a genuine adaptation effort. The Quarkus community is progressing, but support remains limited compared to Spring Boot. Conclusion: Quarkus is interesting for new projects or ones ready to be rewritten, but migrating an existing project is a real challenge.

LangChain4j 1.2.0: new features and improvements https://github.com/langchain4j/langchain4j/releases/tag/1.2.0
Stable modules: langchain4j-anthropic, langchain4j-azure-open-ai, langchain4j-bedrock, langchain4j-google-ai-gemini, langchain4j-mistral-ai and langchain4j-ollama are now stable at 1.2.0. Experimental modules: most other LangChain4j modules are at 1.2.0-beta8 and remain experimental/unstable. Updated BOM: langchain4j-bom has been bumped to 1.2.0, covering the latest versions of all modules. Main improvements: support for reasoning/thinking in models; partial tool calls in streaming; an MCP option to automatically expose resources as tools; for OpenAI, the ability to set custom request parameters and to access raw HTTP responses and SSE events; better error handling and documentation; and metadata filtering for Infinispan (cc Katia). And 1.3.0 is already out: https://github.com/langchain4j/langchain4j/releases/tag/1.3.0 with two new experimental modules, langchain4j-agentic and langchain4j-agentic-a2a, which introduce a set of abstractions and utilities for building agentic applications.

Infrastructure

This time it really is the year of Linux on the desktop!
https://www.lesnumeriques.com/informatique/c-est-enfin-arrive-linux-depasse-un-seuil-historique-que-microsoft-pensait-intouchable-n239977.html
Linux has crossed the 5% desktop share threshold in the USA. The progress is largely explained by the rise of Linux-based systems in professional environments, on servers, and in some consumer uses. Microsoft, long dominant with Windows, saw this threshold as hard to reach in the short term. Linux's success is also fueled by the growing popularity of open source distributions that are lighter, more customizable, and suited to varied uses. The cloud, IoT, and server infrastructures use Linux massively, which contributes to the overall rise. This symbolic shift marks a change of balance in the operating system ecosystem, though Windows still retains a strong presence in some segments, notably among consumers and traditional enterprises. The evolution testifies to the dynamism and growing maturity of Linux solutions, which have become credible, robust alternatives to proprietary offerings.

Cloud

Cloudflare 1.1.1.1 drops off the internet for an hour https://blog.cloudflare.com/cloudflare-1-1-1-1-incident-on-july-14-2025/
On July 14, 2025, the Cloudflare 1.1.1.1 public DNS service suffered a major 62-minute outage, making the service unavailable for the majority of users worldwide and also causing intermittent degradation of the Gateway DNS service. The incident occurred after an update to the topology of Cloudflare services activated a configuration error introduced in June 2025. Because of that error, prefixes destined for the 1.1.1.1 service were accidentally included in a new Data Localization Suite service, which disrupted anycast routing.
As a result, users could not resolve domain names via 1.1.1.1, making most internet services unreachable for them. This was not the result of an attack or a BGP problem, but an internal configuration error. Cloudflare quickly identified the cause, corrected the configuration, and put measures in place to prevent this type of incident in the future. The service returned to normal after about an hour of unavailability. The incident underlines the complexity and sensitivity of anycast infrastructures and the need for rigorous management of network configurations.

Web

The evolution of Node.js best practices https://kashw1n.com/blog/nodejs-2025/
Node.js development in 2025 is turning toward web standards, with fewer external dependencies and a better developer experience. ES Modules (ESM) by default: replacing CommonJS for better tooling and alignment with the web, with the node: prefix for built-in modules to avoid naming conflicts. Built-in web APIs: fetch, AbortController, and AbortSignal are now native, reducing the need for libraries like axios. Built-in test runner: no more need for Jest or Mocha in most cases, including a watch mode and coverage reports. Advanced async patterns: heavier use of async/await with Promise.all() for parallelism and AsyncIterators for event streams. Worker threads for parallelism: for CPU-heavy tasks, to avoid blocking the main event loop. Improved developer experience: a built-in --watch mode (replacing nodemon) and --env-file support (replacing dotenv). Security and performance: an experimental permission model to restrict access, and native performance hooks for monitoring.
Simplified distribution: single-executable applications make it easier to ship apps and command-line tools.

Apache ECharts 6 released after 12 years! https://echarts.apache.org/handbook/en/basics/release-note/v6-feature/
Apache ECharts 6.0: official release after 12 years of evolution, with 12 major upgrades for data visualization along three key dimensions. More professional visual presentation: a new default theme with a modern design, dynamic theme switching, and dark mode support. Pushing the limits of data expression: new chart types (chord chart, beeswarm chart) and new features (jittering for dense scatter plots, broken axes, improved candlestick charts). Freedom of composition: a new matrix coordinate system, improved custom series (code reuse, npm publishing), new bundled custom charts (violin, contour, etc.), and optimized axis label layout.

Data and Artificial Intelligence

Grok 4 took itself for a Nazi because of its tools https://techcrunch.com/2025/07/15/xai-says-it-has-fixed-grok-4s-problematic-responses/
At launch, Grok 4 generated offensive answers, notably calling itself "MechaHitler" and adopting antisemitic statements. The behavior came from an automatic web search that misinterpreted a viral meme as truth. Grok also aligned its controversial answers with the opinions of Elon Musk and xAI, which amplified the bias. xAI determined these lapses were due to an internal update that added instructions encouraging offensive humor and alignment with Musk.
To fix this, xAI removed the faulty code, reworked the system prompts, and imposed guidelines requiring Grok to perform independent analysis using diverse sources. Grok must now avoid bias, no longer adopt politically incorrect humor, and analyze sensitive subjects objectively. xAI apologized, clarifying that the lapses were due to a prompt problem, not the model itself. The incident highlights the persistent alignment and safety challenges of AI models facing indirect injections from online content. The fix is not just a technical patch but an example of the major ethical and accountability stakes in deploying AI at scale.

Guillaume has published a whole series of articles on agentic patterns with the ADK framework for Java https://glaforge.dev/posts/2025/07/29/mastering-agentic-workflows-with-adk-the-recap/
A first article explains how to split tasks into AI sub-agents: https://glaforge.dev/posts/2025/07/23/mastering-agentic-workflows-with-adk-sub-agents/
A second article details how to organize agents sequentially: https://glaforge.dev/posts/2025/07/24/mastering-agentic-workflows-with-adk-sequential-agent/
A third article explains how to parallelize independent tasks: https://glaforge.dev/posts/2025/07/25/mastering-agentic-workflows-with-adk-parallel-agent/
And finally, how to build improvement loops: https://glaforge.dev/posts/2025/07/28/mastering-agentic-workflows-with-adk-loop-agents/
All of it in Java, of course :slightly_smiling_face:

Six weeks of coding with Claude https://blog.puzzmo.com/posts/2025/07/30/six-weeks-of-claude-code/
Orta shares his experience after six weeks of daily use of Claude Code, which has profoundly changed the way he codes.
He no longer really "codes" line by line: he describes what he wants, lets Claude propose a solution, then corrects or adjusts. This lets him focus on the result rather than the implementation, like going from painting to polaroids. Claude proves particularly useful for maintenance tasks: migrations, refactors, code cleanup. He always stays in control, reviews every generated diff, and guides the AI with well-framed prompts. He notes that it takes a few weeks to get the hang of it: learning to split up tasks and state expectations clearly. Simple tasks become near-instant, but complex tasks still require experience and judgment. Claude Code is seen as a very good copilot, but it does not replace the developer who understands the whole system. The main gain is faster feedback and a much shorter iteration loop. This kind of tool could well redefine how we think about and structure software development in the medium term.

Claude Code and MCP servers: how to turn your terminal into a superpowered assistant https://touilleur-express.fr/2025/07/27/claude-code-et-les-serveurs-mcp-ou-comment-transformer-ton-terminal-en-assistant-surpuissant/
Nicolas continues his study of Claude Code and explains how to use MCP servers to make Claude far more effective. The Context7 MCP shows how to feed the AI up-to-date technical documentation (for example, Next.js 15) to avoid hallucinations or errors. The Task Master MCP, another MCP server, turns a requirements document (PRD) into atomic, estimated tasks organized into a work plan.
The Playwright MCP lets Claude drive browsers and run E2E tests, and the Digital Ocean MCP makes it easy to deploy the application to production. It is not all ideal: quotas are hit within a few hours on a small application, and there are cases where it remains far more efficient to do the work yourself (for an experienced coder). Nicolas follows up with an article on writing an MVP in about 20 hours: https://touilleur-express.fr/2025/07/30/comment-jai-code-un-mvp-en-une-vingtaine-dheures-avec-claude-code/

Augmented development: a politically correct opinion, but still... https://touilleur-express.fr/2025/07/31/le-developpement-augmente-un-avis-politiquement-correct-mais-bon/
Nicolas shares a nuanced (and slightly provocative) opinion on augmented development, where an AI like Claude Code assists the developer without replacing them. He rejects the idea that it is "too magical" or "too easy": it is a logical evolution of our craft, not a shortcut for the lazy. For him, a good dev is still the one who structures their thinking well and knows how to frame a problem, break it down, and validate, even if the AI helps code faster. He recounts building an OAuth app, tested, styled, and deployed within a few hours, without ever leaving the terminal thanks to Claude. This kind of tooling changes our relationship with time: we go from "I'll think about it" to "I'll try a roughly working version right now". He embraces this fast, imperfect approach: better a rough version shipped quickly than a project blocked by perfectionism. To him, AI is a super intern: never tired, sometimes off the mark, but devilishly productive when well briefed. He concludes that "augmented dev" does not replace good developers, but average developers had better get on board or risk being left behind.
ChatGPT launches study mode: interactive step-by-step learning https://openai.com/index/chatgpt-study-mode/
OpenAI now offers a study mode in ChatGPT that guides users step by step instead of handing over the answer directly. The mode aims to encourage active thinking and deep learning. It uses custom instructions to ask questions and provide explanations adapted to the user's level, helps manage cognitive load, stimulates metacognition, and offers structured answers to ease progressive understanding of a subject. Available now for logged-in users, the mode will be integrated into ChatGPT Edu. The goal is to turn ChatGPT into a genuine digital tutor that helps students absorb knowledge better. Apparently Gemini has just shipped a similar feature.

OpenAI launches GPT-OSS https://openai.com/index/introducing-gpt-oss/ https://openai.com/index/gpt-oss-model-card/
OpenAI has launched GPT-OSS, its first open-weight model family since GPT-2. Two models are available, gpt-oss-120b and gpt-oss-20b, mixture-of-experts models designed for reasoning and agentic tasks. The models are distributed under the Apache 2.0 license, allowing free use and customization, including for commercial applications. gpt-oss-120b approaches the performance of OpenAI o4-mini, while gpt-oss-20b is comparable to o3-mini. OpenAI also open-sourced a rendering tool called Harmony, in Python and Rust, to ease adoption. The models are optimized to run locally and are supported by platforms such as Hugging Face and Ollama.
OpenAI conducted safety research to ensure the models could not be fine-tuned for malicious use in the biological, chemical, or cyber domains.

Anthropic launches Opus 4.1 https://www.anthropic.com/news/claude-opus-4-1
Anthropic has released Claude Opus 4.1, an update to its language model focused on improved performance in coding, reasoning, and research and data-analysis tasks. The model scored 74.5% on the SWE-bench Verified benchmark, an improvement over the previous version. It excels notably at multi-file code refactoring and can carry out in-depth research. Claude Opus 4.1 is available to paying Claude users as well as via the API, Amazon Bedrock, and Google Cloud's Vertex AI, at the same prices as Opus 4. It is presented as a drop-in replacement for Claude Opus 4, with higher performance and precision on real-world programming tasks.

OpenAI summer update: GPT-5 is out https://openai.com/index/introducing-gpt-5/
Details: https://openai.com/index/gpt-5-new-era-of-work/ https://openai.com/index/introducing-gpt-5-for-developers/ https://openai.com/index/gpt-5-safe-completions/ https://openai.com/index/gpt-5-system-card/
Major improvement in cognitive abilities: GPT-5 shows markedly better reasoning, abstraction, and understanding than previous models. Two main variants: gpt-5-main, fast and efficient for general tasks, and gpt-5-thinking, slower but specialized in complex tasks requiring deep reflection. Built-in intelligent router: the system automatically selects the variant best suited to the task (fast or thoughtful), without user intervention.
Further extended context window: GPT-5 can process longer texts (up to 1 million tokens in some versions), useful for entire documents or projects. Significantly fewer hallucinations: more reliable answers, with fewer invented errors or false claims. More neutral, less sycophantic behavior: trained to better resist excessive alignment with the user's opinions. Better at following complex instructions: GPT-5 understands long, implicit, or nuanced instructions better. "Safe completions" approach: refusals to answer are replaced with useful but safe responses; the model tries to answer cautiously rather than block. Ready for large-scale professional use: optimized for enterprise work such as writing, programming, summarization, automation, and task management. Specific coding improvements: GPT-5 is better at writing code, understanding complex software contexts, and using development tools. Faster, smoother user experience: the system reacts faster thanks to optimized orchestration between the sub-models. Strengthened agentic capabilities: GPT-5 can serve as the basis for autonomous agents that accomplish goals with little human intervention. Mature multimodality (text, image, audio): GPT-5 integrates understanding of multiple formats more fluidly, in a single model. Developer-focused features: clearer documentation, a unified API, and more transparent, customizable models. Greater contextual personalization: the system adapts better to the user's style, tone, and preferences without repeated instructions. Optimized energy and hardware use: thanks to the internal router, resources are used more efficiently according to task complexity.
Secure integration into ChatGPT products: already deployed in ChatGPT, with immediate benefits for Pro and enterprise users. A unified model for all uses: a single system that can go from light conversation to scientific analysis or complex code. Safety and alignment first: GPT-5 was designed from the start to minimize abuse, bias, and undesirable behavior. Not yet AGI: OpenAI insists that despite its impressive capabilities, GPT-5 is not artificial general intelligence.

No, juniors are not obsolete despite AI! (says GitHub) https://github.blog/ai-and-ml/generative-ai/junior-developers-arent-obsolete-heres-how-to-thrive-in-the-age-of-ai/
AI is transforming software development, but junior developers are not obsolete. New learners are well positioned because they are already familiar with AI tools. The goal is to build skills for working with AI, not to be replaced; creativity and curiosity are key human qualities. Five ways to stand out: use AI (e.g. GitHub Copilot) to learn faster, not just code faster (tutor mode, temporarily disabling autocomplete); build public projects that demonstrate your skills (including in AI); master the essential GitHub workflows (GitHub Actions, open source contribution, pull requests); sharpen your expertise by reviewing code (ask questions, look for patterns, take notes); and debug smarter and faster with AI (e.g. Copilot Chat for explanations, fixes, tests).

Write your first AI agent with A2A and WildFly, by Emmanuel Hugonnet https://www.wildfly.org/news/2025/08/07/Building-your-First-A2A-Agent/
Agent2Agent (A2A) protocol: an open standard for universal interoperability of AI agents.
It enables efficient communication and collaboration between agents from different vendors and frameworks, creating unified multi-agent ecosystems that automate complex workflows. The article is a guide to building a first A2A agent (a weather agent) in WildFly, using the A2A Java SDK for Jakarta Servers, the WildFly AI Feature Pack, an LLM (Gemini), and a Python tool (MCP), with the agent conforming to A2A v0.2.5. Prerequisites: JDK 17+, Apache Maven 3.8+, a Java IDE, a Google AI Studio API key, Python 3.10+, uv. Steps to build the weather agent: create the LLM service, a Java interface (WeatherAgent) using LangChain4j to talk to an LLM and a Python MCP tool (get_alerts and get_forecast functions); define the A2A agent via CDI, with an Agent Card providing the agent's metadata (name, description, URL, capabilities, skills such as "weather_search") and an Agent Executor that handles incoming A2A requests, extracts the user message, calls the LLM service, and formats the response; expose the agent by registering a JAX-RS application for the endpoints; then deploy and test by configuring Google's A2A-inspector tool (in a Podman container), building the Maven project, setting environment variables (e.g. GEMINI_API_KEY), and starting the WildFly server. Conclusion: a minimal transformation turns an AI application into an A2A agent, enabling collaboration and information sharing between AI agents regardless of their underlying infrastructure.

Tooling

IntelliJ IDEA moves to a unified distribution https://blog.jetbrains.com/idea/2025/07/intellij-idea-unified-distribution-plan/
Starting with version 2025.3, IntelliJ IDEA Community Edition will no longer be distributed separately. A single unified IntelliJ IDEA will combine the features of the Community and Ultimate editions, with the advanced Ultimate features accessible via subscription.
Users without a subscription will get a free version richer than the current Community Edition. The unification aims to simplify the user experience and reduce the differences between editions. Community users will be migrated automatically to the new unified version, and Ultimate features can be enabled temporarily with a single click. If an Ultimate subscription expires, the user can keep using the installed version with a limited set of free features, without interruption. The change reflects JetBrains' commitment to open source and to adapting to the community's needs.

YAML anchor support in GitHub Actions https://github.com/actions/runner/issues/1182#issuecomment-3150797791
To avoid duplicating content in a workflow, anchors let you insert reusable chunks of YAML. The feature had been awaited for years and has long been available in GitLab; it was rolled out on August 4. Be careful not to overuse it, as such documents are not always easy to read.

Gemini CLI adds custom commands like Claude https://cloud.google.com/blog/topics/developers-practitioners/gemini-cli-custom-slash-commands
But they are in TOML format, so they cannot be shared with Claude :disappointed:

Automate your AI workflows with Claude Code hooks https://blog.gitbutler.com/automate-your-ai-workflows-with-claude-code-hooks/
Claude Code offers hooks that run scripts at different points in a session, for example at the start, when tools are used, or at the end. These hooks make it easy to automate tasks such as Git branch management, sending notifications, or integrating with other tools. A simple example is sending a desktop notification at the end of a session.
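As a rough sketch of what such a hook can look like, a "Stop" hook firing the end-of-session desktop notification described above might be declared like this in a Claude Code settings JSON file. The event name and nesting follow the Claude Code hooks schema as we understand it; treat this as an approximation to verify against Anthropic's documentation:

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "osascript -e 'display notification \"Session finished\" with title \"Claude Code\"'"
          }
        ]
      }
    ]
  }
}
```

The osascript command is macOS-specific; on Linux, notify-send would play the same role.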
- Hooks are configured via three separate JSON files depending on scope: user, project, or local.
- On macOS, sending notifications requires a specific permission granted through the "Script Editor" application.
- You need an up-to-date version of Claude Code to use these hooks.
- GitButler can now integrate with Claude Code through these hooks: https://blog.gitbutler.com/parallel-claude-code/

JetBrains' Git client soon available standalone https://lp.jetbrains.com/closed-preview-for-jetbrains-git-client/
- Requested by some users for a long time.
- It would be a graphical client in the same vein as GitButler, SourceTree, etc.

Apache Maven 4 is coming... and the mvnup utility can help you upgrade https://maven.apache.org/tools/mvnup.html
- Fixes known incompatibilities.
- Cleans up redundancies and default values (versions, for example) that are unnecessary with Maven 4.
- Reformats the POM according to Maven conventions.

A GitHub Action for Gemini CLI https://blog.google/technology/developers/introducing-gemini-cli-github-actions/
- Google has launched Gemini CLI GitHub Actions, an AI agent that acts as a "coding teammate" for GitHub repositories.
- The tool is free and designed to automate routine tasks such as issue triage, pull-request review, and other development chores.
- It acts both as an autonomous agent and as a collaborator that developers can call on demand, for instance by mentioning it in an issue or pull request.
- It is based on Gemini CLI, an open-source AI agent that brings the Gemini model directly into the terminal. It runs on the GitHub Actions infrastructure, which isolates processes in separate containers for security.
- Three open-source workflows are available at launch: intelligent issue triage, pull-request review, and on-demand collaboration.
No need for MCP, code is all you need https://lucumr.pocoo.org/2025/7/3/tools/
- Armin stresses that he is not a fan of the MCP (Model Context Protocol) in its current form: it lacks composability and demands too much context.
- He notes that for the same task (e.g. GitHub), using the CLI is often faster and more context-efficient than going through an MCP server.
- In his view, code remains the simplest and most reliable solution, especially for automating repetitive tasks. He prefers writing clear scripts over relying on LLM inference: it makes verification and maintenance easier and avoids subtle errors. If a recurring task is worth automating, better to do it with reusable code than to let the AI guess every time.
- He illustrates this by converting his entire blog from reStructuredText to Markdown: rather than using AI directly, he asked Claude to generate a complete script, with AST parsing, file comparison, validation and iteration.
- This LLM→code→LLM workflow (analysis and validation) gave him confidence in the final result, while keeping a human in control of the process.
- He argues that MCP does not allow this kind of reliable automated pipeline, because it introduces too much inference and too much variation per call. For him, code remains the best way to keep control, reproducibility and clarity in automated workflows.

MCP vs CLI... https://www.async-let.com/blog/my-take-on-the-mcp-verses-cli-debate/
- Cameron recounts his experience building the XcodeBuildMCP server, which helped him understand the debate between serving AI through MCP and letting AI use the system's CLIs directly.
- In his view, CLIs remain preferable for expert developers who want control, transparency, performance and simplicity.
- But MCP servers shine on complex workflows, persistent contexts and security constraints, and they make access easier for less experienced users.
- He acknowledges the criticism that MCP consumes too much context ("context bloat") and that CLI calls can be faster and easier to understand. However, he stresses that many problems come from the quality of client implementations, not from the MCP protocol itself.
- A good MCP server can offer carefully designed tools that make the AI's life easier (for example, returning structured data rather than raw text to parse).
- He appreciates MCP's ability to offer stateful operations (sessions, memory, captured logs), which CLIs do not handle naturally.
- Some scenarios cannot work through a CLI at all (no shell available), whereas MCP, as a client-independent protocol, remains usable by any client.
- His verdict: there is no universal solution; each context deserves evaluation, and neither MCP nor CLI should be imposed at all costs.

Jules, Google's free asynchronous coding agent, is out of beta and available to everyone https://blog.google/technology/google-labs/jules-now-available/
- Jules, an asynchronous coding agent powered by Gemini 2.5 Pro, is now publicly available.
- During the beta phase: 140,000+ code improvements and feedback from thousands of developers.
- Improvements: user interface, bug fixes, configuration reuse, GitHub Issues integration, multimodal support. Gemini 2.5 Pro improves coding plans and code quality.
- New structured tiers: Introductory, Google AI Pro (5x higher limits), Google AI Ultra (20x higher limits). Immediate rollout for Google AI Pro and Ultra subscribers, including eligible students (one free year of AI Pro).
Architecture

Putting a value on technical-debt reduction: a real challenge https://www.lemondeinformatique.fr/actualites/lire-valoriser-la-reduction-de-la-dette-technique-mission-impossible-97483.html
- Technical debt is a poorly understood concept that is hard to value financially in front of executive management.
- CIOs struggle to measure this debt precisely, to allocate dedicated budgets, and to prove a clear return on investment. That difficulty makes it hard to prioritize debt-reduction projects against other initiatives deemed more urgent or strategic.
- Some companies are gradually integrating technical-debt management into their development processes, and approaches such as Software Crafting aim to improve code quality to limit the accumulation of debt.
- The lack of suitable tools for measuring progress makes the effort even more complex. In short, reducing technical debt remains a delicate mission that requires innovation, method and internal awareness.

Don't mock me... https://martinelli.ch/why-i-dont-use-mocking-frameworks-and-why-you-might-not-need-them-either/ https://blog.tremblay.pro/2025/08/not-using-mocking-frmk.html
- The author prefers hand-written fakes or stubs over mocking frameworks such as Mockito or EasyMock.
- Mocking frameworks isolate code, but they often lead to tight coupling between tests and implementation details, and to tests that validate the mock rather than the real behavior.
- Two core principles guide his approach: favor a functional design with pure business logic (side-effect-free functions), and control your test data, for example by using real databases (via Testcontainers) rather than simulating them.
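The hand-written-fake approach described above can be sketched in a few lines of plain Java. The domain names (UserRepository, RegistrationService) are invented for the example, not taken from the article; the point is that the fake implements the same interface the production code uses, with no mocking framework involved:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class FakeRepoDemo {

    record User(String email, String name) {}

    interface UserRepository {
        Optional<User> findByEmail(String email);
        void save(User user);
    }

    // The hand-rolled fake: behaves like a repository, backed by a map.
    static class InMemoryUserRepository implements UserRepository {
        private final Map<String, User> byEmail = new HashMap<>();
        public Optional<User> findByEmail(String email) {
            return Optional.ofNullable(byEmail.get(email));
        }
        public void save(User user) {
            byEmail.put(user.email(), user);
        }
    }

    // Business logic under test: rejects duplicate registrations.
    static class RegistrationService {
        private final UserRepository repository;
        RegistrationService(UserRepository repository) {
            this.repository = repository;
        }
        boolean register(User user) {
            if (repository.findByEmail(user.email()).isPresent()) {
                return false;
            }
            repository.save(user);
            return true;
        }
    }

    public static void main(String[] args) {
        RegistrationService service =
                new RegistrationService(new InMemoryUserRepository());
        System.out.println(service.register(new User("ada@example.org", "Ada")));   // true
        System.out.println(service.register(new User("ada@example.org", "Ada2"))); // false
    }
}
```

The test exercises real behavior (a duplicate is rejected because the first user was actually stored), rather than verifying calls on a mock.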
- In his practice, the only cases where an external mock is used involve external HTTP services, and even then he prefers to fake only the transport rather than the business behavior.
- The result: tests that are simpler, faster to write, more reliable, and less brittle as the code evolves.
- The article concludes that if you design your code well, you may not need mocking frameworks at all. Henri Tremblay's blog post in response adds some nuance to these conclusions.

Methodologies

What makes a good PM? (Product Manager) An article by Chris Perry, a PM at Google: https://thechrisperry.substack.com/p/being-a-good-pm-at-google
- The PM role is hard: a demanding job where you have to be the most invested person on the team to ensure success.
- 1. Shipping is all that matters: the absolute priority. Better to ship and iterate quickly than to chase theoretical perfection; a shipped product lets you learn from reality.
- 2. Make people long for the open sea: the best way to move a project forward is to inspire the team with a strong, desirable vision. Show the "why".
- 3. Use your product every day: non-negotiable. It builds intuition and surfaces the real problems that user research does not always reveal.
- 4. Be a good friend: building genuine relationships and helping others is a key long-term success factor. Trust is the basis of fast execution.
- 5. Give more than you take: always look for ways to help and collaborate. Cooperation is the optimal long-term strategy; don't be possessive about your ideas.
- 6. Use the right lever: to get a decision, identify the right person with the power to say "yes", and don't get blocked by non-decision-makers' opinions.
- 7. Only go where you add value: fill the gaps, do the thankless work nobody else wants to do.
- Also know when to step away (from meetings, from projects) when you are not useful.
- 8. Success has many parents, failure is an orphan: if the product succeeds, it's a team success; if it fails, it's the PM's fault. The PM has to own the final responsibility.
- Conclusion: the PM is a conductor. They cannot play every instrument, but their role is to orchestrate everyone's work, with humility, into something harmonious.

Testing production-ready Spring Boot applications: key points https://www.wimdeblauwe.com/blog/2025/07/30/how-i-test-production-ready-spring-boot-applications/
- The author (Wim Deblauwe) details how he structures tests in a Spring Boot application intended for production.
- The project automatically includes the spring-boot-starter-test dependency, which bundles JUnit 5, AssertJ, Mockito, Awaitility, JsonAssert, XmlUnit and the Spring testing tools.
- Unit tests: target pure functions (records, utilities), tested simply with JUnit and AssertJ without starting the Spring context.
- Use-case tests: orchestrate the business logic, generally through use cases that rely on one or more repositories.
- JPA/repository tests: verify interactions with the database through CRUD operations (with a Spring context for the persistence layer).
- Controller tests: exercise the web endpoints (e.g. @WebMvcTest), often with MockBean to simulate dependencies.
- Full integration tests: start the entire Spring context (@SpringBootTest) to test the application as a whole.
- The author also mentions architecture tests, without going into detail in this article.
- The result: a test pyramid running from the fastest tests (unit) to the most complete (integration), guaranteeing reliability, speed and coverage without unnecessary overhead.
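The bottom of that pyramid, unit tests of pure functions on records, can be sketched without any Spring context at all. The domain below (an order-total calculation) is invented for the example, and plain asserts stand in for JUnit 5 + AssertJ so the sketch stays dependency-free:

```java
import java.math.BigDecimal;
import java.util.List;

public class PureFunctionTestDemo {

    // A record with a small piece of pure logic, as in the article's "unit test" tier.
    record OrderLine(BigDecimal unitPrice, int quantity) {
        BigDecimal total() {
            return unitPrice.multiply(BigDecimal.valueOf(quantity));
        }
    }

    // Pure utility: no I/O, no side effects, trivially unit-testable.
    static BigDecimal orderTotal(List<OrderLine> lines) {
        return lines.stream()
                .map(OrderLine::total)
                .reduce(BigDecimal.ZERO, BigDecimal::add);
    }

    public static void main(String[] args) {
        BigDecimal total = orderTotal(List.of(
                new OrderLine(new BigDecimal("19.99"), 2),
                new OrderLine(new BigDecimal("5.00"), 1)));
        System.out.println(total); // 44.98
    }
}
```

Tests like this run in milliseconds, which is what lets the base of the pyramid stay broad while the slower @SpringBootTest tier stays thin.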
Security

Bitwarden offers an MCP server so agents can access passwords https://nerds.xyz/2025/07/bitwarden-mcp-server-secure-ai/
- Bitwarden introduces an MCP (Model Context Protocol) server designed to integrate AI agents securely into password-management workflows.
- The server is local-first: all interactions and sensitive data stay on the user's machine, preserving the zero-knowledge encryption principle.
- Integration goes through the Bitwarden CLI, letting AI agents generate, retrieve, modify and lock credentials through secured commands. The server can be self-hosted for maximum control over the data.
- The MCP protocol is an open standard that connects AI agents to third-party data sources and tools in a uniform way, simplifying integrations between LLMs and applications.
- A demo with Claude (Anthropic's AI agent) shows the AI interacting with the Bitwarden vault: checking its status, unlocking the vault, generating or modifying credentials, all without direct human intervention.
- Bitwarden emphasizes a security-first approach but acknowledges the risks of autonomous AI use. A private local LLM is strongly recommended to limit vulnerabilities.

NVIDIA has a critical security flaw https://www.wiz.io/blog/nvidia-ai-vulnerability-cve-2025-23266-nvidiascape
- It is a container-escape flaw in the NVIDIA Container Toolkit, rated critical with a CVSS score of 9.0.
- The vulnerability lets a malicious container gain full root access on the host.
- The root cause is a misconfiguration of the toolkit's OCI hooks.
- Exploitation is very easy, for example with a Dockerfile of just three lines.
- The main risk is breaking the isolation between different customers on shared GPU cloud infrastructure.
- Affected versions include all versions of the NVIDIA Container Toolkit up to 1.17.7 and of the NVIDIA GPU Operator up to 25.3.1.
- To mitigate the risk, update to the latest patched versions. In the meantime, some of the problematic hooks can be disabled in the configuration to limit exposure.
- The flaw highlights the importance of hardening shared GPU environments and AI container management.

The Tea app data leak: the essentials https://knowyourmeme.com/memes/events/the-tea-app-data-leak
- Tea is an app launched in 2023 that lets women leave anonymous reviews of men they have dated.
- In July 2025, a major leak exposed roughly 72,000 sensitive images (selfies, ID documents) and more than 1.1 million private messages.
- The leak came to light after a user shared a link to download the compromised database.
- The affected data mostly concerned users who signed up before February 2024, when the app migrated to a more secure infrastructure.
- In response, Tea plans to offer identity-protection services to the affected users.

Flaw in the npm package is: a supply-chain attack https://socket.dev/blog/npm-is-package-hijacked-in-expanding-supply-chain-attack
- A phishing campaign targeting npm maintainers compromised several accounts, including that of the is package.
- Compromised versions of is (notably 3.3.1 and 5.0.0) contained a JavaScript malware loader targeting Windows systems.
- The malware gave attackers remote access over WebSocket, potentially allowing arbitrary code execution.
- The attack follows other compromises of popular packages such as eslint-config-prettier, eslint-plugin-prettier, synckit, @pkgr/core, napi-postinstall, and got-fetch. All of these packages were published without any commit or PR on their respective GitHub repositories, a sign of unauthorized access to maintainer tokens.
- The spoofed domain npnjs.com was used to harvest access tokens via deceptive phishing emails.
- The episode highlights the fragility of software supply chains in the npm ecosystem and the need for stronger security practices around dependencies.

Automated security reviews with Claude Code https://www.anthropic.com/news/automate-security-reviews-with-claude-code
- Anthropic has launched automated security features for Claude Code, its command-line AI coding assistant. They were introduced in response to the growing need to keep code secure as AI tools dramatically accelerate software development.
- The /security-review command: developers can run this command in their terminal to ask Claude to identify security vulnerabilities, including SQL injection risks, cross-site scripting (XSS), authentication and authorization flaws, and insecure data handling. Claude can also suggest and implement fixes.
- GitHub Actions integration: a new GitHub Action lets Claude Code automatically analyze every new pull request.
- The tool examines code changes for vulnerabilities, applies customizable rules to filter false positives, and comments directly on the pull request with the detected issues and recommended fixes.
- These features are designed to create a consistent security-review process and to plug into existing CI/CD pipelines, ensuring that no code reaches production without a baseline security review.

Law, society and organization

Google hires Windsurf's key people https://www.blog-nouvelles-technologies.fr/333959/openai-windsurf-google-deepmind-codage-agentique/
- Windsurf was supposed to be acquired by OpenAI.
- Google is not making an acquisition offer but is poaching a few key people from Windsurf, including its CEO. Windsurf therefore remains independent, but without some of its brains. The new leaders are the former heads of sales, so it is no longer really a tech company.
- Why did the $3 billion deal fall through? We don't know, but divergence over technological independence is possibly a factor.
- The defectors will work at DeepMind on agentic coding.

Opinion article: https://www.linkedin.com/pulse/dear-people-who-think-ai-low-skilled-code-monkeys-future-jan-moser-svade/
- Jan Moser criticizes those who think AI and low-skilled developers can replace competent software engineers.
- He cites the example of the Tea app, a safety platform for women, which exposed 72,000 user images because of a misconfigured Firebase and a lack of secure development practices. The absence of automated checks and good security practices is what made the leak possible.
- Moser warns that tools like AI cannot compensate for missing software-engineering skills, particularly around security, error handling and code quality.
- He calls for recognizing the value of skilled software engineers and for a more rigorous approach to software development.

YouTube rolls out age-estimation technology to identify teens in the United States https://techcrunch.com/2025/07/29/youtube-rolls-out-age-estimatation-tech-to-identify-u-s-teens-and-apply-additional-protections/
- A very topical subject, especially in the UK but not only there...
- YouTube is starting to roll out AI-based age-estimation technology to identify teenage users in the United States, regardless of the age declared at sign-up. The technology analyzes various behavioral signals, such as viewing history, the categories of videos watched, and the age of the account.
- When a user is identified as a teenager, YouTube applies additional protections, notably: disabling personalized ads; enabling digital-wellbeing tools such as screen-time and bedtime reminders; and limiting repeated viewing of sensitive content, such as material related to body image.
- A user incorrectly identified as a minor can verify their age with a government ID, a credit card, or a selfie.
- This initial rollout covers a small group of US users and will be extended gradually. The initiative is part of YouTube's efforts to strengthen the safety of young users online.

Mistral AI: contributing to a global environmental standard for AI https://mistral.ai/news/our-contribution-to-a-global-environmental-standard-for-ai
- Mistral AI has carried out the first complete life-cycle analysis of an AI model, in collaboration with several partners.
- The study quantifies the environmental impact of the Mistral Large 2 model in terms of greenhouse-gas emissions, water consumption, and resource depletion.
- The training phase generated 20.4 kilotonnes of CO₂ equivalent, consumed 281,000 m³ of water, and used 660 kg Sb-eq (mineral consumption).
- For a 400-token response, the marginal impact is small but not negligible: 1.14 grams of CO₂, 45 mL of water, and 0.16 mg of antimony equivalent.
- Mistral proposes three indicators for assessing this impact: the absolute impact of training, the marginal impact of inference, and the ratio of inference to total life-cycle impact.
- The company stresses the importance of choosing the model to fit the use case in order to limit the environmental footprint.
- Mistral calls for more transparency and the adoption of international standards to allow clear comparisons between models.

AI promised to make us more efficient... mostly it makes us work more https://afterburnout.co/p/ai-promised-to-make-us-more-efficient
- AI tools were supposed to automate the drudge work and free up time for strategic and creative activities. In reality, the time saved is often immediately reinvested in other tasks, creating overload.
- Users believe they are more productive with AI, but the data contradicts that impression: one study shows that developers using AI take 19% longer to complete their tasks.
- The DORA 2024 report observes an overall drop in team performance as AI use grows: -1.5% throughput and -7.2% delivery stability for a +25% increase in AI adoption.
- AI does not reduce the mental load, it shifts it: writing prompts, checking dubious results, constant adjustments... This is exhausting and cuts into real focus time. The cognitive overload creates a kind of mental debt: you don't really save time, you pay for it in another way.
- The real problem comes from our productivity culture, which pushes constant optimization even at the cost of burnout.
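The training and inference figures quoted above can be related with a back-of-the-envelope calculation (assuming the 1.14 g figure applies uniformly per 400-token response):

```latex
\frac{20.4\ \text{kt CO}_2\text{e (training)}}{1.14\ \text{g CO}_2\text{e per response}}
  = \frac{2.04 \times 10^{10}\ \text{g}}{1.14\ \text{g}}
  \approx 1.8 \times 10^{10}\ \text{responses}
```

In other words, it would take on the order of 18 billion 400-token responses before cumulative inference emissions matched the one-off training cost, which is why Mistral reports the two impacts as separate indicators.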
- Three concrete avenues: rethink productivity in terms of energy preserved rather than time saved; be selective about AI tools, based on how they feel in practice rather than the hype; and accept the J-curve: AI can be useful, but real gains require deep adjustments.
- The real productivity hack? Sometimes, slowing down to stay lucid and sustainable.

Conferences

MCP Summit Europe https://mcpdevsummit.ai/

JavaOne returns in 2026 https://inside.java/2025/08/04/javaone-returns-2026/
- JavaOne, the conference dedicated to the Java community, makes its big return to the Bay Area from March 17 to 19, 2026.
- After the success of the 2025 edition, this return continues the conference's original mission: bringing the community together to learn, collaborate and innovate.

The conference list comes from the Developers Conferences Agenda/List by Aurélie Vache and contributors:
- August 25-27, 2025: SHAKA Biarritz - Biarritz (France)
- September 5, 2025: JUG Summer Camp 2025 - La Rochelle (France)
- September 12, 2025: Agile Pays Basque 2025 - Bidart (France)
- September 15, 2025: Agile Tour Montpellier - Montpellier (France)
- September 18-19, 2025: API Platform Conference - Lille (France) & Online
- September 22-24, 2025: Kernel Recipes - Paris (France)
- September 22-27, 2025: La Mélée Numérique - Toulouse (France)
- September 23, 2025: OWASP AppSec France 2025 - Paris (France)
- September 23-24, 2025: AI Engineer Paris - Paris (France)
- September 25, 2025: Agile Game Toulouse - Toulouse (France)
- September 25-26, 2025: Paris Web 2025 - Paris (France)
- September 30-October 1, 2025: PyData Paris 2025 - Paris (France)
- October 2, 2025: Nantes Craft - Nantes (France)
- October 2-3, 2025: Volcamp - Clermont-Ferrand (France)
- October 3, 2025: DevFest Perros-Guirec 2025 - Perros-Guirec (France)
- October 6-7, 2025: Swift Connection 2025 - Paris (France)
- October 6-10, 2025: Devoxx Belgium - Antwerp (Belgium)
- October 7, 2025: BSides Mulhouse - Mulhouse (France)
- October 7-8, 2025: Agile en Seine - Issy-les-Moulineaux (France)
- October 8-10, 2025: SIG 2025 - Paris (France) & Online
- October 9, 2025: DevCon #25: quantum computing - Paris (France)
- October 9-10, 2025: Forum PHP 2025 - Marne-la-Vallée (France)
- October 9-10, 2025: EuroRust 2025 - Paris (France)
- October 16, 2025: PlatformCon25 Live Day Paris - Paris (France)
- October 16, 2025: Power 365 - 2025 - Lille (France)
- October 16-17, 2025: DevFest Nantes - Nantes (France)
- October 17, 2025: Sylius Con 2025 - Lyon (France)
- October 17, 2025: ScalaIO 2025 - Paris (France)
- October 17-19, 2025: OpenInfra Summit Europe - Paris (France)
- October 20, 2025: Codeurs en Seine - Rouen (France)
- October 23, 2025: Cloud Nord - Lille (France)
- October 30-31, 2025: Agile Tour Bordeaux 2025 - Bordeaux (France)
- October 30-31, 2025: Agile Tour Nantais 2025 - Nantes (France)
- October 30-November 2, 2025: PyConFR 2025 - Lyon (France)
- November 4-7, 2025: NewCrafts 2025 - Paris (France)
- November 5-6, 2025: Tech Show Paris - Paris (France)
- November 6, 2025: dotAI 2025 - Paris (France)
- November 6, 2025: Agile Tour Aix-Marseille 2025 - Gardanne (France)
- November 7, 2025: BDX I/O - Bordeaux (France)
- November 12-14, 2025: Devoxx Morocco - Marrakech (Morocco)
- November 13, 2025: DevFest Toulouse - Toulouse (France)
- November 15-16, 2025: Capitole du Libre - Toulouse (France)
- November 19, 2025: SREday Paris 2025 Q4 - Paris (France)
- November 19-21, 2025: Agile Grenoble - Grenoble (France)
- November 20, 2025: OVHcloud Summit - Paris (France)
- November 21, 2025: DevFest Paris 2025 - Paris (France)
- November 27, 2025: DevFest Strasbourg 2025 - Strasbourg (France)
- November 28, 2025: DevFest Lyon - Lyon (France)
- December 1-2, 2025: Tech Rocks Summit 2025 - Paris (France)
- December 4-5, 2025: Agile Tour Rennes - Rennes (France)
- December 5, 2025: DevFest Dijon 2025 - Dijon (France)
- December 9-11, 2025: APIdays Paris - Paris (France)
- December 9-11, 2025: Green IO Paris - Paris (France)
- December 10-11, 2025: Devops REX - Paris (France)
- December 10-11, 2025: Open Source Experience - Paris (France)
- December 11, 2025: Normandie.ai 2025 - Rouen (France)
- January 28-31, 2026: SnowCamp 2026 - Grenoble (France)
- February 2-6, 2026: Web Days Convention - Aix-en-Provence (France)
- February 3, 2026: Cloud Native Days France 2026 - Paris (France)
- February 12-13, 2026: Touraine Tech #26 - Tours (France)
- April 22-24, 2026: Devoxx France 2026 - Paris (France)
- April 23-25, 2026: Devoxx Greece - Athens (Greece)
- June 17, 2026: Devoxx Poland - Krakow (Poland)

Contact us
- To react to this episode, come discuss on the Google group https://groups.google.com/group/lescastcodeurs
- Contact us on X/Twitter https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com
- Record a crowdcast or a crowdquestion
- Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs
- All episodes and all the info at https://lescastcodeurs.com/

A New Untold Story
Crud - A New Untold Story: Ep. 460

Aug 7, 2025 · 54:20


Nick, Kyle, and Rudy are back in Chicago ready to get pumped up. Ads: Gametime - Download the Gametime app today and use code UNTOLD for $20 off your first purchase. Rocket Money - Cancel your unwanted subscriptions and reach your financial goals faster at https://RocketMoney.com/untold today. Chubbies - Your new wardrobe awaits! Get $10 off @chubbies with the code anus at https://www.chubbiesshorts.com/anus #chubbiespod. Betterhelp - Get 10% off your first month at https://BetterHelp.com/NEW. Courtyard.IO - Use code ANUS15 at checkout for 15% off on your first rip on $25, $50, or $100 packs. Terms apply. https://link.courtyard.io/anus You can find every episode of this show on Apple Podcasts, Spotify or YouTube. Prime Members can listen ad-free on Amazon Music. For more, visit barstool.link/anuspodcast

Ax Church
Want Unity? Filter the Crud

Jul 27, 2025 · 37:26


To live in unity we need to use the filter of the Holy Spirit to get rid of the crud.

Funfiltered
Vol. 24 - "Litany of Crud"

Jul 16, 2025 · 111:51


This week, Jordan and I (Sam) go Hindi indie with SISTER MIDNIGHT, coast through THE SURFER, attempt to escape the CLOWN IN A CORNFIELD's maize, exorcise THE RITUAL, consider the cheerful symmetry of THE PHOENICIAN SCHEME, ask LILO & STITCH howzit, prey upon PREDATOR: KILLER OF KILLERS, approach Wick's end with BALLERINA, suggest The Weeknd getaway as we plead HURRY UP TOMORROW, juke it out with SINNERS and consign THE PEOPLE'S JOKER to The Podcast's Arkham.

CheapShow
Ep 444: Beep Beep Me Phone Went

Jul 11, 2025 · 88:24


Blimey, something amazing has come in the post! After weeks of waiting, Eli and Paul get their hands on a test pressing of the CheapShow album, which would be exciting if it wasn't for the awkward threat of Mr Postie! It's not the only thing that has come in the post either! There is a USA flavoured “Price of Shite”, or rather “Cost of Crud”, this week! The question it raises is “Can Eli continue his hot streak?” and how much will that upset Paul? It's a P.O.S packed with trivia, tat and trinkets for Eli to gush over! Finally, we return to the often forgotten “Tales from the Shop Floor” segment with an email guaranteed to upset everyone… Which includes its author, Paul and Eli and (worst of all) CheapShow listeners. It's another rollercoaster of an episode! See pics/videos for this episode on our website: https://www.thecheapshow.co.uk/ep-444-beep-beep-me-phone-went SEE US LIVE: Oct 18th @ The Cheerful Earful Podcast Festival 2.30pm, London https://cheerfulearful.podlifeevents.com/festival/cheapshow---live-from-cheerful-earful-podcast-festival-18th-oct-2025-tickets Watch Our 10th Birthday YouTube Live Stream! https://youtube.com/live/Z18i8M3Eqac?feature=share And if you like us, why not support us: www.patreon.com/cheapshow If you want to get involved, email us at thecheapshow@gmail.com For all other information, please visit: www.thecheapshow.co.uk Like, Review, Share, Comment... LOVE US! MERCH Official CheapShow Magazine Shop: www.cheapmag.shop Send Us Stuff: CheapShow PO BOX 1309 Harrow HA1 9QJ

Rose Chat Podcast
CRITTERS & CRUD:  Crown Gall & Chili Thrips

Rose Chat Podcast

Play Episode Listen Later Jun 9, 2025 47:54


CRITTERS & CRUD: Crown Gall & Chili Thrips Gaye Hammond, Master Rosarian On this episode of Rose Chat, host Teresa Byington is joined by Gaye Hammond. Gaye will tackle two garden troublemakers, Crown Gall and Chili Thrips. It is hard to overstate the impact Gaye Hammond has had on the world of roses. Any time you have the opportunity to hear Gaye speak, we highly recommend you do! Roses and rose lovers have benefited greatly from the expertise and energy she gives every project … from her work with the Houston Rose Society, helping it become one of the largest and most active rose societies with a membership nearing 500, to RRD research, rose history, rose trials and more. Listen in as we benefit from the hours of research she puts into every project.

Develop Yourself
#241 - The Bootcamp Era Is Dead — And That's Great News for Developers

Develop Yourself

Play Episode Listen Later May 22, 2025 17:56 Transcription Available


Send a text and I may answer it on the next episode (I cannot reply from this service).

CheapShow
Ep 430: The Cost Of Crud

CheapShow

Play Episode Listen Later Apr 4, 2025 70:02


After two weeks away in America, Eli arrives back in one piece and ready to go with a grab bag of tat and tasty treats to foist upon his co-host. He's brought with him a few snacks to wrap their tongues around and a special USA flavoured “Price of Shite” … Sorry, “The Cost of Crud”, as Eli would prefer to call it. To make matters more pointless, he decides that points are not “p'twings” this week and instead they are “p'twangs”. Whatever they are, Gannon wants them, and he's going to try his very best. However, Paul may have gone a bit doolally whilst he had no co-host for a while, so please bear that in mind as he continuously threatens Eli with his “Chobber”. What is this “Chobber” of which he speaks? I mean, it's obvious, right? But you better listen in just to check. See pics/videos for this episode on our website: https://www.thecheapshow.co.uk/ep-430-the-cost-of-crud And if you like us, why not support us: www.patreon.com/cheapshow If you want to get involved, email us at thecheapshow@gmail.com And if you want to, follow us on Twitter/X @thecheapshowpod or @paulgannonshow & @elisnoid www.thecheapshow.co.uk Now on Threads: @cheapshowpod Like, Review, Share, Comment... LOVE US! MERCH Official CheapShow Magazine Shop: www.cheapmag.shop Send Us Stuff: CheapShow PO BOX 1309 Harrow HA1 9QJ

Irish Tech News Audio Articles
The Irish Government is set to launch the Sham Coin, Ireland's first official cryptocurrency, as it aims to become the number 1 crypto nation in the EU

Irish Tech News Audio Articles

Play Episode Listen Later Apr 1, 2025 3:45


The crypto world has had an interesting 2025 so far. It all started in late January 2025 after President Trump, a big crypto fan, took office. President Trump proclaimed his love for all things crypto before launching his meme coin earlier this year. Elon Musk, his BFF and head of Trump's new DOGE department, has been promoting various cryptocurrencies. All of this has resulted in a big uptick in people buying cryptocurrencies. Two days before President Trump was sworn in, Bitcoin hit a high of $104,536.90. This was due to the new American President being a big crypto fan, and the fact he is looking to launch an official American digital currency. All of this has helped to legitimise cryptocurrencies and prompt various countries to investigate launching their own digital currencies. Irish Tech News can exclusively reveal that the Irish Government has kept a close eye on what President Trump has been doing in the crypto world. Within days of the current Government forming, the cabinet took time out from the speaking row debacle to launch a feasibility study on Ireland having its own digital currency. The Minister of Finance, Paschal Donohoe, set the boffins in the Department of Finance's new digital hub, Crypto Revolutionary Universal Digital (CRUD), on the task. The head of CRUD, Professor Rashers O'Toole, quickly got the buckos under him working on this. Professor O'Toole told Irish Tech News: "With tariffs on the way, we had to act fast to protect the Irish economy. The only logical way of doing this was to invest in our own digital currency." Once the feasibility study showed beyond doubt that an Irish digital currency had great potential, the next step was to think up a name for the new Irish digital asset. Professor O'Toole explained what happened next. "The team in CRUD brainstormed over various names before coming up with a shortlist. The following names were on the shortlist: The Lucky Charms Coin, The Leprechaun Coin, and The Shamrock Coin.
"The shortlist was soon whittled down to the Shamrock Coin after various focus groups were held. One major point that was brought up in the focus groups was that The Shamrock Coin name was too cliched, and the name was shortened to the Sham Coin." When the Taoiseach met President Trump last month, he privately disclosed the idea of creating an Irish digital coin. President Trump was said to be enthusiastic and over the moon at what Ireland is planning to do. "This could be the second greatest thing after the Trump meme coin," is what the President is said to have told our Taoiseach Micheal Martin before he left the White House. The possibilities for crypto are endless, and the Irish Government are convinced that they are on to a winner. After various setbacks involving housing, health, bike sheds and phone pouches, the Government need to get the general public back on side. Various opinion polls over the past couple of months have not been kind to the Government, and something like this will help get them back in the general public's good books. Ministers Martin, Harris and Donohoe are not yet in the last chance saloon, but they will be hoping this gamble pays off.

Crypto Coin future plans

Plans are now in full swing to launch the Sham Coin before the summer, but before that can happen a sale price has to be agreed. The consensus at this moment in time is to make the Sham Coin price affordable, so that as many people as possible can buy it. In the next couple of months, Minister Donohoe and Professor O'Toole will hold a press conference to launch the Sham Coin, so keep an eye out for that. See more stories here.

Sigmund Fraud
Gerald Sanders with Matt Mack

Sigmund Fraud

Play Episode Listen Later Mar 31, 2025 48:40


Gerald Sanders is a man who has struggled to be known as good at anything. But excelling too much at his most recent hobby has not fully been the success story he had hoped for. And with a Sigmund Fraud like Ian Herrin as his therapist, Gerald might be doomed to violence and vigilantism simply because he's not getting the therapeutic perspective he so desperately needs. Get out of there, Gerald, RETREAT!! Ah, now you must be the opposite of all thumbs, because you've found the best part of the episode description! The part where I tell you all about the wonderful actor and improviser Matt Mack. You can catch Matt at The People's Improv Theater playing with Small Town Tall Tales and CRUD! with shows coming up Wednesday, April 23rd at 6:30pm at the Loft and Friday, May 23rd at 9pm at the Anex.

airhacks.fm podcast with adam bien
Enterprise LLM Integration: Bridging Java and AI in Business Applications

airhacks.fm podcast with adam bien

Play Episode Listen Later Mar 30, 2025 65:08


An airhacks.fm conversation with Burr Sutter (@burrsutter) about: discussion about integrating LLMs into enterprise Java applications, challenges with non-deterministic LLM outputs in deterministic code environments, limitations of chat interfaces for power users in enterprise settings, preference for form-based applications with prompts running behind the scenes, using LLMs to understand unstructured data while providing structured interfaces, maintaining existing CRUD systems while using LLMs for unstructured data like emails and support tickets, practical examples of using LLMs to generate code from business requirements, creating assistants with system messages and short user prompts, potential for embeddings to replace text prompts in the future, developer journey in learning LLM integration including prompts, tools, RAG, and agentic workflows, benefits of specialized agents over one general agent, using LLMs for code generation with limitations for complex use cases, hybrid approaches combining LLMs with human oversight, using LLMs for email routing and support case classification, potential for extracting knowledge from enterprise data sources like Confluence and SharePoint, quality assurance with LLM judges, discussion of small language models versus large ones, model distillation and fine-tuning for specific enterprise use cases, cost considerations for model training versus using off-the-shelf models with better tool invocation, prediction that models will become more efficient and run on commodity hardware in the future, focus on post-training inference and reliable results Burr Sutter on twitter: @burrsutter
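The pattern Burr describes, keeping the user-facing interface structured while the LLM deals with unstructured text (emails, support tickets), can be sketched roughly as follows. This is an illustration, not code from the episode: `call_llm` is a hypothetical stand-in for any model client, stubbed here with a trivial rule so the sketch is runnable.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model client (e.g. an HTTP call
    to an inference server). Fakes a structured JSON reply so the sketch
    runs without a model."""
    text = prompt.lower()
    category = "billing" if "invoice" in text else "technical"
    return json.dumps({"category": category, "confidence": 0.62})

def route_ticket(email_body: str) -> str:
    """Classify an unstructured email behind a structured interface:
    the caller only ever sees a queue name, never raw model output."""
    prompt = (
        "Classify this support email as 'billing' or 'technical'. "
        'Reply with JSON {"category": ..., "confidence": ...}.\n\n'
        + email_body
    )
    result = json.loads(call_llm(prompt))
    # Non-deterministic models need a deterministic escape hatch:
    # low-confidence answers fall back to a human queue.
    if result["confidence"] < 0.5:
        return "human-review"
    return result["category"]

print(route_ticket("Hi, my invoice for March is wrong."))  # billing
```

The design point is the one made in the episode: the model's raw text never leaks past the parsing and confidence check, which is one way to wrap non-deterministic output in a deterministic, form-based application.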

Thinking Elixir Podcast
245: Supply Chain Security and SBoMs

Thinking Elixir Podcast

Play Episode Listen Later Mar 18, 2025 74:36


News includes a new library called phoenix_sync for real-time sync in Postgres-backed Phoenix applications, Peter Solnica released a Text Parser for extracting structured data from text, a useful tip on finding Hex package versions locally with mix hex.info, Wasmex updated to v0.10 with WebAssembly component support, and Chrome introduces a new browser feature similar to LiveView.JS. We also talked with Alistair Woodman and Jonatan Männchen from the EEF about Jonatan's role as CISO, the Security Working Group, and their work on OpenChain compliance for supply-chain security, Software Bill of Materials (SBoMs), and what these initiatives mean for the Elixir community, and more!

Show Notes online - http://podcast.thinkingelixir.com/245

Elixir Community News

- https://gigalixir.com/thinking – Gigalixir is sponsoring the show, offering 20% off standard tier prices for a year with promo code "Thinking".
- https://github.com/electric-sql/phoenix_sync – New library called phoenix_sync providing real-time sync for Postgres-backed Phoenix applications.
- https://hexdocs.pm/phoenix_sync/readme.html – Documentation for phoenix_sync, a solution for building modern, real-time apps with local-first/sync in Elixir.
- https://github.com/josevalim/sync – José Valim's original proof of concept repo that was promptly archived.
- https://electric-sql.com/ – Electric SQL's platform that syncs subsets of Postgres data into local apps and services, allowing data to be available offline and in-sync.
- https://solnic.dev/posts/announcing-textparser-for-elixir/ – Peter Solnica released TextParser, a library for extracting interesting parts of text like hashtags and links.
- https://hexdocs.pm/text_parser/readme.html – Documentation for the Text Parser library that helps parse text into structured data.
- https://www.elixirstreams.com/tips/mix-hex-info – Elixir stream tip on using mix hex.info to find the latest package version for a Hex package locally, without needing to search on hex.pm or GitHub.
- https://github.com/phoenixframework/tailwind/blob/main/README.md#updating-from-tailwind-v3-to-v4 – Guide for upgrading Tailwind to V4 in existing Phoenix applications using Tailwind's automatic upgrade helper.
- https://gleam.run/news/hello-echo-hello-git/ – Gleam 1.9.0 release with searchability on hexdocs, Echo debug printing for improved debugging, and ability to depend on Git-hosted dependencies.
- https://d-gate.io/blog/everything-i-was-lied-to-about-node-came-true-with-elixir – Blog post discussing how promises made about NodeJS actually came true with Elixir.
- https://hexdocs.pm/wasmex/Wasmex.Components.html – Wasmex updated to v0.10 with support for WebAssembly components, enabling applications and components to work together regardless of original programming language.
- https://ashweekly.substack.com/p/ash-weekly-issue-8 – AshWeekly Issue 8 covering AshOps with mix task capabilities for CRUD operations and BeaconCMS being included in the Ash HQ installer script.
- https://developer.chrome.com/blog/command-and-commandfor – Chrome update brings new browser feature with commandfor and command attributes, similar to Phoenix LiveView.JS but native to browsers.
- https://codebeamstockholm.com/ – Code BEAM Lite announced for Stockholm on June 2, 2025 with keynote speaker Björn Gustavsson, the "B" in BEAM.
- https://alchemyconf.com/ – AlchemyConf coming up March 31-April 3 in Braga, Portugal. Use discount code THINKINGELIXIR for 10% off.
- https://www.gigcityelixir.com/ – GigCity Elixir and NervesConf on May 8-10, 2025 in Chattanooga, TN, USA.
- https://www.elixirconf.eu/ – ElixirConf EU on May 15-16, 2025 in Kraków & Virtual.
- https://goatmire.com/#tickets – Goatmire tickets are on sale now for the conference on September 10-12, 2025 in Varberg, Sweden.

Do you have some Elixir news to share? Tell us at @ThinkingElixir (https://twitter.com/ThinkingElixir) or email at show@thinkingelixir.com

Discussion Resources

- https://elixir-lang.org/blog/2025/02/26/elixir-openchain-certification/
- https://cna.erlef.org/ – EEF CVE Numbering Authority
- https://erlangforums.com/t/security-working-group-minutes/3451/22
- https://podcast.thinkingelixir.com/220 – previous interview with Alistair
- https://digital-strategy.ec.europa.eu/en/policies/cyber-resilience-act – CRA - Cyber Resilience Act
- https://www.cisa.gov/ – CISA US Government Agency
- https://www.cisa.gov/sbom – Software Bill of Materials
- https://oss-review-toolkit.org/ort/ – Desire to integrate with tooling outside the Elixir ecosystem like OSS Review Toolkit
- https://github.com/voltone/rebar3_sbom
- https://cve.mitre.org/
- https://openssf.org/projects/guac/
- https://erlef.github.io/security-wg/security_vulnerability_disclosure/ – EEF Security WG Vulnerability Disclosure Guide

Guest Information

- https://x.com/maennchen_ – Jonatan on Twitter/X
- https://bsky.app/profile/maennchen.dev – Jonatan on Bluesky
- https://github.com/maennchen/ – Jonatan on Github
- https://maennchen.dev – Jonatan's Blog
- https://www.linkedin.com/in/alistair-woodman-51934433 – Alistair Woodman on LinkedIn
- awoodman@erlef.org
- https://github.com/ahw59/ – Alistair on Github
- http://erlef.org/ – Erlang Ecosystem Foundation Website

Find us online

- Message the show - Bluesky (https://bsky.app/profile/thinkingelixir.com)
- Message the show - X (https://x.com/ThinkingElixir)
- Message the show on Fediverse - @ThinkingElixir@genserver.social (https://genserver.social/ThinkingElixir)
- Email the show - show@thinkingelixir.com
- Mark Ericksen on X - @brainlid (https://x.com/brainlid)
- Mark Ericksen on Bluesky - @brainlid.bsky.social (https://bsky.app/profile/brainlid.bsky.social)
- Mark Ericksen on Fediverse - @brainlid@genserver.social (https://genserver.social/brainlid)
- David Bernheisel on Bluesky - @david.bernheisel.com (https://bsky.app/profile/david.bernheisel.com)
- David Bernheisel on Fediverse - @dbern@genserver.social (https://genserver.social/dbern)

Nobody Listens to Paula Poundstone
Dear Crinkle Ep 30: Saving face on FB, Brush crud and Kitten advice.

Nobody Listens to Paula Poundstone

Play Episode Listen Later Mar 13, 2025 27:58


Join Paula's manager, Bonnie Burns aka Captain Crinkle, Paula Poundstone, Adam Felber, and former pod producer Toni Anita Hull for Captain Crinkle's sage advice. This week's problems: Fending off FB ‘friend' requests, a college student exhausted by her kitten, and a victim of black crud on their dishwashing brush. Learn more about your ad choices. Visit megaphone.fm/adchoices

Maino and the Mayor
The Crud & Wrastlin’

Maino and the Mayor

Play Episode Listen Later Feb 28, 2025 44:32


Bellin Sports Medicine's Mark Husen switches gears and talks about this "crud" that's going around. Whether it's Influenza A or bad colds, these sicknesses are sticking around for more than a few days. Mark makes some great points about when you should use the emergency room and when you should use urgent care. Apparently, our emergency rooms are overflowing, and it is slowing down the process of seeing patients properly. Always great advice from Mark and Bellin. Then TJ Bowles and Aaron Arsenal join from HWE - Hybrid Wrestling Entertainment. Their "Game On" event is tonight at Badger State Brewing and Jim and John are ring announcers for one of the matches! Maino and the Mayor is a part of the Civic Media radio network and airs Monday through Friday from 6-9 am on WGBW in Green Bay and on WISS in Appleton/Oshkosh. Subscribe to the podcast to be sure not to miss out on a single episode! To learn more about the show and all of the programming across the Civic Media network, head over to https://civicmedia.us/shows to see the entire broadcast lineup. Follow the show on Facebook and X to keep up with Maino and the Mayor! Guests: Mark Husen, TJ Bowles, Aaron Arsenal

Your Favorite Thing with Wells & Brandi
Goat Brains, Lotion Crud and Sexy Feet?

Your Favorite Thing with Wells & Brandi

Play Episode Listen Later Feb 26, 2025 51:23


YFTers, it's almost March - somehow we are two months into 2025 already. Anywayyy, this week, Wells brings us up to speed on his new golf-themed podcast that just launched called The Vanity Index—so break out those polos and single white leather gloves. Brandi is dealing with ITCHY boobs and she cannot stop touching them. PSA: If you see Brandi at dinner, please remind her to keep her hands off the goods. In the TV corner: Traitors keeps being straight-up Crazytown, USA, TikTok is spiraling over Danielle's dramatics (and those hats), and we are living for it. Meanwhile, The Bachelor races along at Mach 8 (or 9, or maybe 10??) in what feels like the shortest season ever, making it real hard to invest in these so-called love stories. Wells also has a White Lotus theory—what if the monkeys are actually the murderers?

Tell Me Somethin' Good!
254. The Impact of Wallowing in the Crud

Tell Me Somethin' Good!

Play Episode Listen Later Jan 28, 2025 14:19


In this episode of the Tell Me Somethin' Good podcast, Clint talks about the impact of wallowing in the crud. When life throws us challenges, we face a choice: stay stuck in negativity or rise above it. Join Clint for the first of six powerful lessons from his Tell Me Somethin' Good! book as we uncover how to take control and move toward a more positive life. Check it out!! ---------- If you like the podcast, you'll love the Tell Me Somethin' Good! book. Check it out: Tell Me Somethin' Good! - https://www.tinyurl.com/yxcsg3sh ---------- Have Clint bring his message of positivity to your organization, either in person or virtually. Check out his Speaker Video   ---------- Follow me: Twitter: https://www.twitter.com/clintswindall Instagram: https://www.instagram.com/tmsg_clintswindall/ Facebook: https://www.facebook.com/clintswindall2 YouTube: https://www.youtube.com/c/clintswindall LinkedIn: https://www.linkedin.com/in/clint-swindall-csp-9047174/ ---------- Part of the Win Make Give Podcast Network  

Soft Skills Engineering
Episode 444: Surrounded by apathetic coworkers and put it on my resume?

Soft Skills Engineering

Play Episode Listen Later Jan 20, 2025 31:10


In this episode, Dave and Jamison answer these questions: After a decade as a Senior front-end engineer in companies stuck in legacy ways of working—paying lip service to true agility while clinging to control-heavy, waterfall practices—I'm frustrated and exhausted by meetings and largely apathetic, outsourced teams who don't match my enthusiasm for product-thinking or improving things. It seems allowed and normalised everywhere I go. How can I escape this cycle of big tech, unfulfilled as an engineer, and find a team with a strong product engineering culture where I can do high-impact work with similarly empowered teams? Thank you, and sorry if this is a bit verbose! Thanks guys. Martin ‌ How do you judge your competency in a technical skill and when should you include it on your resume? Should you include skills that you haven't used in a while, skills you've only used in personal projects, or skills that you feel you only have a basic understanding of? I'm a frontend developer and I've seen some job descriptions include requirements (not nice-to-haves) like backend experience, Java, CI/CD, and UI/UX design using tools like Figma and Photoshop. I could make designs or write the backend code for a basic CRUD app, but it would take me some time, especially if I'm building things from scratch. I've seen some resumes where the writer lists a bunch of programming languages and technical skills, and I often wonder if they truly are competent in all of those skills.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Applications for the NYC AI Engineer Summit, focused on Agents at Work, are open!

When we first started Latent Space, in the lightning round we'd always ask guests: "What's your favorite AI product?". The majority would say Midjourney. The simple UI of prompt → very aesthetic image turned it into a $300M+ ARR bootstrapped business as it rode the first wave of AI image generation.

In open source land, StableDiffusion was congregating around AUTOMATIC1111 as the de-facto web UI. Unlike Midjourney, which offered some flags but was mostly prompt-driven, A1111 let users play with a lot more parameters, supported additional modalities like img2img, and allowed users to load in custom models. If you're interested in some of the SD history, you can look at our episodes with Lexica, Replicate, and Playground.

One of the people involved with that community was comfyanonymous, who was also part of the Stability team in 2023 and decided to build an alternative called ComfyUI, now one of the fastest growing open source projects in generative images, and now the preferred partner for folks like Black Forest Labs's Flux Tools on Day 1. The idea behind it was simple: "Everyone is trying to make easy to use interfaces. Let me try to make a powerful interface that's not easy to use."

Unlike its predecessors, ComfyUI does not have an input text box. Everything is based around the idea of a node: there's a text input node, a CLIP node, a checkpoint loader node, a KSampler node, a VAE node, etc. While daunting for simple image generation, the tool is amazing for more complex workflows since you can break down every step of the process, and then chain many of them together rather than manually switching between tools.
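The node-based design described above is, at heart, a dataflow graph: each node computes from the outputs of its input nodes, and shared upstream work is computed once and cached, which is also what makes restarting execution partway cheap. A toy sketch of such a runtime (hypothetical node names and a made-up graph format, not ComfyUI's actual API):

```python
# Minimal dataflow-graph runtime in the spirit of node-based UIs:
# each node names its inputs, and execution resolves dependencies
# recursively, caching results so shared upstream nodes run once.
def run(graph, node_id, cache=None):
    if cache is None:
        cache = {}
    if node_id in cache:
        return cache[node_id]
    func, inputs = graph[node_id]
    args = [run(graph, dep, cache) for dep in inputs]
    cache[node_id] = func(*args)
    return cache[node_id]

# Hypothetical toy "nodes" standing in for text encoder / sampler / decoder.
graph = {
    "prompt": (lambda: "a fox on a mountain", []),
    "encode": (lambda p: f"emb({p})", ["encode" and "prompt"]),
    "sample": (lambda e: f"latent({e})", ["encode"]),
    "decode": (lambda l: f"image({l})", ["sample"]),
}
print(run(graph, "decode"))  # image(latent(emb(a fox on a mountain)))
```

Swapping one entry in `graph` (say, a different sampler node) leaves the rest of the pipeline untouched, which is the property that makes the node interface so composable.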
You can also re-start execution halfway instead of from the beginning, which can save a lot of time when using larger models.

To give you an idea of some of the new use cases that this type of UI enables:
* Sketch something → Generate an image with SD from sketch → feed it into SD Video to animate
* Generate an image of an object → Turn into a 3D asset → Feed into interactive experiences
* Input audio → Generate audio-reactive videos

Their Examples page also includes some of the more common use cases like AnimateDiff, etc. They recently launched the Comfy Registry, an online library of different nodes that users can pull from rather than having to build everything from scratch. The project has >60,000 Github stars, and as the community grows, some of the projects that people build have gotten quite complex.

The most interesting thing about Comfy is that it's not a UI, it's a runtime. You can build full applications on top of image models simply by using Comfy. You can expose Comfy workflows as an endpoint and chain them together just like you chain a single node. We're seeing the rise of AI Engineering applied to art.

Major Tom's ComfyUI Resources from the Latent Space Discord

Major shoutouts to Major Tom on the LS Discord who is an image generation expert, who offered these pointers:

* "best thing about comfy is the fact it supports almost immediately every new thing that comes out - unlike A1111 or forge, which still don't support flux cnet for instance. It will be perfect tool when conflicting nodes will be resolved"
* AP Workflows from Alessandro Perili are a nice example of an all-in-one train-evaluate-generate system built atop Comfy
* ComfyUI YouTubers to learn from:
  * @sebastiankamph
  * @NerdyRodent
  * @OlivioSarikas
  * @sedetweiler
  * @pixaroma
* ComfyUI Nodes to check out:
  * https://github.com/kijai/ComfyUI-IC-Light
  * https://github.com/MrForExample/ComfyUI-3D-Pack
  * https://github.com/PowerHouseMan/ComfyUI-AdvancedLivePortrait
  * https://github.com/pydn/ComfyUI-to-Python-Extension
  * https://github.com/THtianhao/ComfyUI-Portrait-Maker
  * https://github.com/ssitu/ComfyUI_NestedNodeBuilder
  * https://github.com/longgui0318/comfyui-magic-clothing
  * https://github.com/atmaranto/ComfyUI-SaveAsScript
  * https://github.com/ZHO-ZHO-ZHO/ComfyUI-InstantID
  * https://github.com/AIFSH/ComfyUI-FishSpeech
  * https://github.com/coolzilj/ComfyUI-Photopea
  * https://github.com/lks-ai/anynode
* Sarav: https://www.youtube.com/@mickmumpitz/videos (applied stuff)
* Sarav: https://www.youtube.com/@latentvision (technical, but infrequent)
* look for comfyui node for https://github.com/magic-quill/MagicQuill
* "Comfy for Video" resources:
  * Kijai (https://github.com/kijai) pushing out support for Mochi, CogVideoX, AnimateDif, LivePortrait etc
  * Comfyui node support like LTX https://github.com/Lightricks/ComfyUI-LTXVideo , and HunyuanVideo
  * FloraFauna AI
* Communities: https://www.reddit.com/r/StableDiffusion/, https://www.reddit.com/r/comfyui/

Full YouTube Episode

As usual, you can find the full video episode on our YouTube (and don't forget to like and subscribe!)

Timestamps
* 00:00:04 Introduction of hosts and anonymous guest
* 00:00:35 Origins of Comfy UI and early Stable Diffusion landscape
* 00:02:58 Comfy's background and development of high-res fix
* 00:05:37 Area conditioning and compositing in image generation
* 00:07:20 Discussion on different AI image models (SD, Flux, etc.)
* 00:11:10 Closed source model APIs and community discussions on SD versions
* 00:14:41 LoRAs and textual inversion in image generation
* 00:18:43 Evaluation methods in the Comfy community
* 00:20:05 CLIP models and text encoders in image generation
* 00:23:05 Prompt weighting and negative prompting
* 00:26:22 Comfy UI's unique features and design choices
* 00:31:00 Memory management in Comfy UI
* 00:33:50 GPU market share and compatibility issues
* 00:35:40 Node design and parameter settings in Comfy UI
* 00:38:44 Custom nodes and community contributions
* 00:41:40 Video generation models and capabilities
* 00:44:47 Comfy UI's development timeline and rise to popularity
* 00:48:13 Current state of Comfy UI team and future plans
* 00:50:11 Discussion on other Comfy startups and potential text generation support

Transcript

Alessio [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Small AI.

swyx [00:00:12]: Hey everyone, we are in the Chroma Studio again, but with our first ever anonymous guest, Comfy Anonymous, welcome.

Comfy [00:00:19]: Hello.

swyx [00:00:21]: I feel like that's your full name, you just go by Comfy, right?

Comfy [00:00:24]: Yeah, well, a lot of people just call me Comfy, even when they know my real name. Hey, Comfy.

Alessio [00:00:32]: Swyx is the same. You know, not a lot of people call you Shawn.

swyx [00:00:35]: Yeah, you have a professional name, right, that people know you by, and then you have a legal name. Yeah, it's fine. How do I phrase this? I think people who are in the know, know that Comfy is like the tool for image generation and now other multimodality stuff. I would say that when I first got started with Stable Diffusion, the star of the show was Automatic 111, right? And I actually looked back at my notes from 2022-ish, like Comfy was already getting started back then, but it was kind of like the up and comer, and your main feature was the flowchart.
Can you just kind of rewind to that moment, that year and like, you know, how you looked at the landscape there and decided to start Comfy?

Comfy [00:01:10]: Yeah, I discovered Stable Diffusion in 2022, in October 2022. And, well, I kind of started playing around with it. Yes, I, and back then I was using Automatic, which was what everyone was using back then. And so I started with that because I had, it was when I started, I had no idea like how Diffusion works. I didn't know how Diffusion models work, how any of this works, so.

swyx [00:01:36]: Oh, yeah. What was your prior background as an engineer?

Comfy [00:01:39]: Just a software engineer. Yeah. Boring software engineer.

swyx [00:01:44]: But like any, any image stuff, any orchestration, distributed systems, GPUs?

Comfy [00:01:49]: No, I was doing basically nothing interesting. Crud, web development? Yeah, a lot of web development, just, yeah, some basic, maybe some basic like automation stuff. Okay. Just. Yeah, no, like, no big companies or anything.

swyx [00:02:08]: Yeah, but like already some interest in automations, probably a lot of Python.

Comfy [00:02:12]: Yeah, yeah, of course, Python. But I wasn't actually used to like the Node graph interface before I started Comfy UI. It was just, I just thought it was like, oh, like, what's the best way to represent the Diffusion process in the user interface? And then like, oh, well. Well, like, naturally, oh, this is the best way I've found. And this was like with the Node interface. So how I got started was, yeah, so basic October 2022, just like I hadn't written a line of PyTorch before that. So it's completely new. What happened was I kind of got addicted to generating images.

Alessio [00:02:58]: As we all did. Yeah.

Comfy [00:03:00]: And then I started. I started experimenting with like the high-res fixed in auto, which was for those that don't know, the high-res fix is just since the Diffusion models back then could only generate that low-resolution.
So what you would do is generate a low-resolution image, then upscale it, then refine it again. That was kind of the hack to generate high-resolution images. I really liked generating higher-resolution images, so I was experimenting with that, and I modified the code a bit: okay, what happens if I use different samplers on the second pass? I edited the code of Auto. What happens if I use a different sampler? What happens if I use different settings, a different number of steps? Because back then the high-res fix was very basic.
swyx [00:04:05]: Now there's a whole library of just the upsamplers.
Comfy [00:04:08]: I think they added a bunch of options to the high-res fix since then, but before, it was just so basic. So I wanted to go further. I wanted to try: what happens if I use a different model for the second pass? And then the Auto codebase wasn't good enough for that. It would have been harder to implement that in the Auto interface than to create my own interface. So that's when I decided to create my own. And you were doing that mostly on your own when you started, or did you already have kind of a subgroup of people? No, I was on my own, because it was just me experimenting with stuff. So yeah, that was it. I started writing the code January 1st, 2023, and then I released the first version on GitHub, January 16th, 2023. That's how things got started.
Alessio [00:05:11]: And what's the name? Comfy UI right away?
Comfy [00:05:14]: Comfy UI. The reason my name is Comfy is people thought my pictures were comfy, so I just named it Comfy UI.
swyx [00:05:27]: Is there a particular segment of the community that you targeted as users?
Like more intensive workflow artists, compared to the Automatic crowd?
Comfy [00:05:37]: This was my way of experimenting with new things, like the high-res fix thing I mentioned. In Comfy, the first thing you could easily do was just chain different models together. And one of the first times it got a bit of popularity was when I started experimenting with applying prompts to different areas of the image. I called it area conditioning, posted it on Reddit, and it got a bunch of upvotes. So I think that's when people first learned of Comfy UI.
swyx [00:06:17]: Is that mostly like fixing hands?
Comfy [00:06:19]: No, no. It was, well, it still is, kind of difficult to do things like: you have an image, and I want the mountain here and I want a fox here.
swyx [00:06:37]: Yeah. So compositing the image.
Comfy [00:06:40]: My way was very easy. When you run the diffusion process, every step you do one pass through the diffusion model: this part of the image with this prompt, that part of the image with the other prompt, and then the entire image with another prompt, and then just average everything together, every step. That was area composition, as I call it. A month later, there was a paper that came out called MultiDiffusion, which was the same thing.
Alessio [00:07:20]: Could you do area composition with different models? Because you're averaging out, you kind of need the same model.
Comfy [00:07:26]: You could, but I hadn't implemented it for different models.
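Comfy's area-composition trick can be sketched in miniature: each diffusion step, make one prediction per regional prompt plus one for the whole-image prompt, then average the predictions that cover each pixel. Everything below (the 1-D eight-pixel "image", the constant stand-in predictions, the function name) is invented for illustration and is not ComfyUI's actual code:

```python
# Rough sketch of area composition: at each diffusion step, each prompt's
# noise prediction applies only inside its region mask, the whole-image
# prompt applies everywhere, and overlapping predictions are averaged.
# A tiny 1-D list of floats stands in for a real latent tensor.

def area_composition_step(preds_with_masks, global_pred):
    """preds_with_masks: list of (prediction, mask) pairs, one per regional
    prompt; global_pred: prediction for the whole-image prompt.
    All are equal-length lists of floats; masks are 0/1 per pixel."""
    out = []
    for i in range(len(global_pred)):
        total = global_pred[i]       # the whole-image prompt always counts
        count = 1
        for pred, mask in preds_with_masks:
            if mask[i]:              # this regional prompt covers pixel i
                total += pred[i]
                count += 1
        out.append(total / count)    # average everything that applies here
    return out

# "mountain on the left, fox on the right", on an 8-pixel strip
mountain = [1.0] * 8
fox = [3.0] * 8
whole = [2.0] * 8
left = [1, 1, 1, 1, 0, 0, 0, 0]
right = [0, 0, 0, 0, 1, 1, 1, 1]
blended = area_composition_step([(mountain, left), (fox, right)], whole)
```

In a real sampler this blend would run once per step on the model's noise predictions, which is also why all the models involved need to share a latent space.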
But you can do it with different models if you want, as long as the models share the same latent space. Like, we're supposed to ring a bell every time someone says that. For example, you couldn't use SDXL and SD 1.5, because those have a different latent space. But different SD 1.5 models, you could do that.
swyx [00:07:59]: There are some models that try to work in pixel space, right?
Comfy [00:08:03]: Yeah. They're very slow, of course. That's the problem. That's the reason why Stable Diffusion actually became popular: because of the latent space.
swyx [00:08:14]: Small, and yeah. Because it used to be latent diffusion models and then they trained it up.
Comfy [00:08:19]: Yeah. Pixel diffusion models are just too slow.
swyx [00:08:25]: Have you ever tried to talk to Stability, the latent diffusion guys, Robin Rombach, that crew?
Comfy [00:08:32]: Well, I used to work at Stability.
swyx [00:08:34]: Oh, I actually didn't know.
Comfy [00:08:35]: I used to work at Stability. I got hired in June 2023.
swyx [00:08:42]: Ah, that's the part of the story I didn't know about. Okay.
Comfy [00:08:46]: The reason I was hired is because they were doing SDXL at the time, and SDXL, I don't know if you remember, was a base model and then a refiner model. Basically they wanted to experiment with chaining them together. And then they saw: oh, we can use this to do that. Well, let's hire that guy.
swyx [00:09:10]: But they didn't pursue it for SD3. What do you mean? Like the SDXL approach. Yeah.
Comfy [00:09:16]: The reason for that approach was that they basically had two models and they wanted to publish both of them. So they trained one on lower timesteps, which was the refiner model.
The first one was trained normally, and then during their tests they realized: oh, if we string these models together, the quality increases. So let's publish that. It worked. But right now, I don't think many people actually use the refiner anymore, even though it is actually a full diffusion model. You can use it on its own, and it's going to generate images. People have mostly forgotten about it.
Alessio [00:10:05]: Can we talk about models a little bit? Stable Diffusion is obviously the most known. I know Flux has gotten a lot of traction. Are there any underrated models people should use more, or what's the state of the union?
Comfy [00:10:17]: Well, the latest state of the art, at least for images, is Flux. There's also SD3.5. SD3.5 is two models: there's a small one, 2.5B, and there's the bigger one, 8B. So it's smaller than Flux, and it's more creative in a way, but Flux is the best. People should give SD3.5 a try because it's different. I won't say it's better; well, it's better for some specific use cases. If you want to make something more creative, maybe SD3.5. If you want to make something more consistent, Flux is probably better.
swyx [00:11:06]: Do you ever consider supporting the closed-source model APIs?
Comfy [00:11:10]: Well, we do support them as custom nodes. We actually have some official custom nodes from different providers. Ideogram.
swyx [00:11:20]: Yeah. I guess DALL-E would have one.
Comfy [00:11:23]: I'm just not the person that handles that.
swyx [00:11:28]: Sure. Quick question on SD. There's a lot of community discussion about the transition from SD1.5 to SD2, and then SD2 to SD3. Are people still very loyal to the previous generations of SDs?
Comfy [00:11:41]: Uh, yeah.
SD1.5 still has a lot of users.
swyx [00:11:46]: The last based model.
Comfy [00:11:49]: Yeah. SD2 was mostly ignored. It wasn't a big enough improvement over the previous one.
swyx [00:11:58]: Okay. So SD1.5, SD3, Flux, and whatever else. SDXL.
Comfy [00:12:03]: That's the main one. Stable Cascade. That was a good model. But the problem with that one is that SD3 was announced one week after.
swyx [00:12:16]: It was like a weird release. What was it like inside of Stability, actually? I mean, the statute of limitations has expired, and management has moved, so it's easier to talk about now.
Comfy [00:12:27]: Inside Stability, that model was actually ready like three months before, but it got stuck in red teaming. If that model had been released when the authors wanted, it would probably have gotten very popular, since it's a step up from SDXL. But it got all of its momentum stolen by the SD3 announcement. So people kind of didn't develop anything on top of it, even though it was a good model. It was mostly ignored for some reason.
swyx [00:13:07]: I think the naming matters as well. It seemed like a branch off of the main tree of development.
Comfy [00:13:15]: Well, it was different researchers that did it. A very good model. It's the Würstchen authors. I don't know if I'm pronouncing it correctly.
swyx [00:13:28]: I actually met them in Vienna.
Comfy [00:13:30]: They worked at Stability for a bit, and they left right after the Cascade release.
swyx [00:13:35]: This is Dustin, right? No. Dustin's SD3. Yeah.
Comfy [00:13:38]: Dustin is SDXL. That's Pablo and Dome. I think I'm pronouncing his name correctly.
That's very good.
swyx [00:13:51]: It seems like the community moves very quickly. When there's a new model out, they just drop whatever the current one is and all move wholesale over. They don't really stay to explore the full capabilities. If Stable Cascade was that good, they would have A/B tested a bit more. Instead it's like: okay, SD3 is out, let's go. You know?
Comfy [00:14:11]: Well, I find the opposite, actually. The community only jumps on a new model when there's a significant improvement. If there's only an incremental improvement, which is what most of these models are going to have, especially if they stay at the same parameter count, you're not going to get a massive improvement unless something big changes.
swyx [00:14:41]: And how are they evaluating these improvements? Because it's a whole chain of comfy workflows. How does one part of the chain actually affect the whole process?
Comfy [00:14:52]: Are you talking on the model side specifically?
swyx [00:14:54]: Model-specific, right? But once you have your whole workflow based on a model, it's very hard to move.
Comfy [00:15:01]: Not really. Well, it depends on the specific workflow.
swyx [00:15:09]: So I do a lot of text and image.
Comfy [00:15:12]: When you change models, most workflows are kind of going to keep working. It's just that you might have to completely change your prompt.
swyx [00:15:24]: Well, then maybe the question is really about evals. What does the comfy community do for evals?
Comfy [00:15:31]: Well, they don't really do that. It's more like: oh, I think this image is nice.
swyx [00:15:38]: They just subscribe to Fofr AI and see what Fofr is doing.
Comfy [00:15:43]: They just generate images and see: oh, this one's nice. I don't see anyone really doing the more scientific, rigorous checking, at least on the Comfy side; that's more on the model side specifically. But there are a lot of vibes too, because it is artistic. You can create a very good model that doesn't generate nice images, because most images on the internet are ugly. If you say: oh, I have the best model, it's gigantic, it's super smart, I trained it on all the images on the internet, then the images are not going to look good.
Alessio [00:16:42]: Yeah.
Comfy [00:16:43]: They're going to be very consistent. But it's not going to be the look that people are expecting from a model.
swyx [00:16:54]: Can we talk about LoRAs? We talked about models; the next step is probably LoRAs. Actually, I'm kind of curious how LoRAs entered the toolset of the image community, because the LoRA paper was 2021, and there were other methods, like textual inversion, that were popular at the early SD stage.
Comfy [00:17:13]: I can explain the difference between them. Textual inversion: basically what you're doing is training a vector. With Stable Diffusion, you have the diffusion model and you have the text encoder, and you're training a vector that you're going to pass to the text encoder. It's basically like you're training a new word.
swyx [00:17:37]: It's a little bit like representation engineering now. Yeah.
Comfy [00:17:40]: Yeah, basically. If you know how the text encoder works: you take the words of your prompt, you convert those into tokens with the tokenizer, and those are converted into vectors. Each token represents a different vector, so each word represents a vector. Depending on your words, that's the list of vectors that gets passed to the text encoder, which is just a stack of attention; basically it's very close to an LLM architecture. So what you're doing is just training a new vector. You're saying: well, I have all these images, and I want to know which word represents them. You train this vector, and then when you use this vector, it hopefully generates something similar to your images.
swyx [00:18:43]: I would say it's surprisingly sample-efficient in picking up the concept that you're trying to train it on.
Comfy [00:18:48]: Well, people have kind of stopped doing that, even though back when I was at Stability, we actually did train some textual inversions internally on T5 XXL, and they worked pretty well. But for some reason people don't use them. And, this is something I'd have to test, but if you train a textual inversion on T5 XXL, it might also work with all the other models that use T5 XXL. Same thing with the textual inversions that were trained for SD 1.5: they also kind of work on SDXL, because SDXL has two text encoders, and one of them is the same as the SD 1.5 CLIP-L. They don't work as strongly, because they're only applied to one of the text encoders. And the same thing for SD3: SD3 has three text encoders. So it works.
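The textual-inversion setup Comfy describes can be shown in a toy form: the tokenizer and model stay frozen, and the only trainable parameter is one new embedding vector for a made-up token. The embedding table, the `<my-style>` token, and the squared-distance "loss" below are all stand-ins; the real objective would be the diffusion training loss run through the frozen model:

```python
# Toy sketch of textual inversion: everything is frozen except one new
# embedding vector for an invented token. The tiny 2-dim embeddings and
# the distance-to-target "loss" are illustrative stand-ins only.

embeddings = {                      # frozen token-embedding table (fake)
    "a": [0.1, 0.2],
    "photo": [0.5, 0.1],
    "<my-style>": [0.0, 0.0],       # the one new, trainable vector
}

target = [0.9, -0.4]                # stand-in for "what matches my images"

def loss_grad(vec):
    # gradient of squared distance to the target; a stand-in for the real
    # diffusion loss gradient, which needs the full frozen model
    return [2 * (v - t) for v, t in zip(vec, target)]

lr = 0.1
for _ in range(200):                # plain gradient descent on one vector
    g = loss_grad(embeddings["<my-style>"])
    embeddings["<my-style>"] = [
        v - lr * gi for v, gi in zip(embeddings["<my-style>"], g)
    ]

# after training, "a photo <my-style>" carries the learned concept:
prompt_vectors = [embeddings[tok] for tok in "a photo <my-style>".split()]
```

The trained vector is just one more entry in the prompt's vector list, which is why the same file can be dropped into any model whose text encoder shares that embedding space.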
You can still use your SD 1.5 textual inversion on SD3, but it's just a lot weaker, because now there are three text encoders, so it gets even more diluted.
swyx [00:20:05]: Do people experiment a lot on the CLIP side? There's SigLIP, there's BLIP. Do people experiment a lot with those?
Comfy [00:20:12]: You can't really replace it.
swyx [00:20:14]: Because they're trained together, right?
Comfy [00:20:15]: They're trained together, so you can't. Well, what I've seen people experimenting with is Long CLIP. Basically, someone fine-tuned the CLIP model to accept longer prompts.
swyx [00:20:27]: Oh, it's kind of like long-context fine-tuning.
Comfy [00:20:31]: It's actually supported in core Comfy.
swyx [00:20:35]: How long is long?
Comfy [00:20:36]: Regular CLIP is 77 tokens. Long CLIP is 256. But if you use Stable Diffusion 1.5, you've probably noticed it still works if you use prompts longer than 77 tokens. That's because the hack is to split your whole big prompt into chunks of 77. Let's say you give it a massive text, like the Bible or something: it would split it up into chunks of 77, pass each one through the CLIP, and then just concatenate everything together at the end. It's not ideal, but it actually works.
swyx [00:21:26]: The positioning of the words really matters then, right? This is why order matters in prompts.
Comfy [00:21:33]: Yeah. It works, but it's not ideal. But it's what people expect: if someone gives a huge prompt, they expect at least some of the concepts at the end to be present in the image. But usually when they give long prompts, they don't expect detail, I think.
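The over-77-token hack just described can be sketched in a few lines: split the token sequence into CLIP-sized chunks, encode each chunk independently, and concatenate the results. The `encode_chunk` function here is a stand-in for the real CLIP text encoder, which would return one embedding per token:

```python
# Sketch of the long-prompt hack: chunk the tokens into groups of at most
# 77, encode each group separately, and concatenate. A token at position i
# only "sees" the other tokens in its own chunk.

CLIP_MAX_TOKENS = 77

def encode_chunk(tokens):
    # stand-in: a real CLIP encoder returns one embedding vector per token
    return [f"emb({t})" for t in tokens]

def encode_long_prompt(tokens):
    out = []
    for start in range(0, len(tokens), CLIP_MAX_TOKENS):
        chunk = tokens[start:start + CLIP_MAX_TOKENS]  # at most 77 tokens
        out.extend(encode_chunk(chunk))                # encoded independently
    return out

tokens = [f"tok{i}" for i in range(180)]   # a 180-token prompt
cond = encode_long_prompt(tokens)          # chunks of 77, 77, and 26
```

The chunk boundaries are why word position matters: a concept that straddles the 77-token line is never attended to as a whole.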
So that's why it works very well.
swyx [00:21:58]: And while we're on this topic: prompt weighting, negative prompting. All sort of similar parts of this layer of the stack.
Comfy [00:22:05]: The hack for that, which works on CLIP: for SD 1.5, prompt weighting works well because CLIP-L is not a very deep model, so you have a very high correlation between the input token vector and the output token; the concepts are very closely linked. That means you can interpolate the vector. The way Comfy UI does it: you have the CLIP output for the empty prompt, and then you have the one for your prompt, and it interpolates between them, depending on your weights.
Comfy [00:23:07]: So that's how it does prompt weighting. But this stops working the deeper your text encoder is. On T5 XXL itself, it doesn't work at all.
swyx [00:23:20]: Wow. Is that a problem for people? I mean, because I'm used to just moving up numbers.
Comfy [00:23:25]: Probably not.
swyx [00:23:26]: So you just use words to describe it, right? Because it's a bigger language model.
Comfy [00:23:30]: Yeah. So honestly it might be, but I haven't seen many complaints on Flux that it's not working, so I guess people can sort of get around it with language.
swyx [00:23:46]: And then coming back to LoRAs: now the popular way to customize models is LoRAs. And I saw you also support LoCon and LoHa, which I've never heard of before.
Comfy [00:23:56]: There are a bunch of those. What the LoRA essentially is: okay, you have your model, and you want to fine-tune it.
What you could do is fine-tune the entire thing, but that's a bit heavy. So to speed things up and make things less heavy, what you can do is just fine-tune some smaller weights: basically two low-rank matrices that, when you multiply them together, represent the difference between the trained weights and your base weights. Training those two smaller matrices is a lot less heavy.
Alessio [00:24:45]: And they're portable, so you can share them. It's easier. And also smaller.
Comfy [00:24:49]: Yeah, that's how LoRAs work. And when inferencing, you can inference with them pretty efficiently. The way Comfy UI does it: when you use a LoRA, it just applies it straight onto the weights, so there's only a small delay before the sampling, when it applies the weights, and then it's the same speed as before. So for inference, it's not that bad. And all the LoRA types, LoHa, LoCon, everything: those are just different ways of representing that. You can call it kind of like compression, even though it's not really compression; it's just different ways of representing the difference on the weights. What's the best way to represent that difference? There's the basic LoRA, which is just: let's multiply these two matrices together. And then there are all the other ones, which are all different algorithms.
Alessio [00:25:57]: So that's LoRAs. Let's talk about what Comfy UI actually is. I think most people have heard of it, some people might have seen screenshots, and I think fewer people have built very complex workflows. So when you started, Automatic was the super simple way. What were some of the choices that you made?
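Stepping back to the LoRA mechanics Comfy walked through: a LoRA stores two low-rank matrices whose product is the weight delta, and at load time that delta can be merged straight onto the base weights so sampling runs at full speed. A minimal sketch with plain nested lists standing in for real tensors (the shapes and numbers are invented for illustration):

```python
# Sketch of merging a LoRA: delta = B @ A is low-rank (rank r), and the
# merged weight is W + strength * delta. Tiny 2x2 matrices stand in for
# real layer weights.

def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def apply_lora(W, A, B, strength=1.0):
    """W: (out, in) base weight; B: (out, r); A: (r, in); r << out, in."""
    delta = matmul(B, A)                      # low-rank update B @ A
    return [[w + strength * d for w, d in zip(wr, dr)]
            for wr, dr in zip(W, delta)]

W = [[1.0, 0.0], [0.0, 1.0]]   # 2x2 base weight (identity, for clarity)
B = [[1.0], [2.0]]             # rank r = 1: store 4 numbers, not 4 per cell
A = [[0.5, 0.5]]
W_merged = apply_lora(W, A, B)
```

Because the merge happens once before sampling, inference afterwards costs exactly the same as running the base model; variants like LoCon and LoHa only change how the delta is factored.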
So, the node workflow: is there anything else that stands out as a unique take on how to do image generation workflows?
Comfy [00:26:22]: Well, I feel like back then everyone was trying to make an easy-to-use interface.
swyx [00:26:32]: Let's make a hard-to-use interface.
Comfy [00:26:37]: Like, I don't need to do that; everyone else is doing it. So let me try to make a powerful interface that's not easy to use.
swyx [00:26:52]: So there's a sort of node execution engine. And it actually has this really good list of features of things you prioritized, right? Re-executing from any part of the workflow that was changed, an asynchronous queue system, smart memory management. All this seems like a lot of engineering.
Comfy [00:27:12]: There's a lot of engineering in the backend to make things work, because I was always focused on making things work locally very well, because I was using it locally. So there's a lot of thought and work in getting everything to run as well as possible. ComfyUI is actually more of a backend. At least, well, now the frontend is getting a lot more development, but before, I was pretty much only focused on the backend.
swyx [00:27:50]: So v0.1 was only August this year.
Comfy [00:27:54]: With the new frontend. Before that, there was no versioning.
swyx [00:27:57]: And so what was the big rewrite for the 0.1 and then the 1.0?
Comfy [00:28:02]: Well, that's more on the frontend side. When I first wrote it, I said: okay, I can do web development, but I don't like doing it. What's the easiest way I can slap a node interface on this?
And then I found this JavaScript library.
swyx [00:28:26]: Litegraph?
Comfy [00:28:27]: Litegraph.
swyx [00:28:28]: Usually people will go for something like React Flow for a flow builder.
Comfy [00:28:31]: But that seemed too complicated, and I didn't really want to spend time developing the frontend. So I'm like: oh, litegraph has the whole node interface. Okay, let me just plug that into my backend.
swyx [00:28:49]: I feel like if Streamlit or Gradio offered something like that, you would have used Streamlit or Gradio, because it's Python.
Comfy [00:28:54]: Yeah.
Comfy [00:29:14]: ...[frontend] logic and your backend logic, and just sticks them together.
swyx [00:29:20]: It's supposed to be easy for you guys. If you're a Python main... you know, I'm a JS main, right? If you're a Python main, it's supposed to be easy.
Comfy [00:29:26]: Yeah, it's easy, but it makes your whole software a huge mess.
swyx [00:29:30]: I see. So you're mixing concerns instead of separating concerns?
Comfy [00:29:34]: Well, frontend and backend should be well separated, with a defined API. That's how you're supposed to do it. Smart people disagree. It just sticks everything together, which makes it easy to end up with a huge mess. And also there are a lot of issues with Gradio. It's very good if all you want to do is slap a quick interface on your ML project to show it off; that's what it's made for. There's no problem using it like that: oh, I have my code, I just want a quick interface on it. That's perfect, use Gradio. But if you want to make something that's real software, that will last a long time and will be easy to maintain, then I would avoid it.
swyx [00:30:32]: So your criticism of Streamlit and Gradio is the same?
I mean, those are the same criticisms.
Comfy [00:30:37]: Yeah. Streamlit I haven't used as much; I just looked at it a bit.
swyx [00:30:43]: Similar philosophy.
Comfy [00:30:44]: Yeah, it's similar. It just seems to me like: okay, for quick AI demos, it's perfect.
swyx [00:30:51]: Going back to the core tech: asynchronous queues, flow re-execution, smart memory management. Anything that you were very proud of, or that was very hard to figure out?
Comfy [00:31:00]: The thing that's the biggest pain in the ass is probably the memory management.
swyx [00:31:05]: Were you just paging models in and out?
Comfy [00:31:08]: Before, it was just: load the model, completely unload it. That works well when your models are small, but if your models are big, let's say someone has a 4090 and the model size is 10 gigabytes, that can take a few seconds to load and unload, so you want to try to keep things in GPU memory as much as possible. What Comfy UI does right now is it tries to estimate: okay, you're going to sample this model, it's going to take probably this amount of memory; let's unload enough already-loaded models to free that amount of memory on the GPU, and then just execute it. But there's a fine line, because you try to remove the least amount of models that are already loaded. And one other problem is the NVIDIA driver on Windows: by default (there's an option to disable that feature), if you overflow your GPU memory, the driver is going to automatically start paging to RAM. The problem with that is it makes everything extremely slow.
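The estimate-and-evict behavior just described can be sketched as: before sampling, unload only as many already-loaded models as needed to fit the next one. The model names, sizes, and least-recently-used ordering below are invented for illustration; real code would measure actual tensor sizes and driver-reported free VRAM:

```python
# Rough sketch of estimate-and-evict memory management: free the fewest
# already-loaded models needed to fit the model about to be sampled.
# Sizes are plain numbers in GB, purely illustrative.

def free_memory_for(needed, free_vram, loaded):
    """loaded: list of (name, size) pairs, least-recently-used first.
    Returns (evicted_names, new_free_vram); mutates `loaded` in place."""
    evicted = []
    for name, size in list(loaded):
        if free_vram >= needed:
            break                    # stop as soon as the new model fits
        loaded.remove((name, size))  # unload the coldest model first
        free_vram += size
        evicted.append(name)
    return evicted, free_vram

loaded = [("vae", 2.0), ("clip", 4.0), ("old_unet", 10.0)]  # LRU first
evicted, free = free_memory_for(needed=12.0, free_vram=6.0, loaded=loaded)
# frees "vae" then "clip"; "old_unet" can stay resident
```

The "fine line" Comfy mentions is in the `needed` estimate: guess too low and the Windows driver silently pages to RAM, guess too high and you evict models you will immediately have to reload.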
So when you see people complaining "oh, this model works, but it starts slowing down a lot," that's probably what's happening. So you basically have to try to use as much memory as possible, but not too much, or else things start slowing down or people get out-of-memory errors, and you have to find that line where the driver on Windows starts paging and stuff. And the problem with PyTorch is that it's high-level; you don't have that much fine-grained control over specific memory stuff, so you kind of have to leave the memory freeing to Python and PyTorch, which can be annoying sometimes.
swyx [00:33:32]: So, as a maintainer of this project, you're designing for a very wide surface area of compute. You even support CPUs.
Comfy [00:33:42]: Yeah, well, PyTorch supports CPUs, so that's not hard to support.
swyx [00:33:50]: First of all, is there a market share estimate? Is it like 70% NVIDIA, 30% AMD, and then miscellaneous on Apple Silicon, or whatever?
Comfy [00:33:59]: For Comfy? Yeah, I don't know the market share.
swyx [00:34:03]: Can you guess?
Comfy [00:34:04]: I think it's mostly NVIDIA. Because AMD works horribly on Windows. On Linux, it works fine. It's slower than the price-equivalent NVIDIA GPU, but it works: you can use it, you can generate images, everything works. On Windows, you might have a hard time. So that's the problem, and most people who bought AMD probably use Windows. They probably aren't going to switch to Linux.
So until AMD actually ports their ROCm to Windows properly, and there's a PyTorch on top of it (I think they're in the process of doing that), but until they get a good PyTorch ROCm build that works on Windows, they're going to have a hard time.
Alessio [00:35:06]: We've got to get George on it. Yeah. Well, he's trying to get Lisa Su to do it, but... Let's talk a bit about the node design. So, unlike all the other text-to-image tools, you have a very deep interface: you have a separate node for the CLIP text encode, a separate node for the KSampler, all these nodes. Going back to making it easy versus making it hard: how much do people actually play with all the settings? How do you guide people, like: hey, this is actually going to be very impactful, versus: this is maybe less impactful, but we still want to expose it to you?
Comfy [00:35:40]: Well, I try to expose everything. But for things like the samplers, for example, there are four different sampler nodes, which go from easiest to most advanced. If you go with the easy node, the regular sampler node, you have just the basic settings. But if you use the advanced one, the custom advanced node, you'll see you have different nodes.
Alessio [00:36:19]: I'm looking it up now. What are the most impactful parameters that you use? You can have more, but which ones really make a difference?
Comfy [00:36:30]: They all do. They all have their own effects. For example, steps: usually you want the steps to be as low as possible.
But if you're optimizing your workflow, you lower the steps until the images start deteriorating too much. That's the number of steps you're running the diffusion process for, so if you want things to be faster, lower is better. Then CFG: you can kind of see that as the contrast of the image. If your image looks too burnt, you can lower the CFG. CFG is how strongly the negative versus positive prompt is weighted, because when you sample a diffusion model, it's basically the positive prediction minus the negative prediction.
swyx [00:37:32]: Contrastive loss. Yeah.
Comfy [00:37:34]: It's positive minus negative, and the CFG is the multiplier.
Alessio [00:37:41]: What are good resources to understand what the parameters do? I think most people start with Automatic and then they move over, and it's like: steps, CFG, sampler name, scheduler, denoise. Reddit?
Comfy [00:37:53]: Honestly, it's something you should just try out yourself. You don't necessarily need to know how it works to know what it does. Because even if you know that CFG is positive minus negative prompt, the only thing that tells you is that if it's 1.0, the negative prompt isn't applied; it also means sampling is two times faster. But other than that, you should really just see what it does to the images yourself, and you'll probably get a more intuitive understanding of what these things do.
Alessio [00:38:34]: Any other nodes or things you want to shout out? I know AnimateDiff and the IP-Adapter are some of the most popular ones. What else comes to mind?
Comfy [00:38:44]: Not nodes, but what I like is when people make things that use ComfyUI as their backend.
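Backing up to the CFG arithmetic above: the "positive minus negative, CFG is the multiplier" description is the standard classifier-free guidance formula, which can be written out directly. The tiny float lists here stand in for the real noise-prediction tensors:

```python
# Classifier-free guidance as described: final prediction = negative
# prediction plus CFG times (positive minus negative). At cfg == 1.0 the
# negative prediction cancels out entirely.

def cfg_combine(pos_pred, neg_pred, cfg):
    return [n + cfg * (p - n) for p, n in zip(pos_pred, neg_pred)]

pos = [1.0, 2.0]   # stand-in for the positive-prompt noise prediction
neg = [0.5, 1.0]   # stand-in for the negative-prompt noise prediction

low = cfg_combine(pos, neg, 1.0)   # identical to pos: negative has no effect
high = cfg_combine(pos, neg, 7.0)  # pushed hard away from the negative
```

The `cfg == 1.0` case also explains the "two times faster" remark: since the result equals the positive prediction alone, a sampler can skip the negative-prompt forward pass entirely.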
Like, there's a plugin for Krita that uses ComfyUI as its backend. So you can use, like, all the models that work in Comfy in Krita. I think I've only tried it once, but I know a lot of people use it, and it's probably really nice, so. Alessio [00:39:15]: What's the craziest node that people have built, like, the most complicated? Comfy [00:39:21]: Craziest node? Like, yeah. I know some people have made, like, video games in Comfy, stuff like that. So, like, I remember, yeah, I think it was last year, someone made, like, Wolfenstein 3D in Comfy. Of course. And then one of the inputs was, oh, you can generate a texture, and then it changes the texture in the game. So you can plug it into, like, the workflow. And if you look, there's a lot of crazy things people do, so. Yeah. Alessio [00:39:59]: And now there's, like, a node registry that people can use to, like, download nodes. Yeah. Comfy [00:40:04]: Like, well, there's always been the, like, the ComfyUI manager. Yeah. But we're trying to make this more, like, I don't know, official, like, yeah, with the node registry. Because before the node registry, like, okay, how did your custom node get into ComfyUI manager? That's the guy running it who, like, every day he searched GitHub for new custom nodes and added them manually to his custom node manager. So we're trying to make it, like, less effort for him, basically. Yeah. Alessio [00:40:40]: Yeah. But I was looking, I mean, there's, like, a YouTube download node. This is almost like, you know, a data pipeline more than, like, an image generation thing at this point. It's, like, you can get data in, you can, like, apply filters to it, you can generate data out. Comfy [00:40:54]: Yeah. You can do a lot of different things. Yeah. So I'm thinking, I think what I did is I made it easy to make custom nodes. So I think that helped a lot.
I think that helped a lot for, like, the ecosystem, because it is very easy to just make a node. So, yeah, a bit too easy sometimes. Then we have the issue where there's a lot of custom node packs which share similar nodes. But, well, that's, yeah, something we're trying to solve by maybe bringing some of the functionality into the core. Yeah. Yeah. Yeah. Alessio [00:41:36]: And then there's, like, video. People can do video generation. Yeah. Comfy [00:41:40]: Video, that's, well, the first video model was, like, Stable Video Diffusion, which was, yeah, exactly last year, I think. Like, one year ago. But that wasn't a true video model. So it was... swyx [00:41:55]: It was, like, moving images? Yeah. Comfy [00:41:57]: It generated video. What I mean by that is it's, like, it's still 2D latents. So what they did is they took SD2, and then they added some temporal attention to it, and then trained it on videos and all. So it's kind of like AnimateDiff, like, same idea, basically. Why I say it's not a true video model is that you still have, like, the 2D latents. Like, a true video model, like Mochi, for example, would have 3D latents. Mm-hmm. Alessio [00:42:32]: Which means you can, like, move through the space, basically. It's the difference. You're not just kind of, like, reorienting. Yeah. Comfy [00:42:39]: And it's also, well, it's also because you have a temporal VAE. Mm-hmm. Also, like, Mochi has a temporal VAE that compresses in, like, the temporal direction, also. So that's something you don't have with, like, yeah, AnimateDiff and Stable Video Diffusion. They only, like, compress spatially, not temporally. Mm-hmm. Right. So, yeah. That's why I call those, like, true video models. There's, yeah, there's actually a few of them, but the one I've implemented in Comfy is Mochi, because that seems to be the best one so far. Yeah. swyx [00:43:15]: We had AJ come and speak at the stable diffusion meetup.
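The 2D-latent versus 3D-latent distinction Comfy draws comes down to whether the time axis is compressed at all. A tiny shape-only sketch, with made-up tensor sizes and an assumed 4x temporal compression factor, not taken from any real model:

```python
import numpy as np

frames, channels, height, width = 16, 4, 32, 32  # hypothetical sizes

# SVD/AnimateDiff style: a stack of per-frame 2D latents.
# The time axis is untouched: one latent per input frame.
latents_2d = np.zeros((frames, channels, height, width))

# Mochi style: a temporal VAE also compresses along time
# (4x assumed here), so the latent holds fewer "latent frames"
# than there are input frames.
temporal_factor = 4
latents_3d = np.zeros((frames // temporal_factor, channels, height, width))

print(latents_2d.shape)  # (16, 4, 32, 32)
print(latents_3d.shape)  # (4, 4, 32, 32)
```

Only the first dimension differs, but that compression is what makes the second case a "true" video latent in the sense used above.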
The other open one I think I've seen is CogVideo. Yeah. Comfy [00:43:21]: CogVideo. Yeah. That one's, yeah, it also seems decent, but, yeah. Chinese, so we don't use it. No, it's fine. It's just, yeah. It's just that it's not the only one. There's also a few others, which I... swyx [00:43:36]: The rest are, like, closed source, right? Like, Kling. Yeah. Comfy [00:43:39]: Closed source, there's a bunch of them. But I mean, open, I've seen a few of them. Like, I can't remember their names, but there's CogVideo, the big one. Then there's also a few of them that released at the same time. There's one that released at the same time as SD 3.5, same day, which is why I don't remember the name. swyx [00:44:02]: We should have a release schedule so we don't conflict on each of these things. Yeah. Comfy [00:44:06]: I think SD 3.5 and Mochi released on the same day. So everything else was kind of drowned, completely drowned out. So for some reason, lots of people picked that day to release their stuff. Comfy [00:44:21]: Yeah. Which is, well, a shame for those. And I think OmniGen also released the same day, which also seems interesting. Yeah. Yeah. Alessio [00:44:30]: What's Comfy? So you are Comfy. And then there's, like, comfy.org. I know we do a lot of things for, like, Nous Research, and those guys also have kind of, like, a more open source thing going on. How do you work? Like you mentioned, you mostly work on, like, the core piece of it. And then what... Comfy [00:44:47]: Maybe I should fill it in, because I, yeah, I feel like maybe, yeah, I only explained part of the story. Right. Yeah. Maybe I should explain the rest. So yeah. Basically, January... January 2023, January 16, 2023, that's when ComfyUI was first released to the public. Then, yeah, I did a Reddit post about the area composition thing somewhere in, I don't remember exactly, maybe end of January, beginning of February.
And then someone, a YouTuber, Olivio, made a video about ComfyUI in March 2023. I think that's when there was a real burst of attention. And by that time, I was continuing to develop it, and people were starting to use it more, which unfortunately meant that, well, I had first written it to do, like, experiments, but then my time to do experiments started going down, because people were actually starting to use it. Like, I said, well, yeah, time to add all these features and stuff. Yeah, and then I got hired by Stability in June 2023. Basically, yeah, they hired me because they wanted SDXL support. So I got SDXL working very well with ComfyUI, because they were experimenting with ComfyUI in-house. Actually, how the SDXL release worked is they released, for some reason, like, they released the code first, but they didn't release the model checkpoint. So they released the code. And then, well, since the code was out, I implemented support for it in ComfyUI. And then the checkpoints were basically early access. People had to sign up, and they mostly allowed people with edu emails. Like, if you had an edu email, they gave you access, basically, to the SDXL 0.9. And, well, that leaked. Right. Of course, because of course it's going to leak if you do that. Well, the only way people could easily use it was with Comfy. So, yeah, people started using it. And then I fixed a few of the issues people had. So then the big 1.0 release happened. And, well, ComfyUI was the only way a lot of people could actually run it on their computers. Because, like, Automatic was so, like, inefficient and bad that most people couldn't actually... it just wouldn't work. Because he did a quick implementation. So people were forced.
To use ComfyUI, and that's how it became popular, because people had no choice. swyx [00:47:55]: The growth hack. Comfy [00:47:56]: Yeah. swyx [00:47:56]: Yeah. Comfy [00:47:57]: Like, everywhere, like, people who didn't have the 4090, who had just regular GPUs, they didn't have a choice. Alessio [00:48:05]: So yeah, I got a 4070. So think of me. And so today, what's, is there, like, a core Comfy team or? Comfy [00:48:13]: Uh, yeah, well, right now, um, yeah, we are hiring. Okay. Actually, so right now the core, like, the core core itself, it's me. Uh, but the reason is, like, all the focus has been mostly on the front end right now, because that's the thing that's been neglected for a long time. So most of the focus right now is all on the front end, but we will soon get, uh, more people to, like, help me with the actual backend stuff. Yeah. So, no, I'm not going to say a hundred percent, because once we have our V1 release, which will be, like, the packaged ComfyUI with the nice interface and easy to install on Windows and hopefully Mac, uh, yeah. Once we have that, we're going to have lots of stuff to do on the backend side and also the front end side, but, uh... Alessio [00:49:14]: What's the release? I'm on the wait list. What's the timing? Comfy [00:49:18]: Uh, soon. Uh, soon. Yeah, I don't want to promise a release date. We do have a release date we're targeting, but I'm not sure if it's public. Yeah, and we're still going to continue doing the open source, making ComfyUI the best way to run stable diffusion models. At least on the open source side, it's going to be the best way to run models locally. But we will have a few things to make money from it, like cloud inference or that type of thing. And maybe some things for some enterprises. swyx [00:50:08]: I mean, a few questions on that.
How do you feel about the other comfy startups? Comfy [00:50:11]: I mean, I think it's great. They're using your name. Yeah, well, it's better they use comfy than they use something else. Yeah, that's true. It's fine. We're going to try not to... We don't want to... We want people to use comfy. Like I said, it's better that people use comfy than something else. So as long as they use comfy, I think it helps the ecosystem. Because more people, even if they don't contribute directly, the fact that they are using comfy means that people are more likely to join the ecosystem. So, yeah. swyx [00:50:57]: And then would you ever do text? Comfy [00:50:59]: Yeah, well, you can already do text with some custom nodes. So, yeah, it's something we like. Yeah, it's something I've wanted to eventually add to core, but it's, like, not a very high priority. Because a lot of people use text for prompt enhancement and other things like that. So, yeah, it's just that my focus has always been on diffusion models. Yeah, unless some text diffusion model comes out. swyx [00:51:30]: Yeah, David Holz is investing a lot in text diffusion. Comfy [00:51:34]: Yeah, well, if a good one comes out, then we'll probably implement it, since it fits with the whole... swyx [00:51:39]: Yeah, I mean, I imagine it's going to be closed source at Midjourney. Yeah. Comfy [00:51:43]: Well, if an open one comes out, then I'll probably implement it. Alessio [00:51:54]: Cool, Comfy. Thanks so much for coming on. This was fun. Bye. Get full access to Latent Space at www.latent.space/subscribe

2 Characters and a Clown
Shawarmageddon...

2 Characters and a Clown

Play Episode Listen Later Dec 20, 2024 104:37


Send us a text. The boys dine on Chicken Shawarma. Jimmy and his dog have the Crud, Johnny likes the practical side of TV, and RJ still thinks Wraps are not food. Support the show: https://2charactersandaclown.com https://www.buymeacoffee.com/2CandaC

Acasa La Maruta
CĂTĂLIN SCĂRLĂTESCU: POVESTEA CRUDĂ DESPRE SUCCES | PODCAST #180

Acasa La Maruta

Play Episode Listen Later Nov 11, 2024 127:07


CĂTĂLIN SCĂRLĂTESCU: THE RAW STORY ABOUT SUCCESS | PODCAST #180

Igreja Missionária Evangélica Maranata
Como ouvirão se não há quem pregue - Diác. Patrícia Crud

Igreja Missionária Evangélica Maranata

Play Episode Listen Later Nov 10, 2024 54:47


How will they hear without someone to preach - Deacon Patrícia Crud, by Igreja Missionária Evangélica Maranata de Campo Grande. To learn more about Maranata: Instagram: https://www.instagram.com/imemaranata/ Facebook: https://www.facebook.com/imemaranata Site: https://www.igrejamaranata.com.br/ YouTube channel: https://www.youtube.com/channel/UCa1jcJx-DIDqu_gknjlWOrQ God bless you

AgostoAllElite Podcast
AgostoAllElite Podcast Ep 182: Crown Crud

AgostoAllElite Podcast

Play Episode Listen Later Nov 3, 2024 6:21


AgostoAllElite Podcast Ep 182: Crown Crud. In this week's episode of AgostoAllElite Podcast, we dive into our reviews of Raw, Smackdown, AEW Dynamite and AEW WrestleDream 2024. Let's break it all down on this week's episode of The AgostoAllElite Podcast.

Essential Omnivore Podcast
335. What About When I'm Sick?

Essential Omnivore Podcast

Play Episode Listen Later Oct 31, 2024 14:08


I've got the CRUD. How do I lose weight now? Get full show notes and more information here: https://www.luciahawley.com/podcast/335 FREEBIE: Lose 5 pounds in 5 weeks FREEBIE: Use my free calorie calculator FREEBIE: Over 25 balanced and satisfying snack combos you need in your back pocket Connect with Lucia on Instagram

AIN'T THAT SWELL
Blitzed: Our Sal and Cal go Hmaaaad in Torched Porch Chang Crud &... CALLING ALL TRADIES! ATS Live at Urban Surf is Go!

AIN'T THAT SWELL

Play Episode Listen Later Oct 8, 2024 42:32


Up - the financial revolution that's got young Aussies' backs presents... CALLING ALL TRADIES: ATS and Surfers For Climate are throwing a monstro shindig at URBNSURF Sydney Nov 28 for all you hmaaaadcunts on the tools. Details are in the ep and tickets available here... https://surfersforclimate.org.au/products/tradeup-cup?variant=50272841105718&utm_source=Klaviyo&utm_medium=campaign&_kx=QZ1T6QQgY6Sz3LsYV989X4cPJhHKzpqk3o2pCZV5DweUzhoZc8p9lbV3D2RPU3k7.XRy5je Also the Portugal Chang was peeeeyeeeewer torture but at least Our Sal and Our Cal got right up em. Stace Galbraith joins Deadly to discuss. Get on the Up Swellians!!! Download the 'Up' app and sign up in minutes. Use code 'UTFS' for $10 on signup (do it all from the comfort of your phone, no need to go to the bank or any of that bullsh*t). T&C's @ up.com.au See omnystudio.com/listener for privacy information.

All JavaScript Podcasts by Devchat.tv
Optimizing SQL and ORM Practices for High-Performance Applications - JSJ 650

All JavaScript Podcasts by Devchat.tv

Play Episode Listen Later Sep 24, 2024 91:10


In today's episode, Charles, Steve, and AJ are joined by back-end engineer and team lead at Homebound, Stephen Haberman. We delve into the fascinating world of sqlc and its approach to managing SQL queries with dedicated SQL files, delivering benefits such as reduced typing errors and pre-deployment checks. Stephen also walks us through the advantages and limitations of ORMs versus query builders like Prisma and Drizzle, sharing insights into Joist ORM's unique philosophy and simplified CRUD operations. They explore the intricacies of Domain-Driven Design (DDD), its emphasis on ubiquitous language, and how it shapes business logic and storage management. AJ contributes by discussing the potential of sqlc and Slonik for dynamic query building. Additionally, they discuss Stephen's innovative work with graphile-worker and Grafast, highlighting the performance improvements in GraphQL backends. Whether you're intrigued by the technicalities of ORMs, the evolution of database tools, or just love a good anecdote, this episode packed with technical insights and lively discussions is one you won't want to miss. Join them on this journey into the world of database management and development! Socials: LinkedIn: Stephen Haberman. Picks: AJ - TypeScript to JSDoc; AJ - MySQL to TypeScript; AJ - sqlc; AJ - Slonik (Node + Postgres); AJ - SwiftUI Essentials; AJ - Introduction to SwiftUI; AJ - Trump, but not saying dumb things; Charles - Biblios | Board Game; Charles - FreeStyle Libre 3 System | Continuous Glucose Monitoring; Stephen - Grafast. Become a supporter of this podcast: https://www.spreaker.com/podcast/javascript-jabber--6102064/support.
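The sqlc-style approach described above (queries kept in dedicated SQL files, wrapped in generated functions that are checked before deployment) can be sketched by hand. This Python/sqlite3 sketch only illustrates the shape of the pattern; sqlc itself generates typed Go code from .sql files, and the table and query here are hypothetical:

```python
import sqlite3

# In sqlc this SQL would live in its own .sql file and the wrapper
# below would be generated and checked ahead of time; here both are
# hand-written to show the shape of the pattern.
GET_USER_BY_ID = "SELECT id, name FROM users WHERE id = ?"

def get_user_by_id(conn, user_id):
    """Thin wrapper around one named, parameterized query."""
    row = conn.execute(GET_USER_BY_ID, (user_id,)).fetchone()
    return None if row is None else {"id": row[0], "name": row[1]}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (id, name) VALUES (1, 'Ada')")

print(get_user_by_id(conn, 1))  # {'id': 1, 'name': 'Ada'}
```

Keeping the SQL in one named constant per query is what makes the "check it before you ship it" step possible, whether the check is a code generator like sqlc or a simple lint pass.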

Tony & Dwight
Coffee Cup Crud. Two-Legged Teddy. Boomer Logic. Bats Baseball. Rude Cities & Snake Farms.

Tony & Dwight

Play Episode Listen Later Sep 19, 2024 33:17 Transcription Available


Scrum Master Toolbox Podcast
CTO Series: A Masterclass in Product, Process, and Leadership | Alexander Grosse

Scrum Master Toolbox Podcast

Play Episode Listen Later Sep 2, 2024 42:41


CTO Series: Alexander Grosse Shares A Masterclass in Product, Process, and Leadership In this special BONUS episode from our CTO Series, we learn about leadership and product from Alexander Grosse, a seasoned professional with an impressive track record at companies like Nokia, SoundCloud, and BCG Digital Ventures. Currently serving as the Chief Product and Technology Officer (CPTO) at Veo, Alexander shares insights into his unique role, offering practical advice on how to structure teams, integrate product and engineering functions, and scale effectively during periods of rapid growth. Whether you're in the tech space or simply interested in effective leadership, this episode provides invaluable lessons on aligning product and tech for optimal results. The Evolution of the CPTO Role "Bridging the gap between product and technology is crucial to avoid conflicts and ensure everyone is aligned on the same objectives." Alexander begins by explaining the evolution of his role from an engineer to a CPTO, highlighting the challenges he faced in organizations where product and engineering were separated into silos. He discusses how this separation often led to conflicting incentives, ultimately stifling progress and innovation. By combining these roles under one umbrella, Alexander has been able to foster a more cohesive team, incentivized by the same goals and working in unison toward shared outcomes. "In a startup with limited runway, it's essential to have one team incentivized by the same numbers. This alignment dissolves conflicts and directs energy towards building the product." Leadership Through Interdisciplinary Collaboration "Moving from cross-disciplinary to interdisciplinary teams was a game changer—it shifted the mindset from individual ownership to shared responsibility." Reflecting on his career, Alexander shares a pivotal moment that redefined his approach to leadership—embracing interdisciplinary collaboration. 
Influenced by the principles in the book Change by Design, he emphasizes the importance of teams owning everything together rather than just their specific domains. This shift not only improved the innovation process at BCG Digital Ventures but also laid the foundation for his current leadership style at Veo. "Interdisciplinary teams own everything together, creating a culture where innovation thrives and leadership becomes a collective effort." Crafting a Unified Product and Tech Strategy "Shared ownership across disciplines ensures that both business and tech priorities are aligned, resulting in a more agile and responsive organization." As the CPTO, Alexander is responsible for both product strategy and the technology roadmap. He describes his process of fostering shared ownership among his teams, loosely following Agile principles to maintain flexibility and responsiveness. By working in short cycles, conducting regular retrospectives, and aligning product, engineering, and design, he has created a cohesive strategy that drives both innovation and business results. "Shared ownership and agile processes enable us to stay responsive to both product and business needs, ensuring that we're always moving in the right direction." Overcoming Challenges in Hardware and Software Integration "Combining commercial needs with incremental software releases requires a careful balancing act, especially in hardware-driven businesses." Alexander discusses the unique challenges of launching hardware products while maintaining agile software development cycles. He underscores the importance of risk management and cross-functional collaboration, particularly between supply chain, commercial, and product development teams. With a dedicated program manager for hardware releases and strategic use of firmware updates, Alexander navigates the complexities of integrating hardware and software in a fast-paced environment. 
"In hardware-dependent businesses, mastering risk management and aligning cross-functional teams are key to a successful launch." Fostering Collaboration Between Tech and Business Units "Aligning incentives between tech and commercial teams is crucial—what gets people to buy a product doesn't always keep them engaged." Collaboration between tech and business units is vital for success, and Alexander shares several strategies to enhance this partnership. From organizing workshops to aligning incentives, he emphasizes the need for close cooperation between departments. One of his key practices is making product teams accountable for churn, ensuring that they are directly tied to business outcomes and closely aligned with sales metrics. "Make your product team accountable for churn—it's the closest business number to sales metrics and fosters true alignment with the commercial side." The Impact of AI on the Future of Product Development "Tasks with sufficient training data will be taken over by AI, reshaping how we approach software and product development." Alexander shares his thoughts on the rise of AI and its potential to transform software and product development. He predicts that AI will take over routine tasks, like CRUD operations, allowing developers to focus on more complex and creative aspects of product development. He also highlights the current use of AI in querying data sets, pointing to its growing influence in everyday business operations. "AI is set to take over routine development tasks, pushing us to focus on innovation and higher-level problem-solving." Recommended Reading for CPTOs The book Change by Design has been a significant influence on Alexander's approach to his role as CPTO. He recommends it as essential reading for anyone looking to bridge the gap between product and technology, offering a framework for creating interdisciplinary teams and fostering innovation. 
"Change by Design was the missing puzzle piece for creating a 'one team' approach—it's a must-read for any CPTO." Recovering from failure, or from difficult moments, is a critical skill for Scrum Masters. Not only for us, but also because the teams and stakeholders we work with will face these moments too! We need inspiring stories to help them, and ourselves! The Bungsu Story is an inspiring story by Marcus Hammarberg which shows how a coach can help organizations recover even from the most disastrous situations! Learn how Marcus helped The Bungsu, a hospital in Indonesia, recover from near-bankruptcy, twice! Using Lean and Agile methods to rebuild an organization and a team! An inspiring story you need to know about! Buy the book on Amazon: The Bungsu Story - How Lean and Kanban Saved a Small Hospital in Indonesia. Twice. and Can Help You Reshape Work in Your Company. About Alexander: Alexander is a seasoned professional with a rich background in major corporations like Nokia, as well as experience with startups, and being a corporate venture builder at BCG Digital Ventures. Currently the Chief Product and Technology Officer at Veo, he invests in early-stage startups and offers expert advice, leveraging his experience as co-author of the O'Reilly book 'Scaling Teams'. You can link with Alexander Grosse on LinkedIn.

WSKY The Bob Rose Show
Bugged by chicken Dems, car wash tech, changing lib border policies, drive-by shootings, cell-phone disconnect and summer crud

WSKY The Bob Rose Show

Play Episode Listen Later Aug 19, 2024 7:24


The Monday “What's Buggin' You” segment on the Bob Rose Show 8-19-24

Crud Talk
"KEEP THE CHANGE" - Crud Talk E182

Crud Talk

Play Episode Listen Later Aug 12, 2024 19:53


THERE'S LOTS of CHANGES going on. No doubt…there are many of you going through BIG CHANGES. I don't know if you're like me…I want things to change…but when things actually CHANGE…it's nothing like I thought it would be. And I get ticked! So…How do YOU feel about CHANGE? How do you handle it? Do you love it…or do you hate it? You might say, “This is NOT…good. I don't like what's happening. I don't like this change in my life. This is NOT what I wanted.” I get it! Every change is an education. And how we handle the changes that come, can make us bitter...or better. We're talking about it on tonight's episode..."KEEP THE CHANGE" - Crud Talk E182 www.sonyabrunner.com Sonya Brunner - Fifty Shades of Grace (Facebook page) @sonya4him - Instagram @SONYAcruddealer - TikTok Sonya (Fifty Shades of Grace, host of Crud Talk podcast) Brunner - Linkedin  Got Crud? Yep. We ALL do. It's how we deal with it (or not) that makes all the difference. Crud Talk is a podcast that helps people deal with their crud. Are you stuck? I can help. *Message me for package details. Do YOU have an event? Do you need to encourage your audience with a POWERFUL story? I'd love to be a part of it! *Message me for speaking availability! #crudtalk #podcast #dealwithyourcrud #fiftyshadesofgrace #hope #love #peace #howtodealwithchange #sexualabuse #JesusChangesEverything #faith #healing #speaker #truth #bible #relationships #forgiveness #Jesus #sin #childabuse #sonyabrunner #changes #lifechanges #howtohandlechange

CrabDiving Radio Podcast
CrabDiving – Fri 080924 – Trump’s Foul Coalition of Crud Is Fracturing

CrabDiving Radio Podcast

Play Episode Listen Later Aug 10, 2024 119:31


Check out CrabDiving radio Friday!

Crud Talk
"STRIPPED" - Crud Talk E181

Crud Talk

Play Episode Listen Later Jul 1, 2024 20:00


What do you think about when you hear the word "STRIPPED"? We hear the word STRIPPED and most of us think something negative or bad is happening. But when Jesus has STRIPPED something away from us…it's ALWAYS for our good. ALWAYS. The reason that it's so painful…is because we've been holding on to it…so tightly. But what if the greatest blessing is coming...when that OLD thing...which no longer has a purpose...is stripped away and now we are FREE to step into a NEW purpose? Could it be that the greatest blessing IS being STRIPPED? We're talking about it tonight on Crud Talk. "STRIPPED" - Crud Talk E181 www.sonyabrunner.com Sonya Brunner - Fifty Shades of Grace (Facebook page) @sonya4him - Instagram @SONYAcruddealer - TikTok Sonya (Fifty Shades of Grace, host of Crud Talk podcast) Brunner - Linkedin  Got Crud? Yep. We ALL do. It's how we deal with it (or not) that makes all the difference. Crud Talk is a podcast that helps people deal with their crud. Are you stuck? I can help. *Message me for package details. Do YOU have an event? Do you need to encourage your audience with a POWERFUL story? I'd love to be a part of it! *Message me for speaking availability! #crudtalk #podcast #dealwithyourcrud #fiftyshadesofgrace #hope #love #askingforhelp #sexualabuse #JesusChangesEverything #faith #healing #speaker #truth #bible #relationships #forgiveness #Jesus #sin #childabuse #sonyabrunner #stripped #anger

Booze and B-Movies
S1E29: Steele Justice/They Drew First Crud

Booze and B-Movies

Play Episode Listen Later Jun 9, 2024 35:03


John Steele comes home after a variety of conflicts, both war-related and interpersonal, during the Vietnam War. Emotional damage that doesn't really come across due to leading man Martin Kove's resting confused face has led Steele to lose his wife, his career and his sobriety. When his BFF and former Army partner, Lee, is killed by an old war-era nemesis, "Steele is Back" and unleashed on a Vietnamese crime syndicate. A million movie tropes mash up to create this Rambo/Commando/Die Hard combo, except with Martin Kove rather than Stallone/Schwarzenegger/Willis. Bodies by Jake. Lots of guns. A cool Gatling gun truck. A rubber coral snake on a fishing line. Steele Justice final grade: Steve calls this a fine movie, inoffensive and watchable by anyone who enjoys 80's era action. Pretty standard Reagan-era USA! USA! hero action. 3.41/5.0 Brandon says Steele Justice is a perfectly OK movie to watch. Basically the same storyline and production quality of every Stallone/Schwarzenegger/Norris movie of the 1980s. Unfortunately, Martin Kove can't carry a movie like those other guys. 3.00/5 Cocktail of the Week: Steele Justice 1 1/2 oz Tequila 1/2 oz Light Rum 2 oz Fresh-Squeezed Grapefruit 1 oz Fresh-Squeezed Lime Juice 1 oz Honey Syrup 1/2 oz Egg White Combine all ingredients in a cocktail shaker without ice. Dry shake a few times. Add ice and shake again. Double strain (all about that mouth feel) over ice in a rocks glass. Garnish with a grapefruit wedge. Cocktail Grade: A great way to enjoy a grapefruit without a partner. A tasty sour that packs a little punch. One of them is good. Two are really good.
4.69/5 ------------------ Contact us with feedback or cocktail/movie recommendations to: boozeandbmovies@gmail.com X: @boozeandbmovies Instagram: @boozeandbmovies Threads: @boozeandbmovies www.facebook.com/boozeandbmovies --- Send in a voice message: https://podcasters.spotify.com/pod/show/boozeandbmovies/message Support this podcast: https://podcasters.spotify.com/pod/show/boozeandbmovies/support

Let's Unpack That
Episode 40: Corporate Crud

Let's Unpack That

Play Episode Listen Later Jun 6, 2024 58:20


Let's circle back on that. Let's put a pin in it. That's above my pay grade. It's important we move the needle. What do these phrases all have in common? Well, they are corporate America's way of saying "fuck you" - so that's exactly what we're going to do. That's right, today we're diving into corporate America - something you may be surprised to learn all six of us are a part of. No, we are not full-time podcasters but instead corporate political cogs on the hellscape corporate America chain. Bad bosses - we know them, catty colleagues - we are them, hopeless HR - have you met Maggie? We're going to get to it all. Hosted by Erica (@erica_megan), Kirk (@kirk.charles), Andrew (@andrwjn) Produced by Maggie (@maggiirosee)

Podcastul de istorie
#150 - Macrinus

Podcastul de istorie

Play Episode Listen Later May 27, 2024 109:07


With Caracalla dead in Syria, the empire is in crisis again. With no designated heir and no clear claimant to the throne, the empire is once again in grave danger of another civil war. Two things are different this time, though. First: most of the troops are concentrated in Syria for the campaigns Caracalla was planning. Second: Caracalla eliminated any potential rival or claimant when he purged Geta's camp. Cruel?

Dicey Situations
"Where Are My Friends?"

Dicey Situations

Play Episode Listen Later May 21, 2024 54:33


This week on Dicey Situations, Battle of the Bands kicks off, Crud debuts his improved wares and a mysterious new band gives a performance the gang will never forget.This episode contains violence, profanity and crude humor.Follow us on X(Twitter) @diceypodFollow us on Instagram https://www.instagram.com/diceysituationspod/We also have a subreddit https://www.reddit.com/r/Dicey_Situations/DM - Ryan StemmlerTooluuga Ploopploopleen - Harout KhodaverdianGlebnor Glekilak Ploopploopleen - Mark DePippoTheo Justice - Richard PowerKosmo Kilroy - Eric PowerProduced and Edited by Chris Romagna, Ryan Stemmler & Mark DePippoMusic by Eric Power and Monument StudiosContact us diceysituationspod@gmail.com

Soft Skills Engineering
Episode 409: Fancy title to IC and CRUD is crud

Soft Skills Engineering

Play Episode Listen Later May 20, 2024 28:27


In this episode, Dave and Jamison answer these questions: Listener Shayne asks, I'm about to start a new gig after 8+ years at a company. I was an early employee at the current company and have accumulated a lot of responsibility, influence, and a fancy title. I'll be an IC at my new company (also very early stage) but the most senior engineer second only to the CTO. What are some tips for this transition? How can I onboard well? How do I live up to my “seniorness” in the midst of learning a new code base, tech stack, and product sector? I managed to stay close to the code despite adding managerial responsibilities in my current role, so I'm not worried about the IC work. I really want to make sure that I gel with my new teammates, that I'm able to add valuable contributions ASAP, and that folks learn that they can rely on my judgement when making tradeoffs in the code or the product. Halp! I got into software development to become a game developer. Once I became a software developer, I found out I really enjoyed the work. My wife and I joined a game jam (lasting 10 days) over the weekend. I very quickly realized how passionate and excited I get about game development again! But this has led to a problem - I would much rather be doing that. I find myself treating moving buttons around or making another CRUD endpoint as a means to an end now, thinking about how I'd much rather be creating exciting experiences. How can I handle this? Quitting my job to pursue a pipe dream just isn't feasible.

Crud Talk
"Easter CRUD Hunt" - E174

Crud Talk

Play Episode Listen Later Apr 1, 2024 21:57


What does Easter mean to you? The Easter Bunny... the fancy dresses, family dinners... and Easter Egg Hunts. EGGS can be in all different forms. Hard boiled. Raw. Rotten. All have a shell. All have a yolk. All are fragile. EVERY EGG MUST BREAK in order to be USED. Same with us. In other words… until we're willing to break open those areas that we've been holding on to… OUR CRUD… we remain STUCK and stagnant… useless. Maybe even ROTTEN… and without our God-given purpose. What if we've REJECTED the very meaning of this day? Got some EASTER crud??? Let's go on an EASTER CRUD HUNT.
Easter CRUD Hunt - E174
www.sonyabrunner.com
Sonya Brunner - Fifty Shades of Grace (Facebook page)
@sonya4him - Instagram
@cruddealer - TikTok
Sonya (The CRUD Dealer) Brunner - LinkedIn
Got Crud? Yep. We ALL do. It's how we deal with it (or not) that makes all the difference. Crud Talk is a podcast that helps people deal with their crud. Are you stuck? I can help. *Message me for package details.
#crudtalk #podcast #dealwithyourcrud #Easter #fiftyshadesofgrace #hope #love #rejection #JesusChangesEverything #lifecoach #faith #speaker #truth #bible #relationships #forgiveness #Jesus #easter #Easteregghunt #sin #salvation

Crud Talk
"TRASH TALK" - Crud Talk E173

Crud Talk

Play Episode Listen Later Mar 18, 2024 21:14


Are you a TRASH TALKER? How do we speak to people? Do we talk trash or are we kind and considerate? Words can be a POWERFUL weapon. I'm a person who has NO problem with words. I try to make my words to others positive and encouraging. But what about how I talk to MYSELF? I never thought of myself as a TRASH TALKER. But I think I might be THE worst. What about you? We're talking about it on this episode of Crud Talk.
"TRASH TALK" - Crud Talk E173
www.sonyabrunner.com
Sonya Brunner - Fifty Shades of Grace (Facebook page)
@sonya4him - Instagram
@cruddealer - TikTok
Sonya (The CRUD Dealer) Brunner - LinkedIn
Got Crud? Yep. We ALL do. It's how we deal with it (or not) that makes all the difference. Crud Talk is a podcast that helps people deal with their crud. Are you stuck? I can help. *Message me for package details.
#crudtalk #podcast #dealwithyourcrud #trashtalk #fiftyshadesofgrace #hope #love #negative #selflove #JesusChangesEverything #lifecoach #faith #speaker #truth #bible #relationships #positive #encouragment #selfhatred #forgiveness #Jesus

Hillbilly Nerd Talk
HNT 266: Old People Rant about the Creeping Crud. Vaccines. Social Media. Gila Monster. Neurolink.

Hillbilly Nerd Talk

Play Episode Listen Later Feb 28, 2024 62:31


Old People Rant about the Creeping Crud. Vaccines. Social Media. Gila Monster. Neurolink.

Firebreathing Kittens
Trailer for You Are What You Eat

Firebreathing Kittens

Play Episode Listen Later Feb 14, 2024 2:03


Join Bart, Reg, and Crud as they crab walk their way around an island to save people from an impending tsunami! You Are What You Eat is an actual play podcast of Everything's Going to Crab! 

Firebreathing Kittens
You Are What You Eat (Everything's Going To Crab)

Firebreathing Kittens

Play Episode Listen Later Feb 14, 2024 156:34


Join Bart, Reg, and Crud as they crab walk their way around an island to save people from an impending tsunami! You Are What You Eat is an actual play podcast of Everything's Going to Crab! 

Tell Me Somethin' Good!
202. It's Time to Move Past the Crud

Tell Me Somethin' Good!

Play Episode Listen Later Jan 30, 2024 15:37


In this episode of the Tell Me Somethin' Good podcast, Clint Swindall talks about something we all experience — crud. No matter how hard we try, we'll all face life's challenges. Clint challenges listeners to consider how they handle it and encourages them to do two things to get past the crud. Check it out! ---------- If you like the podcast, you'll love the Tell Me Somethin' Good! book. Check it out: Tell Me Somethin' Good! - https://www.tinyurl.com/yxcsg3sh ---------- Have Clint bring his message of positivity to your organization, either in person or virtually. Check out his Speaker Video   ---------- Follow me: Twitter: https://www.twitter.com/clintswindall Instagram: https://www.instagram.com/tmsg_clintswindall/ Facebook: https://www.facebook.com/clintswindall2 YouTube: https://www.youtube.com/c/clintswindall LinkedIn: https://www.linkedin.com/in/clint-swindall-csp-9047174/ ---------- Part of the Win Make Give Podcast Network

Sore Losers
The Convention Crud Has Hit The Head Coaches

Sore Losers

Play Episode Listen Later Jan 19, 2024 47:23 Transcription Available


Coaches Convention 3 refuses to end as someone brought some germs and now the head coaches are starting to feel it. Michael Jordan didn't sit out when he had the flu, so Ray and Lunchbox decided to suck it up for a banger. Pitts has come out of hiding to talk about Patrick Mahomes and the Kansas City Chiefs! Can the Bills finally beat the Chiefs? Are the Texans a team of destiny? Can Baker Mayfield and the Boys end Detroit's magical season? Not sure what else we talked about. See omnystudio.com/listener for privacy information.

Sunday Morning Coming Down
Episode 195: Sunday Morning Coming Down: NEW Year's, the Crud, and Dog Puzzles.

Sunday Morning Coming Down

Play Episode Listen Later Dec 31, 2023 33:17


Riggins' life has taken a dramatic turn for the worse, John breaks down his one word for 2024, and once again provides his genuine appreciation for the farmer's blow. 

Screaming in the Cloud
How MongoDB is Paving The Way for Frictionless Innovation with Peder Ulander

Screaming in the Cloud

Play Episode Listen Later Nov 30, 2023 36:08


Peder Ulander, Chief Marketing & Strategy Officer at MongoDB, joins Corey on Screaming in the Cloud to discuss how MongoDB is paving the way for innovation. Corey and Peder discuss how Peder made the decision to go from working at Amazon to MongoDB, and Peder explains how MongoDB is seeking to differentiate itself by making it easier for developers to innovate without friction. Peder also describes why he feels databases are more ubiquitous than people realize, and what it truly takes to win the hearts and minds of developers.

About Peder

Peder Ulander, the maestro of marketing mayhem at MongoDB, juggles strategies like a tech wizard on caffeine. As the Chief Marketing & Strategy Officer, he battles buzzwords, slays jargon dragons, and tends to developers with a wink. From pioneering Amazon's cloud heyday as Director of Enterprise and Developer Solutions Marketing to leading the brand behind cloud.com's insurgency, Peder's built a legacy as the swashbuckler of software, leaving a trail of market disruptions one vibrant outfit at a time. Peder is the Scarlett Johansson of tech marketing — always looking forward, always picking the edgy roles that drive what's next in technology.

Links Referenced: MongoDB: https://mongodb.com

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. This promoted guest episode of Screaming in the Cloud is brought to us by my friends and yours at MongoDB, and into my veritable verbal grist mill, they have sent Peder Ulander, their Chief Marketing Officer. Peder, an absolute pleasure to talk to you again.

Peder: Always good to see you, Corey.
Thanks for having me.

Corey: So, once upon a time, you worked in marketing over at AWS, and then you transitioned off to Mongo to, again, work in marketing. Imagine that. Almost like there's a narrative arc to your career. A lot of things change when you change companies, but before we dive into things, I just want to call out that you're a bit of an aberration in that every single person that I have spoken to who has worked within your org has nothing but good things to say about you, which means you are incredibly effective at silencing dissent. Good work.

Peder: Or it just shows that I'm a good marketer and make sure that we paint the right picture that the world needs to see.

Corey: Exactly. “Do you have any proof of you being a great person to work for?” “No, just word of mouth,” and everyone, “Ah, that's how marketing works.”

Peder: Exactly. See, I'm glad you picked up somewhere.

Corey: So, let's dive into that a little bit. Why would you leave AWS to go work at Mongo? Again, my usual snark and sarcasm would come up with a half dozen different answers, each more offensive than the last. Let's be serious for a second. At AWS, there's an incredibly powerful engine that drives so much stuff, and the breadth is enormous. MongoDB, despite an increasingly broad catalog of offerings, is nowhere near that level of just universal applicability. Your product strategy is not a Post-It note with the word ‘yes' written on it. There are things that you do across the board, but they all revolve around databases.

Peder: Yeah. So, going back prior to MongoDB, I think you know, at AWS, I was across a number of different things, from the developer ecosystem, to the enterprise transformation, to the open-source work, et cetera, et cetera.
And being privy to how customers were adopting technology to change their business or change the experiences that they were delivering to their customers or increase the value of the applications that they built, you know, there was a common thread of something that fundamentally needed to change. And I like to go back to just the evolution of tech in that sense. We could talk about going from physical on-prem systems to now we're distributed in the cloud. You could talk about application constructs that started as big fat monolithic apps that moved to virtual, then microservices, and now functions.

Or you think about networking, we've gone from fixed wire line, to network edge, and cellular, and what have you. All of the tech stack has changed with the exception of one layer, and that's the data layer. And I think for the last 20 years, what's been in place has worked okay, but we're now meeting this new level of scale, this new level of reach, where the old systems are not what's going to be what the new systems are built on, or the new experiences are built on. And as I was approached by MongoDB, I kind of sat back and said, “You know, I'm super happy at AWS. I love the learning, I love the people, I love the space I was in, but if I were to put my crystal ball together”—here's a Bezos statement of looking around corners—“The data space is probably one of the biggest spaces ripe for disruption and opportunity, and I think Mongo is in an incredible position to go take advantage of that.”

Corey: I mean, there's an easy number of jokes to make about AmazonBasics MongoDB, which is my disparaging name for their DocumentDB first-party offering. And for a time, it really felt like AWS's perspective toward its partners was one of outright hostility, if not antagonism. But that narrative no longer holds true in 2023. There's been a definite shift.
And to be direct, part of the reason that I believe that is the things you have said both personally and professionally in your role as CMO of Mongo that has caused me to reevaluate this because despite all of your faults—a counted list of which I can provide you after the show—

Peder: [laugh].

Corey: You do not say things that you do not believe to be true.

Peder: Correct.

Corey: So, something has changed. What is it?

Peder: So, I think there's an element of coopetition, right? So, I would go as far as to say the media loved to sensationalize—actually even the venture community—loved to sensationalize the screen scraping stripping of open-source communities that Amazon represented a number of years ago. The reality was their intent was pretty simple. They built an incredibly amazing IT stack, and they wanted to run whatever applications and software were important to their customers. And when you think about that, the majority of systems today, people want to run open-source because it removes friction, it removes cost, it enables them to go do cool new things, and be on the bleeding edge of technology.

And Amazon did their best to work with the top open-source projects in the world to make it available to their customers. Now, for the commercial vendors that are leaning into this space, that obviously does present itself as a threat, right? And we've seen that along a number of the cohorts of whether you want to call it single-vendor open-source or companies that have a heavy, vested interest in seeing the success of their enterprise stack match the success of the open-source stack. And that's, I think, where media, analysts, venture, all kind of jumped on the bandwagon of not really, kind of, painting that bigger picture for the future. I think today when I look at Amazon—and candidly, it'll be any of the hyperscalers; they all have a clone of our database—it's an entry point.
They're running just the raw open-source operational database capabilities that we have in our community edition and making that available to customers.

We believe there's a bigger value in going beyond just that database and introducing, you know, anything from the distributed zones to what we do around vector search to what we do around stream processing, and encryption and all of these advanced features and capabilities that enable our customers to scale rapidly on our platform. And the dependency on delivering that is with the hyperscalers, so that's where that coopetition comes in, and that becomes really important for us when we're casting our web to engage with some of the world's largest customers out there. But interestingly enough, we become a big drag of services for an AWS or any of the other hyperscalers out there, meaning that for every dollar that goes to a MongoDB, there's, you know, three, five, ten dollars that goes to these hyperscalers. And so, they're very active in working with us to ensure that, you know, we have fair and competing offers in the marketplace, that they're promoting us through their own marketplace as well as their own channels, and that we're working together to further the success of our customers.

Corey: When you take a look at the exciting things that are happening at the data layer—because you mentioned that we haven't really seen significant innovation in that space for a while—one of the things that I see happening is with the rise of Generative AI, which requires very special math that can only be handled by very special types of computers.
I'm seeing at least a temporary inversion in what has traditionally been thought of as data gravity, whereas it's easier to move compute close to the data, but in this case, since the compute only lives in the, um, sparkling us-east-1 regions of Virginia, otherwise, it's just generic, sparkling expensive computers, great, you have to effectively move the mountain to Mohammed, so to speak. So, in that context, what else is happening that is driving innovation in the data space right now?

Peder: Yeah, yeah. I love your analogy of, move the mountain of Mohammed because that's actually how we look at the opportunity in the whole Generative AI movement. There are a lot of tools and capabilities out there, whether we're looking at code generation tools, LLM modeling vendors, some of the other vector database companies that are out there, and they're all built on the premise of, bring your data to my tool. And I actually think that's a flawed strategy. I think that these are things that are going to be features in core application databases or operational databases, and it's going to be dependent on the reach and breadth of that database, and the integrations with all of these AI tools that will define the victor going forward.

And I think that's been a big core part of our platform. When we look at Atlas—111 availability zones across all three hyperscalers with a single, unified, you know, interface—we're actually able to have the customers keep their operational data where it's most important to them and then apply the tools of the hyperscalers or the partners where it makes the most sense without moving the data, right? So, you don't actually have to move the mountain to Mohammed.
We're literally building an experience where those that are running on MongoDB and have been running on MongoDB can gain advantage of these new tools and capabilities instantly, without having to change anything in their architectures or how they're building their applications.

Corey: There was a somewhat over-excited… I guess, over-focus in the space of vector databases because whatever those are—which involves math, and I am in no way, shape, or form smart enough to grasp the nuances thereof, but everyone assures me that it's necessary for Generative AI and machine learning and yadda, yadda, yadda. So, when in doubt, when I'm confronted by things I don't fully understand, I turn to people who do. And the almost universal consensus that I have picked up from people who track databases for a living—as opposed to my own role of inappropriately using everything in the world except databases as a database—is that vector is very much a feature, not a core database type.

Peder: Correct. The best way to think about it—I mean, databases in general, they're dealing with structured and unstructured data, and generally, especially when you're doing searches or relevance, you're limited to the fact that those things in the rows and the columns or in the documents is text, right? And the reality is, there's a whole host of information that can be found in metadata, in images, in sounds, in all of these other sources that were stored as individual files but unsearchable. Vector, vectorization, and vector embeddings actually enable you to take things far beyond the text and numbers that you traditionally were searching against and actually apply more, kind of, intelligence to it, or apply sounds or apply sme—you know, you can vectorize smells to some extent.
And what that does is it actually creates a more pleasing slash relevant experience for how you're actually building the engagements with your customers.

Now, I'll make it a little more simple because that was trying to define vectors, which as you know, is not the easiest thing. But imagine being able to vectorize—let's say I'm a car company—we're actually working with a car company on this—and you're able to store all of the audio files of cars that are showing certain diagnostic issues—the putters and the spurts and the pings and the pangs—and you can actually now isolate these sounds and apply them directly to the problem and resolution for the mechanics that are working on them. Using all of this stuff together, now you actually have a faster time to resolution. You don't want mechanics knowing the mechanics of vectors in that sense, right, so you build an application that abstracts all of that complexity. You don't require them to go through PDFs of data and find all of the options for fixing this stuff.

The relevance comes back and says, “Yes, we've seen that sound 20 times across this vehicle. Here's how you fix it.” Right? And that cuts a significant amount of time, cost, efficiency, and complexity for those auto mechanics. That is such a big push forward, I think, from a technology perspective, on what the true promise of some of these new capabilities are, and why I get excited about what we're doing with vector and how we're enabling our customers to, you know, kind of recreate experiences in a way that are more human, more relevant.

Corey: Now, I have to say that of course you're going to say nice things about your capabilities where vector is concerned. You would be failing in your job if you did not. So, I feel like I can safely discount every positive thing that you say about Mongo's positioning in the vector space and instead turn to, you know, third parties with no formalized relationship with you.
Yesterday, Retool's State of AI report came across my desk. I am a very happy Retool customer. They've been a periodic sponsor, from time-to-time, of my ridiculous nonsense, which is neither here nor there, but I want to disclaim the relationship.

And they had a Gartner Magic Quadrant equivalent that on one axis had Net Promoter Score—NPS, which is one of your people's kinds of things—and the other was popularity. And Mongo was so far up and to the right that it was almost hilarious compared to every other entrant in the space. That is a positioning that I do not believe it is possible to market your way into directly. This is something that people who are actually doing these things have to use the product, and it has to stand up. Mongo is clearly effective at doing this in a way that other entrants aren't. Why?

Peder: Yeah, that's a good question. I think a big part of that goes back to the earlier statement I made that vector databases or vector technology, it's a feature, it's not a separate thing, right? And when I think about all of the new entrants, they're creating a new model where now you have to move your data out of your operational database and into their tool to get an answer and then push back in. The complexity, the integrations, the capabilities, it just slows everything down, right?
And I think when you look at MongoDB's approach to take this developer data platform vision of getting all of the core tools that developers need to build compelling applications with from a data perspective, integrating it into one seamless experience, we're able to basically bring classic operational database capabilities, classic text search type capabilities, embed the vector search capabilities as well, it actually creates a richer platform and experience without all of that complexity that's associated with a bolt-on sidecar Gen AI tool or vector database.

Corey: I would say that that's one of those things that, again, can only really be credibly proven by what the market actually does, as opposed to, you know, lip-sticking the heck out of a pig and hoping that people don't dig too deeply into what you're saying. It's definitely something we're seeing adoption of.

Peder: Yeah, I mean, this kind of goes to some of the stuff, you know, you pointed out, the Retool thing. This is not something you can market your way into. This is something that, you know, users are going to dictate the winners in this space, the developers, they're going to dictate the winners in the space. And so, what do you have to do to win the hearts and minds of developers? You have to make the tech extremely approachable, it's got to be scalable to meet their needs, not a lot of friction involved in learning these new capabilities and applying it to all of the stuff that has come before. All of these things put together, really focusing on that developer experience, I mean, that goes to the core of the MongoDB ethos.

I mean, this is who we were when we started the company so long ago, and it's continued to drive the innovation that we do in the platform. And I think this is just yet again, another example of focusing on developer needs, making it super engaging and useful, removing the friction, and enabling them to just go create new things. That's what makes it so fun.
And so when, you know, as a marketer, and I get the Retool chart across my desk, we haven't been pitching them, we haven't been marketing to them, we haven't tried to influence this stuff, so knowing that this is a true, unbiased audience, actually is pretty cool to see. To your point, it was surprising how far up and to the right that we sat, given, you know, where we were in just—we launched this thing… six months ago? We launched it in June. The amount of customers that have signed up, are using it, and engaged with us on moving forward has been absolutely amazing.

Corey: I think that there has been so much that gets lost in the noise of marketing. My approach has always been to cut through so much of it—that I think AWS has always done very well with—is—almost at their detriment these days—but if you get on stage, you can say whatever you want about your company's product, and I will, naturally and lovingly, make fun of whatever it is that you say. But when you have a customer coming on stage and saying, “This is how we are using the thing that they have built to solve a very specific business problem that was causing us pain,” then I shut up, and I listen because it's very hard to wind up dismissing that without being an outright jerk about things. I think the failure mode of that is, taken too far, you lose the ability to tell your own story in a coherent way, and it becomes a crutch that becomes very hard to get rid of. But the proof is really in the pudding.

For me, like, the old jokes about—in the early teens—where MongoDB would periodically lose data as configured by default. Like, “MongoDB. It's Snapchat for databases.” Hilarious joke at the time, but it really has worn thin. That's like being angry about what Microsoft did in 2005 and 2006.
It's like, “Yeah, okay, you have a point, but it is also ancient history, and at some point you need to get with the modern era, get with the program.”

And I think that seeing the success and breadth of MongoDB that I do—you are in virtually every customer that I talk to, in some way, shape, or form—and seeing what it is that they're doing with you folks, it is clear that you are not a passing fad, that you are not going away anytime soon.

Peder: Right.

Corey: And even with building things in my spare time and following various tutorials of dubious credibility from various parts of the internet—as those things tend to go—MongoDB is very often a default go-to reference when someone needs a database for which a SQLite file won't do.

Peder: Right. It's fascinating to see the evolution of MongoDB, and today we're lucky to track 45,000-plus customers on our platform doing absolutely incredible things. But I think the biggest—to your point—the biggest proof is in the pudding when you get these customers to stand up on stage and talk about it. And even just recently, through our .local series, some of the customers that we've been highlighting are doing some amazing things using MongoDB in extremely business-critical situations.

My favorite was, I was out doing our .local in Hong Kong, where Cathay Pacific got up on stage, and they talked a little bit about their flight folder. Now, if you remember going through the airport, you always see the captains come through, and they had those two big boxes of paperwork before they got onto the plane. Not only was that killing the environment with all the trees that got cut down for it, it was cumbersome, complex, and added a lot of time and friction with regards to flight operations.
Now, take that from a single flight over all of the fleet that's happening across the world.

We were able to work with Cathay Pacific to digitize their entire flight folder, all of their documentation, removing the need for cutting down trees and minimizing their carbon footprint, but at the same time, actually delivering a solution where if it goes down, it grounds the entire fleet of the airline. So, imagine that. That's so business-critical, mission-critical, has to be there, reliable, resilient, available for the pilots, or it shuts down the business. Seeing that growth and that transformation while also seeing the environmental benefit for what they have achieved, to me, that makes me proud to work here.

Similarly, we have companies like Ford, another big brand-name company here in the States, where their entire connected car experience and how they're basically operationalizing the connection between the car and their home base, this is all being done using MongoDB as well. So, as they think of these new ideas, recognizing that things are going to be either out at the edges or at a level of scale that you can't just bring it back into classic rows and columns, that's actually where we're so well-suited to grow our footprint. And, you know, I remember back to when I was at Sun—Sun Microsystems. I don't know if anybody remembers that company. That was an old one.

But at one point, it was Jonathan that said, “Everything of value connects to the network.” Right? Those things that are connecting to the network also need applications, they need data, they need all of these services. And the further out they go, the more you need a database that basically scales to meet them where they are, versus trying to get them to come back to where your database happens to sit. And in order to do that, that's where you break the mold.

That's where—I mean, that kind of goes into the core ethos of why we built this company to begin with.
The original founders were not here to build a database; they were building a consumer app that needed to scale to the edges of the earth. They recognized that databases didn't solve for that, so they built MongoDB. That's actually thinking ahead. Everything connecting to the network, everything being distributed, everything basically scaling out to all the citizens of the planet fundamentally needs a new data layer, and that's where I think we've come in and succeeded exceptionally well.

Corey: I would agree. Another example I like to come up with, and it's fun that the one that leaps to the top of my mind is not one of the ones that you mentioned, but HSBC—the massive bank—very publicly a few years ago, wound up consolidating, I think it was 46 relational databases onto MongoDB. And the jokes at the time wrote themselves, but let's be serious for a second. Despite the jokes that we all love to tell, they are a bank, a massive bank, and they don't play fast-and-loose or slap-and-tickle with transactional integrity or their data stores for these things.

Because there's a definite belief across the banking sector—and I know this having worked in it myself for years—that if at some point, you have the ATMs spitting out the wrong account balances, people will begin rioting in the streets. I don't know if that's strictly accurate or hyperbole, but it's going to cause massive amounts of chaos if it happens. So, that is something that absolutely cannot happen. The fact that they're willing to engage with you folks and your technology and be public about it at that scale, that's really all you need to know from a, “Is this serious technology or clown shoes technology?”

Peder: [laugh]. Well, taking that comment, now let's exponentially increase that. You know, if I sit back, and I look at my customer base, financial services is actually one of our biggest verticals as a business. And you mentioned HSBC.
We had Wells Fargo on the stage last year at our world event.

Nine out of the top ten world's banks are using MongoDB in some of their applications, some at the scale of HSBC, some are still just getting started. And it all comes down to the fact that we have proven ourselves, we are aligned to mission-critical business environments. And I think when it comes down to banks, especially that transactional side, you know, building in the capabilities to be able to have high-frequency transactions in the banking world is a hard thing to go do, and we've been able to prove it with some of the largest banks on the planet.

Corey: I also want to give you credit—although it might be that I'm giving you credit for a slow release process; I hope not—but when I visit mongodb.com, it still talks up front that you are—and I want to quote here—oh, good lord, it changes every time I load the page—but it talks about, “Build faster, build smarter,” on this particular version of the load. It talks about the data platform. You have not effectively decided to pivot everything you say in public to tie directly into the Generative AI hype bubble that we are currently experiencing. You have a bunch of different use cases, and you're not suddenly describing what you do in Gen AI terms that make it impossible to understand just what the company-slash-product-slash-services actually do.

Peder: Right.

Corey: So, I want to congratulate you on that.

Peder: Appreciate that, right? Look, it comes down to the core basics. We are a developer data platform. We bring together all of the capabilities, tools, and functions that developers need when building apps as it pertains to their data functions or data layer, right? And that's why this integrated approach of taking our operational database and building in search, or stream processing, or vector search, all of the things that we're bringing to the platform enable developers to move faster.
And what that says is, we're great for all use cases out there, not just Gen AI use cases. We're great for all use cases where customers are building applications to change the way that they're engaging with their customers.

Corey: And what I like about this is that you're clearly integrating this stuff under the hood. You are talking to people who are building fascinating stuff, you're building things yourself, but you're not wrapping yourself in the mantle of "this is exactly what we do because it's trendy right now." And I appreciate that. It's still intelligible, and I wouldn't normally think I had to congratulate someone on, "Wow, you build marketing that a human being can extract meaning from. That's amazing." But in the closing days of 2023, it very much is.

Peder: Yep, yep. And it speaks a lot to the technology that we've built. You know, it reminds me a lot of the early days of cloud, where everything was kind of cloud-washed for a bit; we're seeing a little bit of that in the hype cycle we have right now. Sticking to our guns and making sure that we are building a technology platform that enables developers to move quickly, that removes the friction from the developer lifecycle as it pertains to the data layer—that's where the success is, right? We have to stay on top of all of the trends, we have to make sure that we're enabling Gen AI, we have to make sure that we're integrating with the Amazon Bedrocks and the CodeWhisperers of the world, right, to go push this stuff forward. But to the point we made earlier, those are capabilities and features of a platform where the higher-level order is to really empower our customers to develop innovative, disruptive, or market-leading technologies for how they engage with their customers.

Corey: Yeah.
And it's neat to be able to see that you are empowering companies to do that without feeling the need to basically claim their achievements as your own, which is an honest-to-God hard thing to do, especially as you become a platform company, because increasingly you are the plumbing that makes a lot of the flashy, interesting stuff possible. It's imperative—you can't have those things without the underlying infrastructure—but it's hard to talk about that infrastructure, too.

Peder: You know, it's funny—I'm sure all of my colleagues would hate me for saying this, but the wheel doesn't turn without the ball bearing. Somebody still has to build the ball bearing in order for that sucker to move, right? And that's the thing. This is the infrastructure; this is the heart of everything that businesses need to build applications. And another kind of snide comment I've made to some of my colleagues here is: think about every market-leading app. In fact, let's go to the biggest experiences you and I use on a daily basis. I'm pretty sure you're booking travel online, you're searching for stuff on Google, you're buying stuff through Amazon, you're renting a house through Airbnb, and you're listening to your music through Spotify. What are those? Those are databases with a search engine.

Corey: The world is full of CRUD applications. These are, effectively, simply pretty front-ends to a database. And as much as we'd like to pretend otherwise, that's very much the reality of it. And we want that to be the case. Different modes of interaction, different requirements around them, but yeah, that is what so much of the world is. And I think to ignore that is to honestly blind yourself to a bunch of very key realities here.

Peder: That kind of goes back to the original vision for when I came here. It's like, look, everything of value for us, everything that I engage with, is—to your point—a database with a great experience on top of it.
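The "pretty front-end to a database" idea Corey describes is the classic CRUD pattern: create, read, update, and delete operations over a data store. A minimal, hypothetical sketch (the `NoteStore` class and table name are invented for illustration, using Python's built-in sqlite3; neither speaker describes actual code):

```python
import sqlite3

class NoteStore:
    """The four CRUD operations over a single SQLite table."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)"
        )

    def create(self, body):
        # Insert a row and return its generated id.
        cur = self.db.execute("INSERT INTO notes (body) VALUES (?)", (body,))
        self.db.commit()
        return cur.lastrowid

    def read(self, note_id):
        # Fetch a row by id, or None if it does not exist.
        row = self.db.execute(
            "SELECT body FROM notes WHERE id = ?", (note_id,)
        ).fetchone()
        return row[0] if row else None

    def update(self, note_id, body):
        self.db.execute("UPDATE notes SET body = ? WHERE id = ?", (body, note_id))
        self.db.commit()

    def delete(self, note_id):
        self.db.execute("DELETE FROM notes WHERE id = ?", (note_id,))
        self.db.commit()
```

Everything a typical application layers on top (routing, templates, search, auth) is ultimately a wrapper around these four operations.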
Now, let's start to layer in this whole Gen AI push, right—what's going on there. We're talking about increased relevance in search, we're talking about new ways of thinking about sourcing information. We've even seen that with some of the latest ChatGPT stuff, where developers are using it to get code snippets and figure out how to solve things within their platform.

The era of the classic search engine is in the middle of a complete change, and the opportunity I see as this moves forward is that there is no incumbent. There isn't somebody who owns this space, so we're just at the beginning of what will probably be the next Googles, Airbnbs, and Ubers of the world for the next generation. And that's really exciting to see.

Corey: I'm right there with you. One of the interesting founding stories at Google is that they wound up calling typical storage vendors for what they needed, got basically "screw on out of here, kids" pricing, so they shrugged, and because they had no real way to get enterprise-quality hardware, they built a bunch of highly redundant systems on top of basically a bunch of decommissioned crap boxes from the university they were able to get for free or damn near it, and that led to a whole wave of innovation in technology. One of the glorious things about cloud that I think goes under-sold is that I can build a ridiculous application tonight for maybe, what, 27 cents of IT infrastructure spend, and if it doesn't work, I round up to a dollar; it'll probably get waived because it'll cost more to process the credit card transaction than to take my 27 cents. Conversely, if it works, I'm already building with quote-unquote "enterprise-grade" components. I don't need to do a massive uplift. I can keep going. And that is no small thing.

Peder: No, it's not. When you step back, every single one of those stories was about abstracting that complexity away from the end-user. In Google's case, they built their own systems.
You or I probably didn't know that they were screwing these things together and soldering them in the back room in the middle of the night. Similarly, when Amazon got started, that was about taking something that was only accessible to a few thousand and making it accessible to a few million, with a cost of 27 cents to build an app.

You removed the risk, you removed the friction, from enabling a developer to be able to build. That next wave—and this is why the things we're doing around Gen AI, our vector search capabilities, and literally how we're building our developer data platform matter—is about removing that friction and those limits, and enabling developers to just come in and, you know, effectively do what they do best, which is innovate, versus all of the other things. You know, in the Google world, it's no longer racking and stacking. In the cloud world, it's no longer managing and integrating all the systems. Well, in the data world, it's about making sure that all of those integrations are ready to go and at your fingertips, and you just focus on what you do well, which is creating those new experiences for customers.

Corey: So, we're recording this a little bit beforehand, but not by much. You are going to be at re:Invent this year—as am I—for eight nights—

Peder: Yes.

Corey: —because for me at least, it is crappy cloud Hanukkah, and I've got to deal with that. What have you got coming up? What do you plan to announce? Anything fun, exciting, or are you just there, basically, to see how many badges you can scan in one day?

Peder: Yeah [laugh]. Well, you know, it's shaping up to be quite an incredible week, there's no question. We'll see what it brings to town. As you know, re:Invent is a huge event for us. We do a lot within that ecosystem; a lot of the customers that are up on stage talking about the cool things they're doing with AWS are also MongoDB customers. So, we go all out.
I think you and I spoke before about our position there with SugarCane right on the show floor, and I think we've managed to secure you a Friends of Peder all-access pass to SugarCane. So, I look forward to seeing you there, Corey.

Corey: Proving my old thesis of: it really is who you know. And thank you for your generosity; please continue.

Peder: [laugh]. So, we will be there in full force. We have a number of different innovation talks, we have a bunch of community-related events working with developers, helping them understand how we play in the space. We're also doing a bunch of hands-on labs and design reviews that help customers basically build better, build faster, and build smarter—to your point earlier on some of the marketing you're getting off of our website. But we're also doing a number of announcements.

I think, first off, it was actually just last week that we announced our integration with Amazon CodeWhisperer. Their code-generation tool for developers has now been fully trained on MongoDB, so you can take advantage of some of these code-generation tools with MongoDB Atlas on AWS. Similarly, there's been a lot of noise around what Amazon is doing with Bedrock and the ability to automate certain tasks and things for developers. We are going to be announcing our integration with Agents for Amazon Bedrock being supported inside of MongoDB Atlas, so we're excited to see that kind of thing move forward. And then, ultimately, we're really there to celebrate our customers and connect them so that they can share what they're doing with peers and others in the space, to give them that inspiration you so eloquently talked about: don't market your stuff; let your customers tell what they're able to do with your stuff, and that'll set you up for success in the future.
As always, I'm going to basically ignore 90% of what both companies say and talk instead to customers: "What are you doing with it?" Because that's the only way to get truth out of it. And, frankly, I've been paying increasing amounts of attention to MongoDB over the past few years, just because of what people I trust who are actually good at databases have to say about you folks. Like my friends at RedMonk always say—and I've stolen the line from them—"You can buy my attention, but not my opinion."

Peder: A hundred percent.

Corey: You've earned the opinion that you have, at this point. Thank you for your sponsorship; it doesn't hurt, but again, you don't get to buy endorsements. I like what you're doing. Please keep going.

Peder: No, I appreciate that, Corey. You've always been supportive, and I definitely appreciate the opportunity to come on Screaming in the Cloud again. And I'll just push back on that Friends of Peder thing. There's, you know, also a little bit of an ulterior motive there. It's not just who you know, but it's [crosstalk 00:34:39]—

Corey: It's also validating that you have friends. I get it. I get it.

Peder: Oh yeah, I know, right? And I don't have many, but I have a few. But the interesting thing there is we're going to be able to connect you with a number of the customers doing some of these cool things on top of MongoDB Atlas.

Corey: I look forward to it. Thank you so much for your time. Peder Ulander, Chief Marketing Officer at MongoDB. I'm Cloud Economist Corey Quinn, and this has been a promoted guest episode of Screaming in the Cloud, brought to us by our friends at MongoDB.
If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry, insulting comment that I will ignore because you basically wrapped it so tightly in Generative AI messaging that I don't know what the hell your point is supposed to be.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business, and we get to the point. Visit duckbillgroup.com to get started.

The Worst Idea Of All Time

Miranda is worried about her son Brady because he is getting really good at making French fries, and the boys have reason to suspect he is rediscovering his true identity. In fact, his rediscovery of rats may even be the reason behind his romantic tryst with Lily (of Charlotte and Runkle fame), and before the end of the season we just might see the ascendance of a new Rat Queen. Che Diaz is getting back into stand-up, Herbert Wexley is showing his true colours, and while we still have time, Tim ranks the core characters in the ...And Just Like That universe from most to least likeable: the results WILL NOT SHOCK YOU. We also run an unnecessarily biological (yet still unlikely) analysis of a surfer dude's penis. And Tim curses (by saying Crud).

Intro theme: Brendan Lordan
Outro theme: Sterling

Support us via our Substack for access to premium content.