In this episode, Emmanuel and Antonio discuss various development topics: applets (yes, really), iOS apps built on Linux, the A2A protocol, accessibility, command-line AI coding assistants (you won't escape them)... but also methodological and architectural approaches such as hexagonal architecture, tech radars, the expert generalist, and much more.

Recorded on July 11, 2025
Download the episode: LesCastCodeurs-Episode-328.mp3, or watch it on YouTube.

News

Languages

Java applets are gone for good... well, almost: https://openjdk.org/jeps/504
- Web browsers no longer support applets.
- The Applet API and the appletviewer tool were deprecated in JDK 9 (2017).
- The appletviewer tool was removed in JDK 11 (2018). Since then, it has been impossible to run applets with the JDK.
- The Applet API was marked for removal in JDK 17 (2021).
- The Security Manager, essential for running applets securely, was permanently disabled in JDK 24 (2025).

Libraries

Quarkus 3.24, with the notion of extensions that can provide capabilities to assistants: https://quarkus.io/blog/quarkus-3-24-released/
- Assistants, typically AI-based, get access to capabilities exposed by extensions.
- For example, generating a client from an OpenAPI document.
- Or offering access to the database in dev mode via its schema.

Hibernate 7 integration in Quarkus: https://quarkus.io/blog/hibernate7-on-quarkus/
- Jakarta Data API, with the new restriction support.
- Injection of the SchemaManager.

Micronaut 4.9 released: https://micronaut.io/2025/06/30/micronaut-framework-4-9-0-released/
- Core: update to Netty 4.2.2 (beware, this may affect performance).
- New experimental "Event Loop Carrier" mode to run virtual threads on the Netty event loop.
- New @ClassImport annotation to process already-compiled classes.
- Arrival of @Mixin (Java only) to modify Micronaut annotation metadata without altering the original classes.
- HTTP/3: dependency change for the experimental support.
- Graceful Shutdown: new API for shutting applications down gracefully.
- Cache Control: fluent API to easily build the HTTP Cache-Control header.
- KSP 2: support for KSP 2 (starting with 2.0.2), tested with Kotlin 2.
- Jakarta Data: implementation of the Jakarta Data 1.0 specification.
- gRPC: JSON support for sending serialized messages via an HTTP POST.
- ProjectGen: new experimental module to generate JVM projects (Gradle or Maven) through an API.

A great article on experimenting with reactive event loops as virtual thread carriers: https://micronaut.io/2025/06/30/transitioning-to-virtual-threads-using-the-micronaut-loom-carrier/
- Unfortunately this required hacking the JDK.
- It is a Micronaut article, but the work was a collaboration with the Red Hat OpenJDK, Red Hat performance, Quarkus, and Vert.x teams.
- A good read for the curious.

Ubuntu offers a container-building tool, notably for Spring: https://canonical.com/blog/spring-boot-containers-made-easy
- Creates OCI images for Spring Boot applications based on Ubuntu base images.
- Uses jlink to reduce image size, of course.
- Not sure what the big advantage is over other, more portable solutions.
- Canonical is, by the way, entering the OpenJDK build arena.

The A2A Java SDK, contributed by Red Hat, has been released: https://quarkus.io/blog/a2a-project-launches-java-sdk/
- A2A is a protocol initiated by Google and donated to the Linux Foundation.
- It lets agents describe themselves and interact with one another.
- Agent cards, skills, tasks, context.
- A2A complements MCP.
- Red Hat implemented the Java SDK with guidance from the Google teams.
- With a few annotations and classes you get an agent card, an A2A client, and a server exchanging messages over the A2A protocol.

How to configure Mockito without warnings after Java 21: https://rieckpil.de/how-to-configure-mockito-agent-for-java-21-without-warning/
- Dynamically loaded agents are discouraged and will soon be forbidden.
- One of their uses is Mockito, via Byte Buddy.
- The advantage was that the configuration was transparent. But security obliges, that era is over.
- So the article describes how to configure Maven and Gradle to attach the agent when the tests start.
- It also covers how to set this up in IntelliJ IDEA. Less simple, unfortunately.

Web

"Selfish" reasons to make UIs more accessible: https://nolanlawson.com/2025/06/16/selfish-reasons-for-building-accessible-uis/
- Selfish reasons: personal benefits for developers who build accessible user interfaces (UIs), beyond the moral arguments.
- Easier debugging: an accessible interface, with a clear semantic structure, is easier to debug than messy markup ("div soup").
- Standardized names: accessibility provides a standard vocabulary (for example, the WAI-ARIA guidelines) for naming UI components, which helps with code clarity and structure.
- Simpler testing: it is easier to write automated tests for accessible UI elements, because they can be targeted more reliably and semantically.

After 20 years of stagnation, the PNG image format specification is finally evolving! https://www.programmax.net/articles/png-is-back/
- Goal: keep the format relevant and competitive.
- Endorsement: backed by institutions such as the US Library of Congress.
- Key new features: HDR (High Dynamic Range) support for a wider color gamut; official recognition of animated PNGs (APNG); Exif metadata support (copyright, geolocation, etc.).
- Current support: already integrated in Chrome, Safari, Firefox, iOS, macOS, and Photoshop.
- Future: the next edition will focus on interoperability between HDR and SDR.
- The edition after that: compression improvements.

With the open source project xtool, you can now build iOS applications on Linux or Windows, without necessarily needing a Mac: https://xtool.sh/tutorials/xtool/
A well-written tutorial explains how:
- Create a new project with the xtool new command.
- Generate a Swift package with key files such as Package.swift and xtool.yml.
- Build and run the app on an iOS device with xtool dev.
- Connect the device over USB, handle pairing and Developer Mode.
- xtool automatically manages certificates, provisioning profiles, and app signing.
- Modify the UI code (e.g. ContentView.swift).
- Quickly rebuild and reinstall the updated app with xtool dev.
- On the IDE side, xtool is based on VS Code.

Data and Artificial Intelligence

New edition of the worldwide best seller "Understanding LangChain4j": https://www.linkedin.com/posts/agoncal_langchain4j-java-ai-activity-7342825482830200833-rtw8/
- APIs updated (from LangChain4j 0.35 to 1.1.0).
- New chapters on MCP, Easy RAG, and JSON responses.
- New models (GitHub Models, DeepSeek, Foundry Local).
- Updates to existing models (GPT-4.1, Claude 3.7...).

Google donates A2A to the Linux Foundation: https://developers.googleblog.com/en/google-cloud-donates-a2a-to-linux-foundation/
- Announcement of the Agent2Agent (A2A) project: at the Open Source Summit North America, the Linux Foundation announced the creation of the Agent2Agent project, in partnership with Google, AWS, Microsoft, Cisco, Salesforce, SAP, and ServiceNow.
- Goal of the A2A protocol: establish an open standard that lets artificial intelligence (AI) agents communicate, collaborate, and coordinate complex tasks with one another, regardless of vendor.
- Transfer from Google to the open source community: Google transferred the A2A protocol specification, the associated SDKs, and the development tooling to the Linux Foundation to guarantee neutral, community-driven governance.
- Industry support: more than 100 companies already back the protocol; AWS and Cisco are the latest to endorse it. Each partner company stressed the importance of interoperability and open collaboration for the future of AI.
- Goals of the A2A foundation: establish a universal standard for AI agent interoperability; foster a global ecosystem of developers and innovators; guarantee neutral, open governance; accelerate secure, collaborative innovation.
- Talk about the spec itself; we will surely have the chance to come back to it.

Gemini CLI: https://blog.google/technology/developers/introducing-gemini-cli-open-source-ai-agent/
- AI agent in the terminal: Gemini CLI lets you use the Gemini AI directly from the terminal.
- Free with a Google account: access to Gemini 2.5 Pro with generous limits.
- Powerful features: generates code, runs commands, automates tasks.
- Open source: customizable and extensible by the community.
- Complements Code Assist: also works with IDEs such as VS Code.

Instead of blocking AIs from your sites, you may be able to guide them with llms.txt files: https://llmstxt.org/
Examples from the Angular project:
- llms.txt, a simple index with links: https://angular.dev/llms.txt
- llms-full.txt, a much more detailed version: https://angular.dev/llms-full.txt

Tooling

Git commits are immutable, but did you know you can add or update "notes" on commits? https://tylercipriani.com/blog/2022/11/19/git-notes-gits-coolest-most-unloved-feature/
- Little-known feature: git notes is a powerful but rarely used Git feature.
- Attaching metadata: lets you attach information to existing commits without changing their hash.
- Use cases: ideal for adding data coming from automated systems (builds, tickets, etc.).
- Distributed code review: tools such as git-appraise were built on top of git notes to enable fully distributed code review, independent of the forges (GitHub, GitLab).
- Low adoption: its clunky interface and the lack of support from forge platforms have limited its adoption (GitHub does not even display notes any more).
- Forge independence: git notes offers a path toward greater independence from centralized platforms, by distributing the project's history along with the code itself.

An overview of the Spring Boot debugger in IntelliJ IDEA Ultimate: https://blog.jetbrains.com/idea/2025/06/demystifying-spring-boot-with-spring-debugger/
- Presents this tool, which provides Spring-specific context such as non-activated beans, mocked beans, config values, and transaction state.
- It displays all Spring beans directly in the project view, with non-instantiated beans grayed out and mocked beans marked in orange for tests.
- It solves property resolution by showing the effective value in real time in properties and YAML files, along with the exact source of overridden values.
- It shows visual indicators for methods running inside active transactions, with full transaction details and a visual hierarchy for nested transactions.
- It automatically detects all active DataSource connections and integrates them with IntelliJ IDEA's Database tool window for inspection.
- It offers auto-completion and invocation of all loaded beans in the expression evaluator, working like a REPL for the Spring context.
- It works without an extra runtime agent, using non-suspending breakpoints in the Spring Boot libraries to analyze the data locally.

A community-maintained list of AI coding assistants, started by Lize Raes: https://aitoolcomparator.com/
- A comparison table showing which features are supported by each tool.

Architecture

An article on hexagonal architecture in Java: https://foojay.io/today/clean-and-modular-java-a-hexagonal-architecture-approach/
- An introductory article, with an example, on hexagonal architecture split across the domain, the application, and the infrastructure.
- The domain has no dependencies.
- The application layer is specific to the application, but has no technical dependencies; the article explains the flow.
- The infrastructure carries the dependencies on your frameworks: Spring, Quarkus, Micronaut, Kafka, etc.
- I am naturally not a fan of hexagonal architecture, given the volume of code versus the gain, especially in microservices, but it is always interesting to challenge yourself and weigh cost against benefit.

Keep an eye on technologies with tech radars: https://www.sfeir.dev/cloud/tech-radar-gardez-un-oeil-sur-le-paysage-technologique/
- A tech radar is crucial for continuous technology watch and informed decision-making.
- It categorizes technologies as Adopt, Trial, Assess, or Hold, according to their maturity and relevance.
- It is recommended to create your own tech radar, adapted to your specific needs, drawing inspiration from the public radars.
- Use discovery tools (AlternativeTo), trend tools (Google Trends), end-of-life tracking (End-of-life.date), and learning resources (roadmap.sh).
- Stay informed through blogs, podcasts, newsletters (TLDR), and social networks/communities (X, Slack).
- The goal is to stay competitive and make strategic technology choices.
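Stepping back to the hexagonal-architecture article above: the dependency rule it describes (domain and application layers free of framework imports, infrastructure implementing the domain's ports) can be sketched in plain Java. All names here are invented for illustration; they are not taken from the article.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// --- Domain: no framework or infrastructure dependencies ---
record Order(String id, long amountCents) {}

// A "port" owned by the domain; the infrastructure will implement it.
interface OrderRepository {
    Optional<Order> findById(String id);
    void save(Order order);
}

// --- Application: use-case logic, depends only on the port ---
class CheckoutService {
    private final OrderRepository orders;
    CheckoutService(OrderRepository orders) { this.orders = orders; }

    long totalFor(String id) {
        return orders.findById(id).map(Order::amountCents).orElse(0L);
    }
}

// --- Infrastructure: in a real app this adapter would use Spring/JPA/Kafka... ---
class InMemoryOrderRepository implements OrderRepository {
    private final Map<String, Order> store = new HashMap<>();
    public Optional<Order> findById(String id) { return Optional.ofNullable(store.get(id)); }
    public void save(Order order) { store.put(order.id(), order); }
}

public class HexagonalSketch {
    public static void main(String[] args) {
        OrderRepository repo = new InMemoryOrderRepository();
        repo.save(new Order("o-1", 4200));
        CheckoutService checkout = new CheckoutService(repo);
        System.out.println(checkout.totalFor("o-1")); // prints 4200
    }
}
```

The point of the pattern is that swapping InMemoryOrderRepository for a JPA- or Kafka-backed adapter changes nothing in the domain or application layers, which is also where the extra code volume mentioned above comes from.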
Be careful, though, not to underestimate a tech radar's maintenance cost.

Methodologies

The concept of the expert generalist: https://martinfowler.com/articles/expert-generalist.html
- The industry pushes toward narrow specialization, but the most effective colleagues excel in several domains at once.
- An experienced Python developer can quickly become productive on a Java team thanks to shared fundamental concepts.
- Real expertise has two facets: depth in one domain, and the ability to learn quickly.
- Expert generalists build durable mastery at the level of fundamental principles rather than specific tools.
- Curiosity is essential: they explore new technologies and make sure they understand the answers instead of copy-pasting code.
- Collaboration is vital, because they know they cannot master everything, and they work effectively with specialists.
- Humility drives them to first understand why things work a certain way before challenging them.
- Customer focus channels their curiosity toward what actually helps users excel at their jobs.
- The industry should treat "Expert Generalist" as a first-class skill to name, assess, and train.
- It reminds me of the technical staff role.

An article on business metrics and their value: https://blog.ippon.fr/2025/07/02/monitoring-metier-comment-va-vraiment-ton-service-2/
- A reminder of the value of business monitoring.
- Traditional technical monitoring (CPU, servers, APIs) does not guarantee that the service works correctly for the end user.
- Business monitoring complements technical monitoring by focusing on the users' actual experience rather than on isolated components.
- It watches concrete critical journeys, such as "can a customer complete their order?", instead of abstract indicators.
- Business metrics are directly actionable: success rates, average delays, and error volumes help prioritize actions.
- It is a strategic steering tool that improves reactivity, prioritization, and the dialogue between technical and business teams.
- Setting it up takes five steps: a reliable technical dashboard, identifying the critical journeys, translating them into indicators, centralizing them, and tracking them over time.
- A Definition of Done should formalize objective criteria before instrumenting any business journey.
- Measurable indicators include successful/failed checkpoints, the time between actions, and compliance with business rules.
- Dashboards should be part of the daily rituals, with an understandable real-time alerting system.
- The setup must evolve continuously with product changes, questioning every incident to improve detection.
- The hard part is indeed business variability, for example few orders at night; this is part of the SRE toolbox.

Security

Still looking for the S for Security in MCP: https://www.darkreading.com/cloud-security/hundreds-mcp-servers-ai-models-abuse-rce
- An analysis of open, publicly accessible MCP servers.
- Many do no sanity checking of their parameters.
- If you use them in your genAI calls, you expose yourself.
- They are not fundamentally bad, but there is no security standardization yet.
- For local use, prefer stdio, or restrict SSE to 127.0.0.1.

Law, society and organization

Nicolas Martignole, the same person who created the Cast Codeurs logo, wonders about the possible paths for developers facing the impact of AI on our craft: https://touilleur-express.fr/2025/06/23/ni-manager-ni-contributeur-individuel/
- Evolution of developer careers: AI is transforming the traditional paths (manager or technical expert).
- AI Orchestra Conductor: a former manager who drives AIs, defines the architectures, and validates the generated code.
- Augmented Craftsman: a developer using AI as a tool to code faster and solve complex problems.
- Code Philosopher: a new role centered on the "why" of the code, the conceptualization of systems, and AI ethics.
- Validation cognitive load: a new mental burden created by the need to verify the AIs' work.
- Reflecting on impact: the article invites you to choose your impact: orchestrate, create, or guide.

Training AIs on copyrighted books is acceptable (fair use), but storing them is not: https://www.reuters.com/legal/litigation/anthropic-wins-key-ruling-ai-authors-copyright-lawsuit-2025-06-24/
- A win for Anthropic (until the next trial): the company prevailed in a closely watched lawsuit about training its AI, Claude, on copyrighted works.
- "Fair use" prevails: the judge found that using the books to train the AI qualified as fair use, because it transforms the content rather than simply reproducing it.
- An important nuance: however, storing these works in a "central library" without authorization was ruled illegal, which underlines the complexity of data management for AI models.

Luc Julia's hearing before the French Senate: https://videos.senat.fr/video.5486945_685259f55eac4.ia–audition-de-luc-julia-concepteur-de-siri
- Love him or not, here is Luc Julia and his vision of AI. It is an even longer version of, and on the same theme as, his keynote at Devoxx France 2025 (https://www.youtube.com/watch?v=JdxjGZBtp_k).
- Nature and limits of AI: Luc Julia insisted that artificial intelligence is an "evolution" rather than a "revolution". He reminded the audience that it rests on mathematics and is not "magic".
- He also warned about the unreliability of information produced by generative AIs such as ChatGPT, stressing that "they cannot be trusted" because they can be wrong and their relevance degrades over time.
- AI regulation: he argued for "intelligent and informed" regulation, applied a posteriori so as not to stifle innovation. In his view, regulation should be based on facts, not on an a priori risk analysis.
- France's position: Luc Julia stated that France has researchers of the highest caliber and is among the world's best in AI. He did, however, raise the problem of funding research and innovation in France.
- AI and society: the hearing covered AI's impact on privacy, work, and education. Luc Julia stressed the importance of developing critical thinking, especially among young people, to learn to verify AI-generated information.
- Concrete and future applications: the case of the autonomous car was discussed, with Luc Julia explaining the different levels of autonomy and the remaining challenges. He also asserted that artificial general intelligence (AGI), an AI that would surpass humans in every domain, is "impossible" with current technologies.

Beginner's corner

Weak references and finalize: https://dzone.com/articles/advanced-java-garbage-collection-concepts
- A useful reminder of the pitfalls of the finalize method, which may never be invoked.
- The risk of bugs if finalize never finishes.
- finalize makes the garbage collector's job much more complex and inefficient.
- Weak references are useful, but when they are released is not controllable. So do not overuse them.
- There are also soft and phantom references, but their usage is quite subtle and complex, depending on the GC.
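A minimal sketch of the weak-reference behaviour discussed above. Note the hedge in the code: collection timing is GC-dependent, System.gc() is only a hint, so the post-GC state is printed rather than guaranteed.

```java
import java.lang.ref.WeakReference;

public class WeakRefDemo {
    public static void main(String[] args) {
        Object payload = new Object();
        WeakReference<Object> ref = new WeakReference<>(payload);

        // While a strong reference exists, the referent is reachable.
        System.out.println("before: " + (ref.get() != null));

        payload = null;   // drop the only strong reference
        System.gc();      // a hint to the JVM, not a guarantee

        // After a GC cycle the referent is usually, but not provably, collected.
        System.out.println("after gc, referent cleared: " + (ref.get() == null));
    }
}
```

This is exactly the "release is not controllable" point: you can observe that the referent is still there while strongly reachable, but you cannot dictate when the GC clears the weak reference afterwards.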
- The serial collector processes weak references before soft ones; the parallel collector does not.
- With G1 it depends on the region.
- With ZGC it depends, because the processing is asynchronous.

Conferences

The list of conferences, taken from the Developers Conferences Agenda/List by Aurélie Vache and contributors:
- July 14-19, 2025: DebConf25 - Brest (France)
- September 5, 2025: JUG Summer Camp 2025 - La Rochelle (France)
- September 12, 2025: Agile Pays Basque 2025 - Bidart (France)
- September 18-19, 2025: API Platform Conference - Lille (France) & Online
- September 22-24, 2025: Kernel Recipes - Paris (France)
- September 23, 2025: OWASP AppSec France 2025 - Paris (France)
- September 25-26, 2025: Paris Web 2025 - Paris (France)
- October 2, 2025: Nantes Craft - Nantes (France)
- October 2-3, 2025: Volcamp - Clermont-Ferrand (France)
- October 3, 2025: DevFest Perros-Guirec 2025 - Perros-Guirec (France)
- October 6-7, 2025: Swift Connection 2025 - Paris (France)
- October 6-10, 2025: Devoxx Belgium - Antwerp (Belgium)
- October 7, 2025: BSides Mulhouse - Mulhouse (France)
- October 9, 2025: DevCon #25: quantum computing - Paris (France)
- October 9-10, 2025: Forum PHP 2025 - Marne-la-Vallée (France)
- October 9-10, 2025: EuroRust 2025 - Paris (France)
- October 16, 2025: PlatformCon25 Live Day Paris - Paris (France)
- October 16, 2025: Power 365 - 2025 - Lille (France)
- October 16-17, 2025: DevFest Nantes - Nantes (France)
- October 17, 2025: Sylius Con 2025 - Lyon (France)
- October 17, 2025: ScalaIO 2025 - Paris (France)
- October 20, 2025: Codeurs en Seine - Rouen (France)
- October 23, 2025: Cloud Nord - Lille (France)
- October 30-31, 2025: Agile Tour Bordeaux 2025 - Bordeaux (France)
- October 30-31, 2025: Agile Tour Nantais 2025 - Nantes (France)
- October 30 - November 2, 2025: PyConFR 2025 - Lyon (France)
- November 4-7, 2025: NewCrafts 2025 - Paris (France)
- November 5-6, 2025: Tech Show Paris - Paris (France)
- November 6, 2025: dotAI 2025 - Paris (France)
- November 6, 2025: Agile Tour Aix-Marseille 2025 - Gardanne (France)
- November 7, 2025: BDX I/O - Bordeaux (France)
- November 12-14, 2025: Devoxx Morocco - Marrakech (Morocco)
- November 13, 2025: DevFest Toulouse - Toulouse (France)
- November 15-16, 2025: Capitole du Libre - Toulouse (France)
- November 19, 2025: SREday Paris 2025 Q4 - Paris (France)
- November 20, 2025: OVHcloud Summit - Paris (France)
- November 21, 2025: DevFest Paris 2025 - Paris (France)
- November 27, 2025: DevFest Strasbourg 2025 - Strasbourg (France)
- November 28, 2025: DevFest Lyon - Lyon (France)
- December 1-2, 2025: Tech Rocks Summit 2025 - Paris (France)
- December 5, 2025: DevFest Dijon 2025 - Dijon (France)
- December 9-11, 2025: APIdays Paris - Paris (France)
- December 9-11, 2025: Green IO Paris - Paris (France)
- December 10-11, 2025: Devops REX - Paris (France)
- December 10-11, 2025: Open Source Experience - Paris (France)
- January 28-31, 2026: SnowCamp 2026 - Grenoble (France)
- February 2-6, 2026: Web Days Convention - Aix-en-Provence (France)
- February 3, 2026: Cloud Native Days France 2026 - Paris (France)
- February 12-13, 2026: Touraine Tech #26 - Tours (France)
- April 22-24, 2026: Devoxx France 2026 - Paris (France)
- April 23-25, 2026: Devoxx Greece - Athens (Greece)
- June 17, 2026: Devoxx Poland - Krakow (Poland)

Contact us

To react to this episode, come discuss on the Google group https://groups.google.com/group/lescastcodeurs
Contact us via X/Twitter https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com
Record a crowdcast or ask a crowdquestion
Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs
All the episodes and all the info at https://lescastcodeurs.com/
On this episode, Pete and Julie welcome long-time HR technology industry executive and Partner at Acadian Ventures, Thomas Otter, to share his insights on where 'hype meets truth' in the state and future of HR and payroll technology! Thomas shares his path from studying labor law and industrial relations to a career in HR technology, and now to fueling the future of HR tech as an investor. He provides insight into the Acadian Ventures investment thesis and lens, as well as why the firm is betting big on what he calls the "infrastructure era" of HR tech. He explains why AI is prompting the industry to reassess long-held assumptions, challenges the hype that often surrounds so-called 'game-changing' innovations, and describes how unstructured data and conversational UIs are transforming our interactions with systems. Thomas makes a compelling case for why HR leaders must understand their business at its core - starting by knowing their organization's "gold price." Plus, advice on how HR buyers can cut through the noise and focus on which solutions actually solve problems and drive business outcomes!

Connect with Thomas:
Acadian Ventures: https://www.acadianventures.com/
LI: https://www.linkedin.com/in/thomasotter/
SubStack: https://thomasotter.substack.com/
"Wedding cakes vs. Cupcakes" blog: https://www.unleash.ai/hr-technology/thomas-otter-on-the-wedding-cake-v-cupcake-of-hr-tech/

Connect with the show:
LinkedIn: http://linkedin.com/company/hr-payroll-2-0
X: @HRPayroll2_0 @PeteTiliakos @JulieFer_HR
BlueSky: @hrpayroll2o.bsky.social
YouTube: https://www.youtube.com/@HRPAYROLL2_0
The 2025 generative AI image market is a trade-off between aesthetic quality, instruction-following, and user control. This episode analyzes the key platforms, comparing Midjourney's artistic output against the superior text generation and prompt adherence of GPT-4o and Imagen 4, the commercial safety of Adobe Firefly, and the total customization of Stable Diffusion.

Links
Notes and resources at ocdevel.com/mlg/mla-25
Try a walking desk - stay healthy & sharp while you learn & code
Build the future of multi-agent software with AGNTCY.

The State of the Market
The market is split by three core philosophies:
The "Artist" (Midjourney): Prioritizes aesthetic excellence and cinematic output, sacrificing precise user control and instruction following.
The "Collaborator" (GPT-4o, Imagen 4): Extensions of LLMs that excel at conversational co-creation, complex instruction following, and integration into productivity workflows.
The "Sovereign Toolkit" (Stable Diffusion): An open-source engine offering users unparalleled control, customization, and privacy in exchange for technical engagement.

Table 1: 2025 Generative AI Image Tool At-a-Glance Comparison

Tool               | Parent Company   | Access Method(s)             | Pricing      | Core Strength                                  | Best For
Midjourney v7      | Midjourney, Inc. | Web App, Discord             | Subscription | Artistic Aesthetics & Photorealism             | Fine Art, Concept Design, Stylized Visuals
GPT-4o             | OpenAI           | ChatGPT, API                 | Freemium/Sub | Conversational Control & Instruction Following | Marketing Materials, UI/UX Mockups, Logos
Google Imagen 4    | Google           | Gemini, Workspace, Vertex AI | Freemium/Sub | Ecosystem Integration & Speed                  | Business Presentations, Educational Content
Stable Diffusion 3 | Stability AI     | Local Install, Web UIs, API  | Open Source  | Ultimate Customization & Control               | Developers, Power Users, Bespoke Workflows
Adobe Firefly      | Adobe            | Creative Cloud Apps, Web App | Subscription | Commercial Safety & Workflow Integration       | Professional Designers, Agencies, Enterprise

Core Platforms
Midjourney v7: Premium choice for artistic quality.
Features: Web UI with Draft Mode, user personalization, emerging video/3D. Weaknesses: Poor text generation, poor prompt adherence, public images on cheap plans, no API (automation is banned).
OpenAI GPT-4o: An intelligent co-creator for controlled generation. Features: Conversational refinement, superior text rendering, understands uploaded image context. Weaknesses: Slower than competitors, generates one image at a time, strict content filters.
Google Imagen 4: Pragmatic tool focused on speed and ecosystem integration. Features: High-quality photorealism, fast generation, strong text rendering, multilingual. Weaknesses: Less artistic flair; value depends on investment in the Google ecosystem.
Stable Diffusion 3: Open-source engine for maximum user control. Features: MMDiT architecture improves prompt/text handling, scalable models, vast ecosystem (LoRAs/ControlNet). Weaknesses: Steep learning curve, quality is user-dependent.
Adobe Firefly: Focused on commercial safety and professional workflow integration. Features: Trained on Adobe Stock for legal indemnity, Generative Fill/Expand tools. Weaknesses: Creative range limited by training data, requires an Adobe subscription/credits.

Tools and Concepts
In-painting: Modifying a masked area inside an image.
Out-painting: Extending an image beyond its original borders.
LoRA (Low-Rank Adaptation): A small file that applies a fine-tuned style, character, or concept to a base model.
ControlNet: Uses a reference image (e.g., pose, sketch) to enforce the composition, structure, or pose of the output.
A1111 vs. ComfyUI: The two main UIs for Stable Diffusion. A1111 is a beginner-friendly tabbed interface; ComfyUI is a node-based interface for complex, efficient, and automated workflows.

Workflows
"Best of Both Worlds": Generate aesthetic base images in Midjourney, then composite, edit, and add text with precision in Photoshop/Firefly.
Single-Ecosystem: Work entirely within Adobe Creative Cloud or Google Workspace for seamless integration, commercial safety (Adobe), and convenience (Google).
"Build Your Own Factory": Use ComfyUI to build automated, multi-step pipelines for consistent character generation, advanced upscaling, and video.

Decision Framework
Choose by goal:
Fine Art/Concept Art: Midjourney.
Logos/Ads with Text: GPT-4o, Google Imagen 4, or the specialist Ideogram.
Consistent Character in a Specific Pose: Stable Diffusion with a character LoRA and ControlNet (OpenPose).
Editing/Expanding an Existing Photo: Adobe Photoshop with Firefly.
Exclusion rules:
If you need legible text, exclude Midjourney.
If you need absolute privacy or zero cost (post-hardware), Stable Diffusion is the only option.
If you need guaranteed commercial legal safety, use Adobe Firefly.
If you need an API for a product, use OpenAI or Google; automating Midjourney is a bannable offense.
When does a framework reach maturity? For Joel Hawksley, lead maintainer of GitHub's ViewComponent framework, the answer comes with the upcoming fourth major release – a milestone that marks not just new features, but a transition to long-term support mode. Joel takes us behind the scenes of his seven-year journey at GitHub, where an idea sketched on an airplane has evolved into a critical part of how the platform renders its interfaces. With candid insights, he explains why ViewComponent is now considered feature-complete, and the philosophical challenges that remain unresolved around CSS and JavaScript integration within component-based Rails applications. The conversation delves into fascinating territory around GitHub's technical architecture decisions. Joel articulates the clear dividing line between interfaces better suited for React versus Rails, based on his experience building complex UIs like GitHub's merge box. "The ability for a new engineer to come in and modify that code in React is an order of magnitude better," he explains, revealing how pragmatism rather than dogma drives technology choices at scale. Perhaps most compelling are Joel's reflections on accessibility – work he led for years at GitHub. He reveals how accessibility requirements fundamentally reshape engineering approaches, forcing consistency and systematic thinking that might otherwise be overlooked. With 80% of top e-commerce sites facing accessibility lawsuits in recent years, these considerations are becoming unavoidable for developers at companies of all sizes. As a new member of GitHub's Ruby architecture team, Joel also shares fascinating perspectives on Ruby's evolution. He articulates the tension between adding safety guardrails to the language while preserving the flexibility and joy that attracted developers in the first place.
"Is it better to take something that is elegant and beautiful and flexible and put it in handcuffs, or is it better to go use a tool that was built with that in mind?" he asks, in a moment of philosophical clarity that will resonate with Ruby developers everywhere. Whether you're using ViewComponent, building accessible interfaces, or thinking about Ruby's future, this episode offers rare insights from someone who has spent years navigating these waters at one of the world's most prominent software platforms. Check out ViewComponent 4.0 RC1 today and share your feedback before the final release!
Send us some love.
Honeybadger: an application health monitoring tool built by developers for developers.
Judoscale: autoscaling that actually works. Take control of your cloud hosting.
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Support the show
Anthony Viljoen, the chief executive of Andrada Mining, talks markets, critical metals, and Namibia. Are better times ahead? A plethora of recent equity issues and other corporate activity suggests they just might be, and Andrada is getting ready by ramping up production at its Uis tin mine in Namibia and using the cashflow to bring new projects forward. All in all, the company has exposure to tin, tantalum, lithium, copper, tungsten, niobium and rubidium.
According to the Chamber of Mines, production at Andrada's Uis mine is continuing successfully. The Uis mine currently produces around 1,000 tonnes of tin concentrate per year, and the company is preparing to increase this to 1,600 tonnes per year. During the government's information session, Dr. Mupenzeni Ntelamo, chief regional officer of the Erongo Region, spoke about how they want to restore Uis to its former glory days.
An appeal is being made for any form of help to control or extinguish the huge fire between Khorixas and Sorris Sorris, in the direction of Uis. According to posts on social media, more than 3,000 hectares of lush veld have already been destroyed, and a strong wind is blowing. There are reportedly few firefighters and little water, and a small group of community members is fighting doggedly against the odds. A community member reported as much on social media:
In episode 246 of our SAP on Azure video podcast we talk about SAP Identity Management and Microsoft Entra ID. Since the announcement from SAP about SAP IDM, a lot of customers have already started their journey to move to Entra ID. We have had several customers talking about their experience and have also hosted several hands-on sessions. Martin Raepple is a key player in most of these discussions, and today we want to show in more detail how the journey is evolving. Today he shows us how to integrate Microsoft Entra with SAP Cloud Identity Services and leverage Microsoft Entra's advanced features to migrate and modernize existing SAP IDM workflows, using self-service UIs, integration with SAP data sources, and much more.
Find all the links mentioned here: https://www.saponazurepodcast.de/episode246
Reach out to us for any feedback / questions:
* Goran Condric: https://www.linkedin.com/in/gorancondric/
* Holger Bruchelt: https://www.linkedin.com/in/holger-bruchelt/
#Microsoft #SAP #Azure #SAPonAzure #SSO #IDM #EntraID #SAPIAS #Governance
My guests this episode to talk about Norway's antibullying ombudspersons – what they do, their value, and tips for bringing this important initiative to your community – are three of Norway's most experienced ombuds: Kaja Vintervold Asmyhr, Bodil Jenssen Houg, and Jon Halvdan Lenning. Our conversation was recorded in front of a live audience at the 2025 World Anti-Bullying Forum, in Stavanger, Norway. The 2025 World Anti-Bullying Forum was hosted by the Norwegian Centre for Learning Environment and Behavioural Research in Education at the University of Stavanger (UiS) in collaboration with the teacher education programs at UiS; the UNESCO Chair in Diversity, Inclusion and Education; the Knowledge Centre for Education; and the Partnership Against Bullying. This biannual conference – the largest and most important of its kind – is an initiative of the Swedish NGO, Friends, and its Presenting Sponsor is BRP, a global leader in powersports products who, through its Ride Out Intimidation program, takes a firm stand against bullying. More information about Kaja Vintervold Asmyhr, Bodil Jenssen Houg, and Jon Halvdan Lenning; the resources they mentioned; the antibullying ombudspersons initiative and its supporters; the World Anti-Bullying Forum; the conference organizers; and BRP's Ride Out Intimidation program is at talkingaboutkids.com.
Justin Searls joins Jerod in Apple's WWDC wake for hot takes about frosty UIs. We go (almost) point-by-point through the keynote, dissecting and reacting along the way. Concentricity!
Curtis Sanders is a children's book author and the new Financial Literacy Program Coordinator at the UIS Career Center. He spoke with Community Voices about his passion for teaching financial literacy and how he's helping students at UIS navigate budgeting, debt, and saving. Curtis also writes children's books based on his family. His latest, Spring City Stories: Oliver's Emotional Odyssey, helps kids understand and express their feelings through colorful illustrations and storytelling.
In the Pit with Cody Schneider | Marketing | Growth | Startups
In this episode, I chat with Jonathan, a rapidly rising expert on Twitter known for building and scaling AI-driven marketing automations using tools like n8n and custom API integrations. We explore the practical realities of "vibe marketing" automation beyond hype, revealing how real-world workflows are being constructed today and why true expertise in marketing is essential for effective automation. Listeners will gain insights into automating audience research, creative production, and ad performance analysis at scale, as well as actionable tips for getting started and leveraging AI tools to 10x their output.

Timestamps
(00:00) – Introduction to Jonathan and Marketing Automation: The host introduces Jonathan and sets the stage for a discussion on modern marketing automation tools and why they're currently so powerful.
(02:45) – Jonathan's Background and Automation Journey: Jonathan shares how he got into marketing automation, his paid ads background, and the evolution from manual work to automation.
(07:30) – Key Tools and Stack for Automation: The host and Jonathan discuss their tech stacks, highlighting n8n, railway.com, and custom front-end interfaces to streamline automation.
(12:15) – Top Marketing Automation Workflows: Jonathan outlines his most effective workflows: audience research, creative generation, and scaling marketing insights.
(18:00) – Audience Research Automation: Reddit Scraping and Analysis: A deep dive into using n8n to scrape Reddit, filter and analyze discussions, and extract actionable marketing insights and customer language.
(25:40) – Twitter Insights Automation: How Jonathan automates scraping Twitter for popular posts, identifying top-performing content and structuring it for ongoing content creation.
(31:10) – Creative Production Automation: Jonathan explains workflows for bulk generating ad variations using OpenAI's Image Gen API, including reference image analysis and prompt engineering.
(38:20) – Custom Front-End Interfaces for Workflows: The pair discuss integrating user-friendly front-end UIs (using Lovable or Bolt) with n8n backend automations for client and team use.
(44:50) – Automating Ad Performance Analysis: Jonathan describes a flow for pulling and analyzing Facebook Ads data, using sub-agents for performance analysis, deep research, and new ad creation.
(51:10) – Video Ad Automation and Future Trends: A look at how video ad automation is evolving and the current limitations and opportunities, including upcoming tools like Google Veo 3.
(56:40) – Speeding Up Workflow Creation with Perplexity and Claude: The host and Jonathan discuss using AI (Perplexity, Claude 4) to generate n8n workflow JSON, streamlining the automation development process.

Key Points
Expertise in Marketing is Essential for Automation: To automate marketing workflows effectively, you need a deep understanding of marketing processes themselves. Only then can you define, script, and automate successful campaigns[1].
Automating Audience Research Drives Results: Bulk scraping and analyzing platforms like Reddit and Twitter allow marketers to extract pain points, trigger events, and customer language at scale, informing ad copy and creative direction.
Creative Volume is Game-Changing: Automation tools like OpenAI's Image Gen API enable the generation of hundreds of ad variations, feeding algorithms for higher performance and lower costs.
Custom Front-Ends Improve Workflow Accessibility: Building user-friendly interfaces (using tools like Lovable or Bolt) for complex n8n automations makes them accessible to non-technical team members and clients.
AI Accelerates Workflow Development: Using AI tools like Perplexity and Claude to generate n8n workflow JSON reduces the time and technical skill required to build sophisticated automations.
Human-in-the-Loop Remains Critical: While automation handles the heavy lifting, human oversight is still needed for nuanced analysis, curation, and final ad selection.

Notable Quotes
Jonathan: "You have to be an expert at that thing to be able to go and actually build out these automations. But when you do that, you can automate 80% of the work that you previously were doing."
Jonathan: "I literally just tell Claude what I want to build, and then it maps it out for me. And then you kind of have a canvas that is like 60, 70, 80% there depending on the complexity."
Cody: "Your customers are your best advertisers, so taking their exact wording and phrases is for sure going to be an effective marketing strategy a lot of the time."

Actionable Takeaways for Founders, Marketers, and Podcasters
Start with a Core Marketing Process: Identify a repeatable marketing workflow you fully understand before attempting to automate it.
Invest in Audience Research Automation: Use tools to scrape and analyze discussions on Reddit, Twitter, and other platforms to extract customer pain points and language for your messaging[2].
Bulk Generate and Test Creatives: Leverage AI to produce hundreds of ad variations, enabling rapid testing and optimization of creative assets.
Automate Performance Analysis: Implement workflows to automatically pull and analyze campaign performance data, allowing you to focus on strategy and execution[8].
Simplify Tool Accessibility: Build custom UIs for your automation tools to make them accessible for your entire team, not just engineers.
Accelerate Workflow Development: Use AI-powered tools like Perplexity and Claude to generate automation scripts and reduce development time.

Brought to you by TalentFiber – Hire top offshore engineers with US experience at half the cost of US hires. - talentfiber.com

Where to find the Guest: https://x.com/vibemarketer_ https://linktr.ee/vibemarketer

Resources Mentioned
https://www.youtube.com/@nateherk
https://www.youtube.com/@Mark_Kashef
https://www.youtube.com/@AI-GPTWorkshop/videos
RapidAPI – Access a wide range of third-party APIs for quick integrations. - rapidapi.com
Apify – Scrape websites and extract data at scale. - apify.com
TwitterAPI.io – Free and affordable Twitter data scraping tool. - twitterapi.io
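The audience-research workflow described above boils down to: filter scraped posts by engagement, then tally recurring pain-point language. A minimal, dependency-free sketch of that filter-and-extract step (in the episode this logic lives inside an n8n workflow; the sample data, thresholds, and trigger words here are invented):

```python
# Sketch of the audience-research step: keep high-engagement posts, then
# count phrases that follow pain-point trigger words. Data is illustrative.
from collections import Counter
import re

def extract_pain_points(posts, min_score=50):
    """Tally short phrases following 'hate' / 'wish' / 'struggle with'."""
    pattern = re.compile(r"\b(?:hate|wish|struggle with)\s+(\w+(?:\s\w+)?)", re.I)
    phrases = Counter()
    for post in posts:
        if post["score"] < min_score:
            continue                      # drop low-signal threads
        for match in pattern.findall(post["text"]):
            phrases[match.lower()] += 1
    return phrases.most_common()

sample = [
    {"score": 120, "text": "I hate manual reporting every Friday"},
    {"score": 90,  "text": "I struggle with manual reporting too"},
    {"score": 5,   "text": "I hate everything"},   # filtered out by score
]
print(extract_pain_points(sample))
```

The same shape generalizes to the Twitter workflow: swap the fetch step, keep the filter-and-count core.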
In this episode of the StudioNerds Podcast, we dive into the uncomfortable truth about your plugin folder. Why are so many plugin companies still pushing 2015 tech in a 2025 world? Are we stuck using the same sounds with prettier UIs, or is real innovation finally on the way? Let's break down who's evolving, and who's coasting.
If you're a producer and want to get on Colossal, and sell more beats, register now at colossal.fm/producers - and it's free, no subscription costs!
https://colossal.fm
https://www.instagram.com/colossal_app
https://www.tiktok.com/@colossal.app
https://www.youtube.com/@colossal-app
Get Our Rosetta Plugins: https://cutt.ly/RwAEmuRF
Our Site: https://www.helpmedevvon.com
Please subscribe to our YouTube and rate our podcast, it helps us a lot!
https://linktr.ee/mystudionerds
Follow Us: https://www.instagram.com/helpmedevvon
Join this channel to get access to perks: https://www.youtube.com/channel/UCGyDjbX9F9ARt_8sgv5kGDg/join
Follow The Cast
Devvon Terrell: https://instagram.com/helpmedevvon
LJ: https://instagram.com/prodbyljean
Here is L.Jean's channel! https://youtube.com/@SweataWeatha
Courtney Taylor: https://instagram.com/officialcourtneytaylor
#podcast #mixing #mastering
In this discussion, Cristina sits down with Faria Anzum, Developer Experience Product Design Lead at Canva, to explore her approach to SaaS integrations, developer experience, as well as the impact of AI on building scalable partner ecosystems.

Faria's Role and Canva's Ecosystem Approach
Faria leads developer experience within Canva's ecosystem team, focusing on optimizing the end-to-end journey for developers – from discovery and documentation to app submission and launch. Her work spans both external developers and internal teams, ensuring feedback loops that drive continuous improvement in tooling and processes.

Building for Developers: Key UX Considerations
Faria highlights the importance of systems thinking in developer experience, noting that every design decision impacts multiple stages of the developer journey. Unique challenges include blending Canva's established design system with the distinct UIs of external apps, ensuring a seamless and intuitive experience for users. Developer tools must minimize cognitive load and context switching, so Canva strives for consistency across documentation, tooling, and theming to reduce friction for developers moving between environments.

Best Practices for Scalable Integrations
Canva empowers partners to build at scale by providing robust building blocks: app templates, design patterns, and a comprehensive Figma UI kit. This approach streamlines development and ensures consistency in user experience. The team collaborates closely with partners through workshops and design reviews, fostering high-quality integrations and reducing surprises during app submission. Smaller partners particularly benefit from these resources, as they can rely on Canva's design guidance without needing in-house expertise.

AI's Transformative Role in Developer Workflows
AI is lowering barriers to entry, democratizing software development much like Canva did for design. Tools like ChatGPT and Cursor enable even non-coders to build functional integrations using natural language prompts. Canva recently launched a Dev MCP server for Canva apps and integrations, leveraging documentation and starter templates to accelerate development. While Canva can't always distinguish how much of an app is AI-generated, the focus remains on functionality and quality, not the origin of the code.

Looking Ahead: Trust, Transparency, and the Future of AI in Integrations
Faria emphasizes the need for transparency in AI-driven apps – clearly communicating what AI does, how data is used, and allowing users to override AI suggestions to build trust. She predicts a future where AI agents handle more of the integration lifecycle, spinning up apps from natural language descriptions, while human developers refine and innovate at the edges. The conversation closes with a call to embrace these changes, as AI and automation free up developers to focus on more creative and high-impact work.

Key Takeaways
Canva's approach to developer experience centers on holistic design, strong building blocks, and close collaboration with partners. AI is fundamentally changing how integrations are built, making development more accessible and efficient. Trust, transparency, and adaptability will be crucial as AI-driven workflows become the norm in SaaS ecosystems.

For more insights on tech partnerships, integrations, and APIs, listeners are encouraged to visit Pandium's blog and resources. https://www.pandium.com/blog
With AccessKit, Matt Campbell and Arnold Loubriat took on the ambitious task of abstracting over the accessibility APIs of several target OSes to offer toolkit providers one unified way to make their UIs accessible across platforms. This interview was recorded live at RustWeek 2025 with your host Luuk van der Duim.

Contributing to Rustacean Station
Rustacean Station is a community project; get in touch with us if you'd like to suggest an idea for an episode or offer your services as a host or audio editor!
Twitter: @rustaceanfm
Discord: Rustacean Station
Github: @rustacean-station
Email: hello@rustacean-station.org

Timestamps & referenced resources
[@1:11] - Introducing Matt and Arnold.
[@1:45] - "What are you working on?"
[@2:00] - "What does it offer?"
[@2:28] - "Who would use AccessKit?"
[@2:51] - AccessKit bindings in languages besides Rust.
[@3:14] - Unifying accessibility protocols.
[@4:47] - AccessKit's API.
[@6:58] - AccessKit tree concepts.
[@8:47] - When Arnold got involved.
[@9:08] - 6000 lines of code to find Matt.
[@12:56] - Matt's history with Windows and Microsoft.
[@14:00] - How Quorum relates to AccessKit. Quorum
[@15:52] - Challenges Quorum UI development faced.
[@18:12] - Arnold and Matt on the necessity for back-end testing, Android, iOS - Linux.
[@21:45] - On Newton and its Chromium relation. Newton
[@27:55] - Newton on other compositors.
[@29:20] - Wayland protocol purists versus Gnome D-Bus frictions. "Next-generation accessibility architecture" document
[@31:03] - Where can people find Arnold? Arnold on GitHub as 'DataTriny', Arnold Loubriat on LinkedIn, DataTriny on Mastodon
[@31:39] - Where can people find Matt? Matt Campbell on GitHub, AccessKit website.
[@31:53] - Your host can be found on GitHub. Luuk van der Duim
Odilia screen reader
atspi crate

Credits
Intro Theme: Aerocity
Audio Editing: Luuk van der Duim
Samples used in interruption: "Vinyl - 45RPM - Start 2" by day-garwood (License: Attribution 3.0); Record Scratch #3 by musicvision31 (License: Creative Commons 0)
Hosting Infrastructure: Jon Gjengset
Show Notes: Luuk van der Duim
Hosts: Luuk van der Duim
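The core idea AccessKit is built on can be illustrated without the library: the toolkit describes its UI once as a platform-neutral tree, and per-platform adapters translate that tree into each OS accessibility API. A toy sketch of that shape (this is not AccessKit's actual API, and the adapter strings are invented):

```python
# Toy illustration of a unified accessibility abstraction: one neutral UI
# tree, multiple platform adapters. Names and output strings are invented.
from dataclasses import dataclass, field

@dataclass
class Node:
    role: str                      # e.g. "button", "window"
    name: str = ""
    children: list = field(default_factory=list)

class WindowsAdapter:
    def announce(self, node: Node) -> str:
        return f"UIA: {node.role} '{node.name}'"

class LinuxAdapter:
    def announce(self, node: Node) -> str:
        return f"AT-SPI: {node.role} '{node.name}'"

def expose(node: Node, adapter) -> list:
    """Walk the neutral tree, emitting platform-specific descriptions."""
    out = [adapter.announce(node)]
    for child in node.children:
        out.extend(expose(child, adapter))
    return out

ui = Node("window", "Settings", [Node("button", "OK")])
print(expose(ui, WindowsAdapter()))
print(expose(ui, LinuxAdapter()))
```

The point of the design is that the toolkit author writes the tree once; supporting a new OS means writing one new adapter, not touching every widget.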
In this episode of Corporate Treasury 101, we sit down with Philip Costa Hibberd and Jan-Willem Attevelt from Automation Boutique to explore how Robotic Process Automation (RPA) and Artificial Intelligence (AI) are transforming treasury and finance teams. From eliminating manual workflows to building scalable automation ecosystems, Philip and Jan-Willem explain how modern tech can unlock efficiency and empower treasury teams to focus on value-driving activities. Whether you're new to automation or considering a rollout in your organization, this is a treasury-forward deep dive worth tuning into.

Philip Costa Hibberd is the founder of Automation Boutique and a seasoned automation manager specializing in finance and treasury. Jan-Willem Attevelt is a technical expert in RPA, APIs, and AI, and brings extensive treasury experience to his work. Together, they help global organizations streamline operations through practical, no-nonsense automation strategies.

What You'll Learn in This Episode
What RPA is and how it fits into broader treasury automation efforts
Why attended vs unattended bots matter—and where AI fits into the mix
Practical use cases: from FX rate retrieval to multi-bank PDF parsing
Why RPA is "macros on steroids"—and what it can do that Excel can't
The risks of automating broken processes and why simplicity wins

Episode Breakdown & Timestamps
[00:00] – Introduction and AFP partnership ($100 off CTP certification)
[01:29] – Meet Philip and Jan-Willem from Automation Boutique
[03:02] – What is automation in Treasury, and why does it matter
[06:02] – What is RPA and how it works in a Treasury context
[08:04] – Security, attended vs unattended bots, and credential handling
[11:25] – How RPA and AI can work together in automation flows
[15:23] – Simple RPA use case: extracting data from PDF confirmations
[19:52] – Why RPA isn't always the best solution – when to say no
[24:17] – RPA vs Excel macros: key differences and advantages
[30:22] – How bots interact with applications and system UIs
[36:05] – Is RPA suitable for small businesses? Cost and scalability
[39:40] – Using Excel and Power Query as a starting point for automation
[43:48] – Skills needed to start with RPA (free training and tools)
[46:53] – How to maintain RPA implementations without deep tech skills
[51:19] – Drawbacks of RPA: automating messy processes creates bigger problems
[54:41] – Common Treasury RPA use cases: reporting & master data uploads
[56:51] – Real success story: automating bank deal confirmation processing
[01:09:13] – Future of Treasury: combining RPA with AI for decision-making
[01:10:51] – Where to learn more: UiPath, Microsoft, and Automation Boutique

Follow Philip Costa Hibberd & Jan-Willem Attevelt
Website: https://www.automationboutique.com/
Philip Costa Hibberd on LinkedIn: https://www.linkedin.com/in/philip-costa-hibberd/
Jan-Willem Attevelt on LinkedIn: https://www.linkedin.com/in/attevelt/

Follow Corporate Treasury 101:
Website:
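The deal-confirmation use case discussed in the episode reduces to simple pattern matching once a bot has extracted a PDF's text layer. A minimal sketch on invented data (the field names, layout, and sample confirmation are hypothetical, not from any real bank format):

```python
# Sketch of the "extracting data from PDF confirmations" use case: once the
# text layer is out of the PDF (an RPA bot's job), field extraction is plain
# regex work. The confirmation layout below is invented for illustration.
import re

CONFIRMATION = """\
FX DEAL CONFIRMATION
Deal ref: FX-2025-0042
Currency pair: EUR/USD
Amount: 1,500,000.00
Value date: 2025-07-15
"""

def parse_confirmation(text):
    fields = {
        "ref": r"Deal ref:\s*(\S+)",
        "pair": r"Currency pair:\s*(\S+)",
        "amount": r"Amount:\s*([\d,.]+)",
        "value_date": r"Value date:\s*(\S+)",
    }
    out = {}
    for name, pat in fields.items():
        m = re.search(pat, text)
        out[name] = m.group(1) if m else None
    # Normalise the amount into a number for downstream systems.
    if out["amount"]:
        out["amount"] = float(out["amount"].replace(",", ""))
    return out

print(parse_confirmation(CONFIRMATION))
```

In practice each bank needs its own field map, which is exactly the "multi-bank PDF parsing" problem the guests describe.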
Justin's migrating off Freshdesk, grumbling about outdated UIs and dreaming of integrated tools. He's knee-deep in Shopify, site redirects, and pondering injection moulding and SMED like it's a lifestyle. Meanwhile, Jem's grappling with KittaParts hardware complexity, designing 19" rack kits, inventing dowel-pivot hinges, and vibing with NotebookLM podcasts about ABB RAPID while secretly plotting to laser-engrave everything.
Watch on Youtube
DISCUSSED: ✍️ Comment or Suggest a Topic
Freshdesk "free" ending
Disposable software ꘎ Laura 3D viewer
Henry Holsters Tour - Audacity Micro ⠄ BEP
No door APC Brothers
SMED (Single Minute Exchange of Dies)
Lean station
Shipping area
LLM robot programming ꘎
Best part of owning business ⠄ Urg. Laura says the coffee.
Phew Shopify back to normal ꘎
PDX CNC Distributors
APSX PIM
Chatter Sync update Masso vs Multicam
Fricken Lasers ꘎ Notebook LM ꘎ - Learn ABB robots
Cities Skylines with Justin
Factorio - Factorio does not fill the niche. It creates the niche.
---
Profit First Playlist
Classic Episodes Playlist
---
SUPPORT THE SHOW
Become a Patreon - Get the Secret Show
Review on Apple Podcast
Share with a Friend
Discuss on Show Subreddit
Show Info
Show Website
Contact Jem & Justin
Instagram | Tiktok | Facebook | Youtube
Please note: Show notes contain affiliate links.
HOSTS
Jem Freeman, Castlemaine, Victoria, Australia
Like Butter | Instagram | More Links
Justin...
Agentic AI represents the next phase beyond generative AI, promising systems that not only generate content but also take autonomous actions within business processes. In a conversation recorded at Google Cloud Next, Kevin Laughridge of Deloitte explains that businesses are moving from AI pilots to production-scale deployments. Agentic AI enables decision-making, reasoning, and action across complex enterprise environments, reducing the need for constant human input. A key enabler is Google's newly announced open Agent2Agent (A2A) protocol, which allows AI agents from different vendors to communicate and collaborate securely across platforms. Over 50 companies, including PayPal, Salesforce, and Atlassian, are already adopting it. However, deploying agentic AI at scale requires more than individual tools—it demands an AI platform with runtime frameworks, UIs, and connectors. These platforms allow enterprises to integrate agents across clouds and systems, paving the way for AI that is collaborative, adaptive, and embedded in core operations. As AI becomes foundational, developers are transitioning from coding to architecting dynamic, learning systems.
Learn more from The New Stack about the Agent2Agent protocol:
Google's Agent2Agent Protocol Helps AI Agents Talk to Each Other
A2A, MCP, Kafka and Flink: The New Stack for AI Agents
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
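Discovery in A2A starts with an "agent card": a JSON document an agent publishes so other agents can find out what it offers. A minimal sketch of what such a card might look like (the field names follow public A2A examples but are illustrative, and the endpoint is hypothetical; check the spec for the exact schema):

```python
# Illustrative A2A-style agent card as a Python dict, serialized to the JSON
# form agents would actually publish. Field names and values are assumptions.
import json

agent_card = {
    "name": "currency-agent",
    "description": "Answers currency conversion questions",
    "url": "https://example.com/a2a",       # hypothetical A2A endpoint
    "version": "1.0.0",
    "skills": [
        {
            "id": "convert",
            "name": "Currency conversion",
            "description": "Convert an amount between two currencies",
        }
    ],
}

# Cards are served as JSON so any vendor's client can parse them.
print(json.dumps(agent_card, indent=2))
```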
Askari Metals Ltd (ASX:AS2) executive director Gino D'Anna talked with Proactive's Stephen Gunnion about the company's expanding exploration footprint in Africa. Askari Metals Limited is targeting opportunities across lithium, tin, tantalum, rubidium and gold, with a strategic focus on Namibia and Ethiopia. Executive director Gino D'Anna said the company's Uis project in Namibia remains its flagship asset due to its proximity to the long-standing Uis Tin Mine and access to key infrastructure including deepwater ports. “What we're seeing at Uis from an exploration perspective is large, scalable pegmatite, which are providing polymetallic mineralisation,” D'Anna said, noting strong results including up to 4% tin and nearly 6,000 ppm tantalum. D'Anna highlighted that Uis remains significantly underexplored and is now showing potential beyond lithium, with attention shifting to tin, tantalum, and rubidium. On Ethiopia, he described it as an exciting, underexplored jurisdiction with favourable mining legislation. Askari is exploring the Wayu Boda area along the Adola Greenstone Belt, which lies near multi-million-ounce gold deposits. D'Anna added that Ethiopia presents “the land of giants,” and offers strong geological promise. He also flagged active due diligence on advanced brownfields gold projects as part of the company's strategy to build out its gold position. D'Anna said investors can expect steady news flow from both Namibia and Ethiopia as drilling and early exploration works ramp up. Visit Proactive's YouTube channel for more interviews like this one. Don't forget to like the video, subscribe to our channel and turn on notifications for future updates. #AskariMetals #GoldExploration #TinMining #NamibiaMining #EthiopiaGold #CriticalMinerals #JuniorExploration #AfricanMining #Rubidium #Tantalum #GreenstoneBelt #ASXStocks #MiningInvesting #MineralExploration
Can AI really design your home? 800,000+ users have already ditched their architects thanks to Maket, a startup using AI to simplify renovations from floorplans to permits.
In this Spotlight episode, Maket founder Patrick Murphy delivers a mock version of the real pitch that secured $2M in funding from top investors including Reid Hoffman's Blitzscaling Ventures. Maket's platform uses generative AI to help homeowners and builders create zoning-compliant floorplans, visualize renovations, and streamline the entire design process—all without needing an architect. We break down what made Maket's pitch so compelling, challenge some of Patrick's assumptions, and unpack the real-world lessons for other founders pitching complex AI products. Along the way, we discuss the difference between simplifying a process and truly disrupting an industry—and why most tools built 'for professionals' fail to scale.
In this episode, you will:
Learn how Maket is automating architecture using generative AI
Hear how Patrick convinced VCs with a story-first pitch
Understand the strategic value of building a proprietary model
Explore the pitfalls of targeting industry incumbents too early
Discover how agent-based UIs are replacing SaaS 2.0
See why traction and team matter more than early monetization
Get tactical advice on presenting complex ideas to investors
If you're building in AI, pitching a vertical SaaS product, or just want a front-row seat to a startup reinventing home design—this one's for you.
The Pact: Honor the Startup Podcast Pact!
If you have listened to TSP and gotten value from it, please:Follow, rate, and review us in your listening appSubscribe to the TSP Mailing List to gain access to exclusive newsletter-only content and early access to information on upcoming episodes: https://thestartuppodcast.beehiiv.com/subscribe Secure your official TSP merchandise at https://shop.tsp.show/ Follow us here on YouTube for full-video episodes: https://www.youtube.com/channel/UCNjm1MTdjysRRV07fSf0yGg Give us a public shout-out on LinkedIn or anywhere you have a social media following Key linksThe Startup Podcast is sponsored by Vanta. Vanta helps businesses get and stay compliant by automating up to 90% of the work for the most in-demand compliance frameworks. With over 200 integrations, you can easily monitor and secure the tools your business relies on. For a limited-time offer of US$1,000 off, go to www.vanta.com/tsp .Get your question in for our next Q&A episode: https://forms.gle/NZzgNWVLiFmwvFA2A The Startup Podcast website: https://www.tsp.show/episodes/Learn more about Chris and YanivWork 1:1 with Chris: http://chrissaad.com/advisory/ Follow Chris on Linkedin: https://www.linkedin.com/in/chrissaad/ Follow Yaniv on Linkedin: https://www.linkedin.com/in/ybernstein/Producer: Justin McArthur https://www.linkedin.com/in/justin-mcarthurIntro Voice: Jeremiah Owyang https://web-strategist.com/
TanStack, a collection of popular open-source software libraries, is back in the news cycle this week with the announcement of TanStack DB. TanStack DB extends TanStack Query with collections, live queries, and optimistic UI mutations to keep UIs reactive, consistent, and lightning fast.
VS Code marks its 100th release of v1 with updates like: enabling Next Edit Suggestions (NES) by default, adding custom instructions and reusable prompts for a chat agent inside a project's .github folder, and new tools at the AI agent's disposal for better results.
There's a new component library available called Basecoat UI that claims to bring the magic of shadcn/ui with no React required. No matter if a website's built using HTML, Flask, Rails, or another JS framework, Basecoat uses HTML and Tailwind, and a hint of Alpine.js when needed, to provide accessible, modern components that are also compatible with shadcn/ui themes.
News:
Paige - Basecoat UI - framework agnostic component library
Jack - TanStack DB
TJ - VS Code 1.100
Bonus News:
Apparently we should all just f'ing use HTML
Parcel v2.15 jumps on the Rust bandwagon
Google is testing a new "AI Mode"
Google's logo change
Max once again becomes HBO Max
What Makes Us Happy this Week:
Paige - House of Earth and Blood (#1 in Crescent City series)
Jack - Grand Sumo May 2025 Tournament
TJ - Coast of Michigan
Thanks as always to our sponsor, the Blue Collar Coder channel on YouTube. You can join us in our Discord channel, explore our website and reach us via email, or talk to us on X, Bluesky, or YouTube.
Front-end Fire website
Blue Collar Coder on YouTube
Blue Collar Coder on Discord
Reach out via email
Tweet at us on X @front_end_fire
Follow us on Bluesky @front-end-fire.com
Subscribe to our YouTube channel @Front-EndFirePodcast
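The "optimistic UI mutation" idea TanStack DB builds on can be illustrated independently of the library: apply the change to the local collection immediately so the UI stays responsive, then roll it back if the server write fails. This is a generic sketch of the pattern, not the TanStack API:

```python
# Generic optimistic-mutation pattern (the idea TanStack DB applies to keep
# UIs reactive), sketched on a plain dict. Not the TanStack API.
def optimistic_update(collection, key, new_value, server_write):
    previous = collection.get(key)
    collection[key] = new_value          # UI sees the change instantly
    try:
        server_write(key, new_value)     # may raise on failure
    except Exception:
        if previous is None:
            del collection[key]          # roll back an optimistic insert
        else:
            collection[key] = previous   # roll back an optimistic edit
        raise

todos = {"1": "buy milk"}
optimistic_update(todos, "1", "buy oat milk", lambda k, v: None)  # succeeds
try:
    # Simulated failing server write: the raised error triggers rollback.
    optimistic_update(todos, "2", "new item",
                      lambda k, v: (_ for _ in ()).throw(RuntimeError("offline")))
except RuntimeError:
    pass
print(todos)  # only the successful change survives
```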
Sometimes you see a product that you think might just have a chance in this weird comics ecosystem to make a positive dent in creator sales numbers, and I think I've just found one. There's a new kid in town in the digital comics reader space: Sweet Shop. Kenny Meyers is the mastermind behind Sweet Shop, a digital reader platform with a slick interface; Bindings, a simplified comic creator portal designed to make connection and direct sales simpler; and Find Your Comic Store, where you can use an online map to do exactly as the name suggests. All great new products with fantastic UIs that make reading digital books a more enjoyable experience and help you, and creators, connect in sleek ways never seen before. Kenny calls Bindings a fancy LinkTree, but he's being humble. He joins me today as a first-time guest on the show to talk about it all, including the exciting topic of metadata... ok, maybe not exciting, but important. Seriously, you are adding in metadata, right? Make sure to check out their website and sign up for the newsletter to be on the front end of things later in 2025 as more publisher announcements will be hitting soon.
PATREON
We have a new Patreon, CryptidCreatorCornerpod. If you like what we do, please consider supporting us. We've got two simple tiers, $1 and $3. Want to know more, you know what to do.
ARKENFORGE
Play TTRPG games? Make sure to check out our partner Arkenforge. Use the discount code YETI5 to get $5 off your order.
THE LANTERN CATALOG
Created on the premise of creating light in the dark, this is the go-to resource to keep you up to date on the indie projects and the creators you love. You can find them at https://www.thelanterncatalog.com/.
Make sure to check out our sponsor 2000AD. Learn more about your ad choices. Visit megaphone.fm/adchoices
We talk a lot about AI, particularly how it affects UIs and functionality (or doesn't), and its possible use in moviemaking.

The WannaPractice app: www.wannapractice.com (use the code "WP20PER2024" at registration for 20% off the regular price)
The WannaBeA training material: wannabeacissp.com
Many thanks to co-host Matt Snoddy! Go buy consulting services from him: http://www.networktherapists.com/
You can now sponsor The Sensuous Sounds Of INFOSEC! Buy us a gallon of gas here: https://www.buymeacoffee.com/securityzed
Visit our friend and co-host Raphty here: https://raphty.com/
Codey and Kevin talk through all the recent news. Also bugs again.

Timings
00:00:00: Theme Tune
00:00:30: Intro
00:02:17: What Have We Been Up To
00:19:17: I Know What You Released Last Month
00:26:44: Upcoming Game Releases
00:30:40: Game Updates
00:40:52: New Games
00:51:08: Other News
01:12:01: Outro

Links
Harvest Moon Double Pack for Switch Release Date
Cattle Country Release Date
To Pixelia Release
Hello Kitty Island Adventure “Friends, Furniture, and Frozen Peaks” Update
Hello Kitty Island Adventure Month of Meh
Farlands “0.5” Update
Sunseed Island
Starsand Island
Mudborne Soundtrack
Tales of the Shire Store
Wholesome Direct
ConcernedApe Interview
New Lego Animal Crossing Sets

Contact Al on Mastodon: https://mastodon.scot/@TheScotBot
Email Us: https://harvestseason.club/contact/

Transcript
(0:00:30) Kev: Hello farmers and welcome to another (0:00:33) Kev: Contractually obligated episode of the harvest season that’s not true at all. There’s no contracts whatsoever (0:00:36) Codey: Not true. (0:00:40) Kev: By no one, have you ever seen Paul Blart, the mall cop movie? (0:00:45) Codey: Uh, for not for a hot minute. (0:00:49) Kev: Well, I’m just reminded like he he there’s this plaque (0:00:53) Kev: You know mall security whatever and he just made it himself for himself and that’s all that I’m thinking (0:00:57) Codey: Okay. (0:01:00) Kev: Contract I drafted for myself. Oh (0:01:03) Kev: Anyways, hello. My name is Kevin (0:01:05) Codey: And I am Cody. (0:01:07) Kev: And we are here today to talk about cottagecore games per the first section for the party of the first part. Oh, yeah (0:01:12) Codey: A-wooo! (0:01:15) Codey: Ow-ow-ow! (0:01:17) Kev: You go (0:01:19) Kev: I don’t know Ric Flair, but I feel like I should try to mimic kids. I might be mimicking these woo already (0:01:25) Kev: I don’t know, but I just know he does the woo (0:01:28) Codey: No, I, I do not, I do not wrestle.
(0:01:28) Kev: You don’t talk about wrestler guy (0:01:30) Kev: He does booze (0:01:32) Kev: Yeah, there’s a wrestler. He’s kind of an older guy. I think he’s actually like running WWE now, but anyways (0:01:38) Codey: That’s a choice. (0:01:39) Kev: Hello everyone (0:01:42) Kev: It is (0:01:44) Kev: Okay today, it’s all it’s just another news episode (0:01:48) Kev: Life is chaotic. Cody actually managed to get on thankfully after actually surviving the wilderness for this past weekend (0:01:52) Codey: Mm hmm. Mm hmm. I will. We’ll talk we’ll talk about that. I can. Yeah. So it’s big, big old news episodes. We got lots of stuff to talk about. We this is the first of this month. So this will be our I know what you released last month. Boo. Episode as well. But what we have been up to. So this farmers has been a (0:01:57) Kev: but it (0:01:58) Kev: Hey, Oleg. (0:02:00) Kev: Well, do you want do you want to open because obviously okay. All right, let’s just let’s get into it (0:02:16) Kev: Yeah. (0:02:22) Codey: wild ride. So this last week, on Tuesday, I was working at the (0:02:28) Codey: Wildlife Center, it was going great, no issues. And we we (0:02:32) Codey: heard that there was going to be this huge storm coming through. (0:02:35) Codey: And that there’s probably going to be a lot of animals like (0:02:38) Codey: thrown out of trees. And so we were probably going to get a lot (0:02:41) Codey: of animals from people. And just kind of be ready for that. And (0:02:45) Codey: we were like, Okay, and then, like seven o’clock, power goes (0:02:52) Codey: on. And I had gone outside at one point, because we knew this (0:02:55) Codey: storm was happening. But we the our wildlife center is in the (0:02:57) Codey: basement of the owner’s house. Answer. p.m. p.m. crucial (0:02:58) Kev: Okay, wait question you say it’s 7 o’clock a.m. Or p.m.. Okay. All right. This is all right (0:03:05) Kev: There’s still light at that time, but okay. 
Yeah, all right (0:03:06) Codey: information. Yeah. So I had gone outside, just like poked my (0:03:10) Codey: head outside. And it sounded like a frickin jet engine. It (0:03:13) Codey: was so loud. And I was like, Yep, it sure is storming. And I (0:03:17) Codey: told the other people to go check it out. They poked their (0:03:20) Codey: heads out. They were like, “Yep, sure, it’s working.” (0:03:22) Codey: And so we were like, “Okay.” And then, yeah, not long after that, power goes out. (0:03:28) Codey: And so we’re feeding, we’re like laughing about it, whatever, we have like little lights and everything. (0:03:33) Codey: We’re feeding baby squirrels with lamps with like headlamps on and stuff and doing just us, I think. (0:03:36) Kev: Very cute. (0:03:37) Kev: The pictures, I’ve seen the pictures. (0:03:42) Kev: Wait, did you have the lamps (0:03:43) Kev: or did the squirrels have the lamps? (0:03:44) Kev: Both. (0:03:46) Codey: Yeah, it was just us. But there were like little, the owner has like lanterns everywhere. (0:03:51) Codey: We found out that the. (0:03:52) Codey: Um, sinks are run. (0:03:56) Codey: There’s a pump that takes the stuff from the sinks. (0:03:59) Codey: So then we weren’t even able to do dishes. (0:04:02) Codey: So then we were just kind of like sitting there waiting for her to put the backup (0:04:06) Codey: generator on took like 20, 30 minutes. (0:04:08) Codey: And then when we had backup generator, it was great. (0:04:10) Codey: Like no issues. (0:04:12) Codey: Um, Jeff, my partner texts me and is like, Hey, we don’t have power. (0:04:15) Codey: I’m like dope. (0:04:17) Codey: When I leave, I see all of the messages and basically it was a derecho storm (0:04:23) Codey: I don’t know how you say it. (0:04:24) Codey: We had like 90 mile an hour winds and there were trees, um, that were (0:04:30) Codey: straight up uprooted and like power lines that went across the road. (0:04:32) Kev: Oh, that’s sick. 
(0:04:38) Codey: Uh, animals were indeed. (0:04:38) Kev: Animals were indeed thrown out of the trees (0:04:41) Kev: with said trees. (0:04:42) Codey: Yes, they were. (0:04:43) Kev: With said trees. (0:04:45) Codey: We, I have not worked since that day and I, the messages have been going crazy, (0:04:50) Codey: but I have had other stuff to do. (0:04:52) Codey: So yeah, we’re out of power and we’re like, okay, this is probably just like, (0:04:57) Codey: they just got to put it back up every now and then we’ll go out of power (0:04:59) Codey: for like 10 or 15 minutes. (0:05:00) Codey: It’s not a big deal. (0:05:01) Codey: Um, nope. (0:05:03) Codey: Next morning we wake up and on our like weather outage app, it’s like, uh, (0:05:09) Codey: time estimated time to restoration, to restoration of, of power unknown. (0:05:16) Kev: Big old shrug (0:05:18) Codey: Yeah. (0:05:18) Codey: Big old shrug. (0:05:19) Codey: And they, it was like right down the street, like there was a power line (0:05:22) Codey: that was just straight up across the road. (0:05:24) Codey: And it was like that for a day and a half. (0:05:28) Codey: So like over 200,000 people in Western Pennsylvania, we’re just like out of power. (0:05:34) Codey: Um, I was out of power for two full days. (0:05:37) Codey: We had to throw away all of our food. (0:05:39) Codey: Um, and I was like really grumpy because I had planned on doing all of my chores (0:05:45) Codey: that Wednesday, I was so excited. (0:05:46) Kev: Mmm, mmm. (0:05:48) Codey: that includes like vacuuming and dishes and laundry. (0:05:53) Codey: I couldn’t do any of that stuff. (0:05:54) Codey: And yeah, it was like the whole town, except for they kicked power (0:05:57) Codey: on for like some of the businesses, some of the grocery stores. (0:06:00) Codey: Cause they knew everyone was going to need grocery store stuff. 
(0:06:02) Codey: So they made sure that those lines were up and that the like hospital was up and (0:06:07) Codey: stuff, but most of the residences didn’t have power for at least two days. (0:06:12) Codey: So that was crazy. (0:06:13) Codey: That was, so that was a reason, um, when they were like, Oh, you (0:06:17) Codey: want to do news this week? (0:06:18) Codey: I was like, I still have a lot of stuff to get caught up. (0:06:22) Kev: Yeah, well, appreciate it, you know, that you managed to strap the baby squirrels to (0:06:23) Codey: I don’t know, but it’s fun. (0:06:25) Codey: Um, (0:06:30) Kev: a hamster wheel to power your computer to record this. (0:06:31) Codey: yeah, yeah, they, some of them, the red squirrels would love that actually. (0:06:34) Kev: I’m sure they would. (0:06:38) Codey: Um, yeah. (0:06:39) Codey: So I’m, I did that. (0:06:40) Codey: That was me for two days. (0:06:41) Codey: I was straight up roughing it. (0:06:43) Codey: Um, also I went to a nursery today at plant nursery and I got four different (0:06:50) Codey: species, not species, four different varieties. (0:06:52) Codey: of peppers. I got some herbs and I got some tomatoes so it about to be in real (0:06:56) Kev: ooh peppers okay (0:07:03) Codey: life farming for me. Yeah. (0:07:05) Kev: all right that’s good stuff so you know back back in many many moons ago when I actually (0:07:12) Kev: lived in a house with the backyard um we we had a garden and we had and all basically all those (0:07:18) Kev: things um tomatoes and peppers I forget the other one but um but those are good ones to grow they’re (0:07:20) Codey: Yeah, and I also got kale as well. Yeah, so those are all going to grow in my back. I have like a little area that’s kind of closed off. So I’m going to give it a try because we got a shot, a glimpse of what it looks like to be in a post-apocalyptic scenario with no power and having to live on our own, you know. 
(0:07:23) Kev: hardy and you can get a lot of them, you know. (0:07:26) Kev: There you go. (0:07:43) Kev: Mm-hmm. Off the land. Yep. Yeah. Well, um, that’s pretty, well, the power outage thing’s (0:07:50) Codey: Like off the land. So I also, I wanted to garden anyway, but it’s a joke. (0:08:00) Kev: not cool. Like, that’s wild. I’ve never had an, I’ve extended power outage, I guess, like (0:08:04) Codey: Yeah, it was it was cool for a while, I just realized like I listeners I entreat you to take a moment, you can pause this for like a minute or something after I post this, think about what you do that requires electricity, and then just don’t touch that for a day. (0:08:05) Kev: couple hours. So that’s, that’s a lot to hear, but I’m glad you’re out of it. Okay. Son’s (0:08:26) Kev: I mean, yeah, everything. (0:08:29) Kev: We’re the brain rot, terminally online. (0:08:34) Kev: But even aside from that, yeah, I’m (0:08:37) Kev: living in an apartment complex. (0:08:39) Kev: Everything is electric for me. (0:08:40) Codey: Yeah, I got all like yep alt might our stove is electric all of our lights clearly are electric (0:08:42) Kev: Even my stove, I’d be out. (0:08:44) Kev: Yep. (0:08:49) Codey: Couldn’t do any cleaning couldn’t do like I was trying to use my phone as little as possible (0:08:56) Codey: but I could take it like I could jump in my car and like (0:08:56) Kev: Mm hmm. Mm hmm. Yeah. Yeah. (0:09:00) Codey: Drive around but I’m sure there were probably gas shortages because everyone was trying to power their generators (0:09:07) Codey: But yeah, I was like wow a lot of my life is (0:09:11) Codey: That so I did a lot of yard work (0:09:13) Kev: Yeah, yeah, I’m sure the the challenge I think would for at least in my scenario would be the food because (0:09:20) Kev: You know, we we’re you can’t go out because restaurants are gonna be down too, right?
So (0:09:26) Codey: A lot of the restaurants were down and then the second day when some of the stuff had jumped back up, all the restaurants were swamped because everyone was going out to the restaurants. (0:09:34) Kev: Yeah, of course, of course, of course, yeah, but yeah (0:09:38) Codey: Yeah, so you can’t, you can’t really do that. (0:09:42) Kev: So you did non (0:09:43) Kev: perishable stuff that’s ready to eat and won’t go bad. So like, I hope you have a lot of fruit, (0:09:44) Codey: Yep. (0:09:48) Codey: It was boring. (0:09:50) Kev: I guess a cereal I don’t know because you can’t even have the cold milk. Nevermind. (0:09:52) Codey: I can’t yeah our milk went bad real fast, so. (0:09:56) Kev: Yeah, no. Um, yeah, I don’t even like break out the the spam. I don’t even know. (0:09:56) Codey: Yeah. (0:10:06) Codey: I pretty much, I got chips. (0:10:07) Kev: Can you eat spam out of the can? Yeah. (0:10:09) Codey: I got like a thing of chips. (0:10:10) Codey: So like of little individual packages of chips. (0:10:13) Codey: So I have that. (0:10:14) Codey: I bought Chewy bars, like granola bars. (0:10:16) Kev: Oh, all of those are good. Yep. One of the bars. That’s good. Yeah. (0:10:19) Codey: And that’s all I bought. (0:10:22) Codey: Because then after that, we just got like some fast, (0:10:26) Codey: but even the fast food lines were like so long. (0:10:30) Kev: Yeah, of course. Of course. (0:10:30) Codey: It was crazy. (0:10:33) Codey: Yeah. (0:10:33) Codey: The only other thing that I’ve been up to (0:10:37) Codey: Breath of the Wild. (0:10:39) Codey: And then now that we have power again, (0:10:41) Codey: suddenly there’s a new season of Fortnite (0:10:43) Codey: and it is all Star Wars themed. (0:10:46) Kev: Oh, is is Glup Shitto in it? (0:10:47) Codey: So I don’t know what that is. (0:10:51) Codey: I’m not a Star Wars human. (0:10:53) Codey: I don’t like Star Wars. (0:10:54) Codey: So you’re gonna send me a picture.
(0:10:56) Kev: OK, hold on, let me just pronounce it correctly. (0:10:59) Kev: No, no, it’s it’s so it’s a it’s a meme (0:11:05) Kev: just because. (0:11:07) Kev: Yeah, because obviously Star Wars have all these goofy dumb names or whatever. (0:11:12) Codey: Mm-hmm (0:11:12) Kev: There’s the running gag of Glup Shitto of just being a made-up name-slash- (0:11:16) Kev: character and always saying, “Oh, I love Glup Shitto. He’s my favorite Star Wars (0:11:20) Kev: character just because he sounds like a real one.” But yeah, Star Wars. I mean, that makes sense that (0:11:22) Codey: Got it, okay, cool (0:11:28) Kev: the third one movie came out. I’m not that big of a Star Wars person, but yeah, sure. Why not? (0:11:34) Codey: Yeah. So that is the short version of what I expected to do. Oh, what have you got, Toucan? (0:11:35) Kev: I mean, it’s cool. I respect it. I just, I’ve never really watched much, many of the movies. (0:11:42) Kev: Yeah. (0:11:49) Kev: She didn’t get into how she had to kill the bear for survival. (0:11:54) Codey: Yeah, didn’t get into any of that. Or my knitting. I’ve been knitting a lot, but it’s fine. You (0:11:58) Kev: Ooh, ooh, ooh, ooh, good stuff. All right. Well, over here, I have had power. So, (0:11:58) Codey: You have to go to the Slack for that information, kids. (0:12:08) Kev: you know, a different experience. Going to roll that in. Just going to show off all the power I (0:12:10) Codey: - Wow. (0:12:12) Kev: have here. Okay. Okay. (0:12:16) Kev: So it’s been busy week. So it’s Zenless Zone Zero. I mean, I’ve played daily. I don’t talk about it much because most of the time it’s like it’s a new character I like about, but nobody cares about. So whatever. But this past update this past week or the week before was a big one. Because it was the final, final update for season one stuff. Season one had this whole storyline plot lines connected.
And so the next update, we’re going to get a whole new (0:12:46) Kev: batch of stories and characters and all new no, no, no dangling threads, really. So it’s good. That’s, that’s kind of cool. And the character that dropped with this update, her name is Vivian. She is a kind of like Victorian gothic, you know, big, not super big. She’s got a dress with a big, like, almost hoop skirt looking thing and an umbrella, you know what I’m talking about. And she’s got pointy ears. So (0:13:16) Kev: it’s kind of vampire ish gothic, like I said, style that they’re going for her. And so she’s fun. But what I really like about her is that under the dress, there’s actually a bunch of rockets strapped under there. So she’ll launch yourself into the sky, and then launch yourself down at people, which is really funny. So yeah, she’s she’s great. I love Vivian. She’s also hilarious, because she’s something of a fanboy of the main character that you’re playing as. And it’s really, really funny. (0:13:26) Codey: Okay. (0:13:44) Kev: But anyways, yeah. (0:13:46) Kev: I’m just they did show a trailer for season two and there’s all sorts of new characters and stuff including a playable panda, which I’m looking forward to but but yeah, that’s that’s that’s my gacha corner for myself. (0:14:01) Kev: Let’s see the other okay I picked up a new game this week actually. (0:14:05) Kev: Ratopia, do you do you remember this one? Oh, we’ve talked about it on the show. I don’t know if I’m in the episode you were on, but are you familiar. (0:14:06) Codey: Mm-hmm (0:14:11) Codey: Yeah, yeah (0:14:14) Kev: So, yeah, so for people who may not. (0:14:16) Kev: I don’t remember, it’s very Terraria like, right, like the 2D slice of earth, you see everything, but you control a rat princess queen, and you are establishing a new rat city village kingdom thing.
(0:14:34) Kev: So it’s great because it’s, again, very Terraria like, but as the leader, you get to command the people who move into your town, you actually get to command them to go. (0:14:46) Kev: Do this job, do, you know, mine here, do this, you’re now the logger, whatever. (0:14:48) Codey: Hmm. (0:14:52) Kev: So that part is really cool. That really excited me because I love when you actually get to command people and they do stuff in these games, right? (0:14:58) Codey: Yeah. (0:15:00) Kev: And I like rats. The art style is cute. But, you know, I like little critters in general. So that was fun. (0:15:08) Kev: And the game’s good, but it also can be really hard. It turns out running a kingdom is really hard because sometimes. (0:15:12) Codey: Mmm (0:15:16) Kev: You have to defend from zombie rats invading your town. You have to manage an economy. Do you know how fun that is to figure out how much you should be taxing or subsidizing? (0:15:26) Codey: Nope. (0:15:28) Codey: That does not sound fun. (0:15:30) Kev: It’s wild. I’ve had to start over a few times because I just backed myself into a corner of unsavable. I destroyed the kingdom. (0:15:42) Kev: So, yeah, that, you know, kind of like the real life car, I mean, it’s just… (0:15:46) Kev: I can just restart the politics, yeah. (0:15:53) Kev: But yeah, that’s Ratopia. (0:15:54) Kev: Overall, it’s a great game, two thumbs up from me. (0:15:58) Kev: I hear other people are interested in this game (0:16:00) Kev: and may talk about it and play it (0:16:02) Kev: and discuss it at length somewhere. (0:16:05) Kev: So yeah, keep an eye out for that. (0:16:08) Kev: And the other one, so lastly, not game specific, (0:16:14) Kev: But, uh, oh. (0:16:16) Kev: Uh, this past, a couple of days ago, I went, I went out to the club. (0:16:20) Kev: I don’t go very often, but I liked the club. (0:16:22) Kev: Um, I liked to dance. (0:16:24) Kev: Um, I, yeah. (0:16:24) Codey: Okay, awesome!
(0:16:26) Kev: Um, so it was a solo trip. (0:16:29) Kev: It was just me. (0:16:31) Kev: Um, I was like, you know what? (0:16:32) Kev: I’ve been, I’ve had a very stressful couple of weeks. (0:16:34) Kev: I was like, you know, I’m going to get away for a day and go for a night out. (0:16:38) Kev: And then I had fun or whatever. (0:16:40) Kev: Um, I went to the, uh, club by the. (0:16:46) Kev: Area I went to is very college heavy. (0:16:48) Kev: Um, so, um, I, at first when I got there, I think I got there, no, not I think (0:16:48) Codey: Mm-hmm. (0:16:53) Kev: I did get there a little too early. (0:16:54) Kev: So I thought, you know what? (0:16:55) Kev: Maybe this was a bad call. (0:16:56) Kev: Cause you know, the semester’s are ending for colleges. (0:16:59) Kev: I don’t know. (0:16:59) Kev: People aren’t going to be there. (0:17:00) Kev: Um, but it did populate eventually. (0:17:03) Kev: Um, and now I’m only 30, what am I two? (0:17:04) Codey: Mm-hmm, mm-hmm, yeah, yeah, it doesn’t matter anymore. (0:17:08) Kev: No, yeah, 30. (0:17:09) Kev: Yeah, 30. (0:17:09) Kev: Yeah. (0:17:09) Kev: No, I forget after 26, I lose track. (0:17:12) Kev: Um, yeah, yeah, right. (0:17:14) Kev: So I’m. (0:17:16) Kev: Not, yeah, yeah. (0:17:17) Kev: So I’m not old, but especially in this college heavy area, I feel old (0:17:20) Kev: man at the club, just, just a wee bit. (0:17:24) Kev: I’m laying out there in my slacks and a little vest. (0:17:26) Kev: I’m looking, you know, I’m feeling snappy, but everyone else are the (0:17:30) Kev: majest majority of people are college aged kids with t-shirts and the shorts (0:17:34) Kev: and whatever, so I’m sticking out. (0:17:36) Kev: Um, but, uh, but overall I still had fun. (0:17:39) Kev: Um, I just, okay, here’s my biggest old man thing. (0:17:44) Kev: All right, embrace yourself. (0:17:44) Codey: Mm-hmm. (0:17:46) Kev: Um, I, um, so I’m a single guy, right? (0:17:46) Codey: Okay. 
(0:17:49) Kev: And, you know, obviously with heading out to the club, one of the things is like, (0:17:53) Kev: you know, maybe I’ll find someone to dance with, right? (0:17:56) Kev: Cause that’s, that’s just the thing. (0:17:57) Kev: Or so I thought because, uh, I struck out all night and that’s, that’s fine. (0:17:58) Codey: - Mm-hmm. (0:18:02) Kev: That happens. (0:18:03) Kev: And you know, whatever, but after talking to other people, apparently the scene (0:18:07) Kev: has kind of changed, like everyone’s just going up in groups or, or people very (0:18:14) Kev: We rarely actually (0:18:16) Kev: like reach out to strangers or accept the strangers to dance with and apparently you (0:18:23) Kev: gotta like you know meet up beforehand or you know you’re meet up online and decide (0:18:28) Kev: to do so or whatever but that’s just that’s that again that’s just me old manning here (0:18:32) Kev: like I’m I’m just shocked to see that the the the scene has changed as it will or as (0:18:39) Kev: it were (0:18:39) Codey: - Dating, dating is awful. (0:18:43) Codey: Yeah, I, when I was dating, (0:18:46) Codey: even like five years ago, I hated it. (0:18:48) Codey: - Yeah, no, it was awful. (0:18:50) Kev: Yeah, yeah, but um, but yeah, that’s just like I’m I’m too unhip to (0:19:00) Kev: Yeah, I’m I’m out of it. I’m it’s it’s all the zoomers. I can’t keep up with them (0:19:07) Kev: That their Fortnite dances in the floss (0:19:11) Kev: All right, that’s all I got (0:19:15) Kev: Alright, let’s get to let’s get to I know what you released (0:19:20) Kev: last month or monthly segment where we talk about stuff that just came out, but before we do, (0:19:23) Codey: No, I didn’t know there was a new one. (0:19:24) Kev: did you see the trailer for I know what you did last summer the new one?
(0:19:30) Kev: Yeah, our you know, obviously what we’re riffing off here for the title segment that movie got a (0:19:37) Kev: I don’t sequel reboot thing it’s it’s it’s it’s that trend where it’s reusing the original title (0:19:44) Kev: just called I know what you did last summer and it’s you know the same premise but oh look there’s (0:19:50) Kev: original character who is it Sarah Michelle Gellar I think it is I don’t remember the original actress (0:19:56) Kev: from the original oh and she’s there older and she’s gonna be helping out and and you know it’s (0:20:01) Kev: it’s just the rehash that Hollywood has been doing with all especially a lot of horror movies it feels (0:20:05) Kev: like but yeah I just just want to point out that I got a trailer I haven’t watched any of the other (0:20:12) Kev: ones so whatever but it’s just interesting (0:20:14) Codey: Yeah, I don’t mind like when they do that stuff. So I really love the scream franchise. (0:20:20) Kev: yeah the scream one seemed to do I hadn’t seen the newer ones but that seemed pretty well done how (0:20:26) Kev: they did it yeah all right there you go that first scream is really good like I get it when (0:20:28) Codey: So yeah, 10 out of 10. (0:20:36) Codey: Yeah, we. (0:20:36) Kev: I watched it the first time I was like oh okay I get it um okay uh all right not for games that (0:20:45) Kev: Come on, non-horror games, cottagecore games, if you don’t like those, do you? (0:20:45) Codey: Mm-hmm, yeah. (0:20:50) Kev: If you like those movies, you might like this. (0:20:52) Kev: Alright, first of all, Bugaboo Pocket, what I call the Bugagotchi game, because it’s 2D pixel style, very detailed, very gorgeous looking sprite work of bugs, and you pet them and do games with them. (0:21:02) Codey: Mm hmm. It was so good. I’m like still so tempted to get that but I will I will I will hold off. (0:21:18) Kev: There’s, there is a lot I will say.
There’s Tarot, there’s also it looks like a Fruit Ninja game, and a lot of petting of bugs, they’re very cute but yeah this, yeah, it looks, well maybe you should get the full release so, so it is as long as you want it to be. (0:21:26) Codey: Mm-hmm (0:21:31) Codey: Yeah, I played the demo the demo was super good it’s just wasn’t as long as I wanted it to be (0:21:43) Kev: Oh, if you look on their Steam page, they have a big, like, Nintendo seal of approval. (0:21:48) Kev: So now on the flip side, something that is more pocket, as in Polly Pocket, because Tiny (0:22:09) Kev: Garden came out. The Polly Pocket game, or inspired game, it is a game, not a physical. (0:22:18) Kev: Thing that you can open, and you garden inside your little Polly Pocket world, and you kind of (0:22:23) Kev: rearrange and decorate and all that good stuff. I have not played this game. It looks a lot like a (0:22:32) Kev: game called Garden Galaxy that I did play, which was fine. I think it looks very cute, this Tiny (0:22:38) Kev: Garden. It looks well done, but yeah, go check that out if you’re interested. The gardens are (0:22:46) Kev: So very cute, I will say that. (0:22:49) Kev: Um, let’s see here. (0:22:51) Kev: Next up, we have Oppidum, I think. I think I say it different every time. (0:22:54) Codey: Mm-hmm. (0:22:57) Kev: It is the, what I call, multiplayer Palworld-esque equivalent of Breath of the Wild, because it’s open world sandbox-y. (0:23:09) Kev: And you know, it has all your usual survival, whatever. (0:23:13) Kev: Um, but the combat, the combat is just so very clearly Breath of the Wild-ish. (0:23:18) Kev: Feeling, um, which is fine because, you know, that’s a good game to take inspiration from. (0:23:24) Kev: But, uh, but yeah, um, I, you know, a full 3d game like this being multiplayer co-op, (0:23:31) Kev: that is pretty cool. I will give them that.
Um, so I don’t know if I’m going to try it just because (0:23:36) Kev: I don’t have room for it right now, but, uh, good on, uh, good on, uh, E.P. games is the (0:23:42) Kev: dev. He came out with it and it’s finished. Um, well, I say it’s finished. It’s really (0:23:48) Kev: released. Um, yeah, and multi multiplayer Breath of the Wild, they’re, they’re just (0:23:54) Kev: sound like a neat niche for that. So, um, yeah, good on them. Uh, and then let’s see, (0:24:00) Kev: lastly on our list, we have Dinkum 1.0. This is our survive, again, survival-less life sim game, (0:24:09) Kev: whatever. Uh, but this time you’re, uh, you’re in Australia. Um, so you farm, hunt, mine, fish, (0:24:14) Kev: all that good stuff do do all the Australian things is our (0:24:18) Kev: friend mark and it may be because as we all know, (0:24:21) Kev: Australia is a small little island nation and almost really (0:24:25) Kev: the people they all know each other and just they basically (0:24:26) Codey: They should, they all know each other. (0:24:31) Kev: walk by each other every day on their way home. But anyways, (0:24:36) Kev: but yeah, look, Australia is very cool. I always I adore like (0:24:39) Kev: reading about as a kid who loved wildlife and animals and Steve (0:24:43) Codey: Mm-hmm. (0:24:44) Kev: Irwin, right? Like Australia was very big in the 90s. And I’ve (0:24:46) Codey: Mm-hmm. (0:24:47) Kev: read and bought a lot of (0:24:48) Kev: Australia. (0:24:49) Codey: Have you seen some of the new photos that Robert Irwin posted? (0:24:54) Kev: No, what does he do? (0:24:54) Codey: Mm-hmm. (0:24:56) Kev: What is he doing? (0:24:58) Codey: Don’t look at it. (0:25:00) Kev: Wait, don’t why not? (0:25:02) Codey: Unless you are a housewife that is bored. (0:25:07) Codey: He posted a risque. (0:25:08) Kev: Oh, dear. (0:25:10) Codey: No, it wasn’t. (0:25:11) Kev: Oh, my gosh, I see. (0:25:11) Codey: It wasn’t risque. (0:25:13) Codey: It was just Robert Irwin in underwear.
(0:25:14) Kev: No, I get it. (0:25:17) Kev: I i didn’t yeah (0:25:18) Kev: oh that’s uh yeah that is underwear those are boxers yep I see it (0:25:18) Codey: But Instagram was going insane. (0:25:23) Kev: with him holding it looks like a gila monster or something (0:25:26) Codey: Yeah, he was also holding a snake at some point. (0:25:27) Kev: yeah oh wait oh I see oh that snake that’s a different one yeah (0:25:29) Codey: Yeah. (0:25:31) Codey: He could have a whole calendar. (0:25:34) Codey: And I’m sure he’d make millions. (0:25:36) Codey: Yeah, no, I think that the idea of a survival game in Australia– (0:25:37) Kev: oh that’s good (0:25:43) Codey: why hasn’t that happened yet? (0:25:46) Codey: Why has it taken this long? (0:25:46) Kev: Yeah, you’re right. And it’s an Australian pleasant places. This is just my daily lives. (0:25:58) Kev: This is just the Sims. This isn’t anything special. No, but but yeah, I mean, all you know, (0:26:05) Kev: all joking, the Australia danger death zone aside, you know, it has the wide variety of unique flora (0:26:13) Kev: and fauna that makes for I think doesn’t make great for (0:26:16) Kev: great setting for this kind of game although you know fighting the boss shark is kind of ridiculous (0:26:22) Kev: but in a fun way um so yeah that’s Dinkum 1.0 um it is again much like Oppidum you can play co-op (0:26:30) Kev: with people um that is out right now um 1.0 um there’s lots of stuff go check out the link you (0:26:39) Kev: know where to find all that stuff um okay yep there you go let’s talk (0:26:43) Codey: And that’s what released last month. (0:26:47) Kev: stuff that didn’t release next month but supposedly releasing in the future supposedly (0:26:54) Kev: all right Harvest Moon double pack of ports um we have Skytree, uh, Lost Valley and Skytree Village (0:27:04) Kev: being released uh as a double pack um for uh let’s see I don’t know switch is that what it is okay (0:27:13) Kev: Yeah that makes sense.
It is getting a… (0:27:17) Kev: A physical release through the Natsume store, that’s where it is, yeah through the Natsume store. (0:27:24) Kev: You can get a physical copy including an acrylic standee which I don’t think looks particularly noteworthy or nice, but there you go if you’re interested. (0:27:30) Codey: Yeah, but I mean maybe if this was a game that someone really liked then that’s that’s cool for them (0:27:35) Kev: Yeah, yeah, but do those people exist though? (0:27:37) Codey: But I (0:27:39) Codey: Don’t know not to me just trying to milk the cash cow as long as they can (0:27:45) Kev: the literal cash cow. (0:27:46) Codey: Yeah (0:27:48) Kev: I will give them this. They’ve done a dang good job because it’s been a hot minute since they just did the localization, (0:27:56) Kev: but they’re still running. It is so good. I respect that hustle. (0:28:04) Codey: So yeah, 3rd of July if folks are interested in that. (0:28:06) Kev: Oh, thank you. Yeah, the date. I guess that matters. (0:28:12) Kev: Um, next up, all right, here is a game I do care about though. (0:28:15) Kev: Actually, let’s talk about. (0:28:16) Kev: Cattle Country, um, your 2d pixel art Western frontier, um, Stardew ish like where you’re, you’re doing all the things, um, and you can little bit Oregon Trail, a little bit Stardew Valley, I guess, I don’t know, but, uh, yeah, um, it comes, it is releasing on May 27th, um, which that is very close. (0:28:35) Codey: Mm-hmm. (0:28:41) Codey: Ooh, that’s close. (0:28:44) Kev: Um, that’s a bad time. That’s always a busy time. (0:28:46) Kev: Um, yeah, that’s, that’s a couple of weeks. Um, I’m looking forward to it and I’ll be (0:28:47) Codey: Okay. (0:28:48) Codey: Did you, like, back it or anything, or you haven’t bought it yet? (0:28:53) Codey: Okay. (0:29:14) Kev: playing it. Keep your eyes. (0:29:16) Kev: Ears peeled, I guess. I don’t know how to peel ears. But here we are. Let’s see. 
Now we have (0:29:29) Kev: the next game, To Pixelia, that is out now. This is, yeah, May 1st, it just released. This one is a (0:29:32) Codey: Mm hmm. Yep. (0:29:43) Kev: again pixel art to the kind of city (0:29:47) Codey: Mm-hmm. Yeah, this is the one that doesn’t know what it wants to be, (0:29:47) Kev: simulator I guess um yeah (0:29:53) Codey: and there’s so many different things in it. I believe Al backed this one, so he has it. (0:29:59) Kev: that’s a safe bet (0:29:59) Codey: We’ll probably hear about it, but watching the trailer and stuff, I was just so like… (0:30:06) Kev: it’s it’s a lot if it feels like honestly it feels like taking control of (0:30:12) Kev: one of the sims instead of watching them do (0:30:14) Codey: Yeah. (0:30:15) Kev: everything like just playing them you know (0:30:17) Kev: um so i’ll check it out if you want if you want to do (0:30:22) Kev: crimes you can do crimes so i’ll give them that (0:30:24) Codey: You can do crimes in this game can’t break up marriages though. (0:30:25) Kev: or you can do crimes i’ll give them that (0:30:29) Kev: all right not not yet again that is To Pixelia and that is (0:30:35) Kev: out now on Steam at least I don’t know where else but uh (0:30:38) Kev: there you go all right let’s get into games that already (0:30:43) Kev: did come out but now have more stuff coming out (0:30:47) Kev: okay okay this first one blew my mind because I thought this game was (0:30:52) Kev: dead based like not dead but you know done I can (0:30:54) Codey: Dude, no, this game is not dead. (0:30:57) Codey: This game slaps. (0:30:59) Codey: I wanna play it so bad.
(0:30:59) Kev: well I mean yeah I mean i’m not saying that (0:31:02) Kev: dead like people aren’t playing it just I didn’t think they were we’re gonna get (0:31:06) Kev: an update but I guess if there is one franchise in (0:31:09) Kev: the world that can do whatever they want it would be (0:31:13) Kev: Sanrio and Hello Kitty because Hello Kitty Island Adventure has (0:31:17) Kev: not one but two updates well one of them I think (0:31:21) Kev: came out already um the other one was announced (0:31:25) Kev: Um, so there is the first one is (0:31:29) Kev: the, um, Friends, Furniture and Frozen Peaks expansion. Um, (0:31:34) Kev: I think that’s out already. Um, there is, uh, (0:31:39) Kev: expansions to, I believe, new areas, Blizzard Peak and Snow Village. (0:31:44) Kev: There are new types of weather flowers, all sorts of new story and cosmetics, (0:31:49) Kev: um, and more improvements and refinements to the friendship system, (0:31:53) Kev: which were needed. Um, just, yeah, just a lot of stuff. (0:31:57) Kev: There’s a lot of stuff again. (0:31:59) Kev: The ice area that’s the big one and improvements to the, you know, UIs and experiences. Nice little (0:32:07) Kev: quality of life touches here and there. All sorts of new furniture and cosmetics. (0:32:15) Kev: There’s some new quests and storylines, all that stuff. Yeah, yeah. (0:32:20) Codey: Sorry, I I was trying to see what it was available on and it is on Apple Arcade (0:32:27) Kev: I mean, it’s on the Steam page. (0:32:29) Kev: So I assume it’s, yeah. (0:32:31) Codey: Yeah, so it’s on Steam but it’s also on Apple Arcade well because on Steam it’s only or sorry it’s on Steam it’s only (0:32:40) Codey: Windows but then it so to play it on Mac you have to play it in Apple Arcade (0:32:43) Kev: Mm-hmm (0:32:47) Codey: I’m wondering, so it’s also on Switch. (0:32:53) Kev: on everything, I think, right? I’m just gonna… Look, I’m assuming their deep pockets have (0:32:59) Kev: enabled it to just be out on everything.
(0:33:02) Codey: No, they don’t have an Xbox. I was looking for that because I have Game Pass. Yeah, looking (0:33:08) Kev: - Oh, well, everything they’re on. (0:33:09) Codey: at this, looking at this like update, the biggest thing that I lost my mind about was (0:33:14) Codey: Aggretsuko. So they added, it was, I think it was just in the Friends, Furniture and Frozen (0:33:16) Kev: Yeah (0:33:23) Codey: Peaks. One, they added Retsuko. So from the TV show, Aggretsuko, they added. (0:33:24) Kev: Yeah (0:33:30) Kev: Well Retsuko was in already (0:33:32) Codey: But her friends and coworkers are now. (0:33:35) Kev: That’s correct (0:33:36) Kev: But she was the weird one because nobody knew her because she was from a different part of the Sanrio-verse (0:33:41) Kev: But now yes the supporting Retsuko, Aggretsuko cast is here (0:33:46) Kev: I haven’t watched the anime so I can’t get hyped, but I’m kind of hyped because it’s cool (0:33:50) Kev: I like the anime without watching it. It’s a good concept (0:33:54) Codey: I friggin love that anime. Well, because I love metal, so it like tickles that fancy. (0:34:01) Codey: Yeah, I don’t know. Yeah, the other update is called the Month of Meh. (0:34:02) Kev: Yep, I’ll do it (0:34:07) Kev: But (0:34:10) Kev: Okay, I just got to say when I saw this I thought that was I’ll know it I was like wow I’ll jam (0:34:15) Codey: See, I thought I was saying like, eh, this game sucks. And then I looked, I was like, oh no, it’s literally. (0:34:17) Kev: Don’t like this update, huh? (0:34:24) Codey: Called the Month of Meh. And there’s Gudetama as. Oh, okay. (0:34:25) Kev: The next up is called Month of Meh (0:34:32) Kev: It’s Gudetama, it’s a Gudetama festival of sorts right cuz (0:34:38) Kev: yeah, because the Hello Kitty Island Adventure does like kind of like Animal Crossing where you know special events or (0:34:45) Kev: Theirs are usually like maybe a week or two.
Whatever like oh, here’s the (0:34:50) Kev: Chinese New Year’s the Christmas is the Halloween whatever right they do their monthly your seasonal events and this appears to be one of those (0:34:54) Codey: This, this character that’s dressed as an egg yolk though, Nisetama-san. (0:34:57) Kev: from a night what I saw (0:35:02) Kev: I (0:35:04) Kev: Don’t know who that is. Is that a Sanrio character because okay, cuz everyone knows Gudetama the lazy egg with the butt (0:35:05) Codey: I don’t like it. (0:35:07) Codey: I don’t like it. (0:35:11) Kev: All right, he’s it’s a little the little guy, right? It’s like, you know standard chicken egg and the little guy is that (0:35:18) Kev: But this guy (0:35:20) Kev: Nisetama-san. No, that is a full grown (0:35:24) Codey: That’s a person, and it’s in a weird costume, and I don’t like that. (0:35:24) Kev: sized human man in a (0:35:28) Kev: You know like yellow (0:35:32) Kev: Jumpsuit, not jumpsuits, you know, it’s those weird Halloween costumes (0:35:34) Codey: body suits. Yeah. That’s a choice. (0:35:36) Kev: Yeah, the bodysuits that are the single colors and he’s dressed up like Gudetama, but he’s got a very (0:35:43) Kev: Bland face. I don’t I don’t know how to (0:35:46) Codey: the face is honestly the worst part like if if it was like a normal face I would be like (0:35:51) Codey: oh it’s a person but the face I’m just like what is happening um (0:35:55) Kev: Yeah, no, I’m looking this up, this is not new, this is a Sanrio. (0:35:59) Codey: I’m sure it’s a character I just it’s character I didn’t know and I would love to go back to (0:36:01) Kev: Yup. (0:36:03) Kev: It’s a man dressed as Gudetama, so there you go. (0:36:06) Codey: I’d love to go back to five minutes ago when I did not know such things existed (0:36:12) Kev: He’s got very thick eyebrows. (0:36:14) Kev: That might be part of the problem. (0:36:16) Codey: I don’t know I don’t know (0:36:19) Kev: Well, there you go.
(0:36:19) Kev: There’s the Month of Meh. (0:36:21) Kev: It’s through the month of May. (0:36:24) Kev: You get it? (0:36:25) Kev: You get it? (0:36:25) Kev: I don’t know if that was their intention, (0:36:27) Codey: it’s gonna be meth (0:36:27) Kev: but whatever worked out. (0:36:29) Kev: You know what’s not– well, actually, I’m (0:36:31) Kev: sure the update itself is not meh. (0:36:33) Kev: But you know what else is not meh? (0:36:35) Kev: These new games actually– no, they’re all right. (0:36:36) Codey: Oh, no, we got another update. (0:36:39) Kev: Oh, I did skip one. (0:36:41) Kev: Whoopsie. (0:36:41) Codey: There was another update, but it’s only the 0.5 update, (0:36:42) Kev: Oh yeah. (0:36:45) Codey: y’all, so don’t worry. (0:36:46) Codey: It’s not really– (0:36:50) Codey: it’s not a real update. (0:36:51) Codey: No, it’s fine. (0:36:53) Codey: The game Far Lands has a 0.5 update (0:36:56) Codey: that adds a bunch of stuff, including (0:36:58) Codey: hostile mobs in the mines, a new mine (0:37:01) Codey: that you can go to, 10 new social events, Steam Deck (0:37:05) Codey: support, uh, furniture. (0:37:06) Codey: I didn’t, I haven’t really looked at this game before, you kind of just look like a zombie. (0:37:14) Kev: yeah you look like a weird uh yeah a lot of the other characters because you crash-land on like (0:37:20) Kev: an alien planet and they’re all alien zombie-ish in appearance I i I agree um but uh yeah okay it (0:37:29) Kev: feels a little space Stardew-y but it has a little more flavor it has a little more sauce as the kids (0:37:34) Kev: call it um like uh it I that’s what I hear look I still tutor kids okay you know okay (0:37:36) Codey: Is that what the kids are calling it? (0:37:44) Kev: tangent hey shocker me Cody you’re going to tangent but okay look I know every generation (0:37:46) Codey: Tangent, boom, boom, boom, boom.
(0:37:52) Kev: our generation and everyone has their slang and lingo and and fads and whatever right (0:37:58) Codey: Mm-hmm. (0:37:59) Kev: but that I think i’ve been thinking about this because as I tutor right like I see the the next (0:38:05) Kev: generation of kids or it’s also i’m like a couple cycles in at this point really um because i’ve (0:38:10) Kev: tutored for so dang long it’s it’s gotten horrendous the the Skibidi (0:38:15) Kev: generation as i’m gonna call it because I think that’s the emblematic uh term and and (0:38:16) Codey: Yeah, it’s it’s the brain rot generation. So they, they say like the they they’re that (0:38:22) Kev: yes the brain rot generation there too (0:38:26) Codey: what they’re going to watch is just brain rot and just cringe. And like, yeah, they (0:38:28) Kev: yep yep all that like I i don’t either it’s not and like (0:38:31) Codey: I don’t get it. It’s not entertaining to me. (0:38:39) Kev: the issue isn’t even the stuff itself per se I mean a little bit but you know like that’s i (0:38:43) Kev: I think also just kind of. (0:38:44) Kev: The nature of the generations, like our parents didn’t get our stuff either or whatever, right? (0:38:49) Kev: Um, but it just feels like so much more prevalent and ingrained and, and virulent than, uh, than other generations. (0:39:01) Codey: Virulent. It’s a virus, the brain rot virus. (0:39:01) Kev: And I’ve been thinking about it really. (0:39:03) Kev: Yeah, it really is. (0:39:06) Kev: And, and it’s, I’ve been thinking like, how should I put this right? (0:39:11) Kev: So when you and I grew up, let’s say, right in our middle school. (0:39:14) Kev: High school, whatever, right? (0:39:15) Kev: Like there are the fads and there’s the playground speak and it would spread or whatever, right? (0:39:19) Codey: Mm-hmm. (0:39:20) Kev: But it was still kind of contained to our local area, right? (0:39:22) Codey: Mm-hmm. (0:39:24) Kev: Because we didn’t have the internet, but I think about it.
(0:39:27) Kev: Like, I think that’s the issue, right? (0:39:29) Kev: All these kids have their phones and the same app. (0:39:32) Kev: So like, it’s all so homogenized. (0:39:35) Kev: Like, I think that’s just why like every single kid knows it and watches the same thing. (0:39:42) Kev: Like, it’s, it’s insane. (0:39:44) Kev: Oh, my gosh. Oh. (0:39:46) Codey: Yeah, that’s a whole other story about like whether or not that stuff is helping or harming (0:39:54) Codey: the socialization of the next generation. (0:39:58) Kev: Mm-hmm. That’s a, that is a good one. (0:39:59) Codey: That’s something I’ve been thinking about a lot lately as like, more discussions about (0:40:01) Kev: Yeah. Mm-hmm. (0:40:04) Codey: autism come out and stuff and like having been, you know, diagnosed as a autistic adult (0:40:10) Codey: at this point. (0:40:12) Codey: Um, yeah, but. (0:40:14) Kev: Yeah, it’s (0:40:16) Kev: And it’s tough because we you know, we’re not gonna have all the data until later right like it’s gonna be tougher kind of (0:40:23) Kev: Speculating but um, it’s it’s it’s what to see (0:40:27) Kev: and but I you know (0:40:28) Codey: Well, that’s Far Lands. (0:40:30) Kev: As about yeah, there you go, there’s your point five update for for this it’s out now (0:40:32) Codey: Pull us back. (0:40:33) Codey: Pull us back. (0:40:37) Codey: It’s out now. (0:40:38) Codey: If you want a hostile mob, if you’ve (0:40:40) Codey: been waiting for steam support, they’re there. (0:40:44) Kev: Achievements, they’re in there, too. (0:40:47) Codey: They are there. (0:40:49) Kev: All right, let’s talk– (0:40:51) Kev: those are the updates. (0:40:53) Kev: Let’s talk about some new game announcements. (0:40:54) Codey: Mm-hmm. (0:40:55) Kev: Let’s talk about “Scibbity Island.” (0:40:57) Codey: No! (0:40:59) Codey: We were leaving it. (0:41:00) Codey: We were leaving it. (0:41:02) Kev: No, it’s here to stay. (0:41:03) Codey: No, Sunseed, Sunseed Island. (0:41:05) Kev: OK. 
(0:41:07) Kev: OK, yeah, “Sunseed Island.” (0:41:10) Kev: OK, first of all, this is insanely cute art style. (0:41:14) Kev: Whoever their art team is, they’re (0:41:18) Kev: knocking it out of the park. (0:41:20) Kev: The actual game itself, you’re playing (0:41:22) Kev: as a little cat with a little Pikmin-style flower (0:41:25) Kev: star on your head. (0:41:27) Kev: And you’re rebuilding your town. (0:41:29) Kev: You’re farming. (0:41:30) Kev: You’re fishing. (0:41:31) Kev: You’re exploring islands. (0:41:35) Kev: A lot of the hallmarks of the cottagecore-verse, (0:41:37) Kev: what can I say? (0:41:38) Kev: There’s different– it looks like– (0:41:40) Kev: I think it’s kind of like Animal Crossing, (0:41:42) Kev: where you can sail to different islands, (0:41:43) Kev: and there might be different environments. (0:41:44) Kev: It’s coming on April 24th on the eShop, actually. (0:41:51) Kev: Wait, what? (0:41:53) Kev: Wait, April 24th? (0:41:54) Codey: Oh, so it’s already out. (0:41:55) Kev: Is that already out? (0:41:56) Codey: It’s already out. (0:41:57) Kev: Oh, yeah, it is, sale ends. (0:42:00) Codey: Yeah, so it’s already out. (0:42:02) Codey: Yeah, the little things on top– they all have little Pikmin. (0:42:06) Codey: It’s not just your character. (0:42:07) Codey: All the characters look like they (0:42:08) Kev: Yeah. (0:42:09) Codey: have little Pikmin-y things. (0:42:10) Kev: Mm-hmm. (0:42:11) Codey: Very cute. (0:42:12) Kev: Yeah. (0:42:14) Codey: And so their blurb for it is, welcome to Sunseed Island, (0:42:18) Codey: the perfect escape for cozy gamers who love (0:42:20) Codey: to farm, explore, and collect. (0:42:24) Codey: Rebuild a lost village, befriend magical creatures, grow crops, (0:42:27) Codey: fish in tranquil waters, and upgrade your tools, (0:42:29) Codey: discover the secrets of a mystical tree, (0:42:31) Codey: and reconnect the scattered islands. (0:42:35) Codey: I could read that for so many different of these games. (0:42:38) Kev: Okay.
(0:42:41) Codey: Yeah, it looks cute. (0:42:43) Codey: Looks like all the other ones, unless you’d (0:42:46) Codey: want to play as a cat, then here you go. (0:42:47) Kev: Yep. Yeah. (0:42:49) Codey: So the price, though, was confusing. (0:42:54) Codey: So on May– (0:42:55) Kev: I mean it’s um well right it’s on sale on the US eShop for three dollars that’s 72% off i’m (0:42:58) Codey: yeah, go for it. (0:43:04) Kev: too lazy to do the math that’s maybe what like 10 bucks maybe um yeah it’s something like that (0:43:10) Kev: USD uh I can’t see this because I don’t have the UK eShop but (0:43:14) Kev: per one of Al’s notes apparently it is 100% off on the UK eShop so (0:43:18) Codey: I feel like they probably fixed that because the note out so Al said that he got it for free (0:43:24) Codey: because it was it was literally free on the eShop so he got that um and then the note that he left (0:43:25) Kev: yep (0:43:28) Kev: I i don’t blame him (0:43:33) Codey: for the US eShop was that it was 90% off but then I looked and it’s definitely 72% off so (0:43:36) Kev: So, sounds like some numbers were crossed and buttons were pushed incorrectly. (0:43:47) Codey: Y’all should– (0:43:49) Codey: Y’all should check this. (0:43:50) Kev: It’s on sale somewhere. (0:43:52) Kev: You can go get it for cheap right now, we’ll tell you that much. (0:43:54) Codey: Yeah, check this regularly. (0:43:57) Codey: Maybe they just are flipping a dice or something, (0:44:00) Codey: like throwing a dart at a board. (0:44:01) Kev: Maybe they put the wrong sign on. (0:44:07) Kev: It happened to me at the store, not just once multiple times, but they had something on sale and it wasn’t on sale, it made me sad because it was like boxes of cereal, it was like buy two, get three free, it was like wow that’s a great deal, and then I went to the register like no that’s wrong, and I was like oh, I guess I won’t get my cereal.
(0:44:24) Codey: So check it out if you like the idea of a Stardew where you’re a Pikmin cat. (0:44:31) Kev: Yeah, but you know it kills me that (0:44:33) Codey: The next s s island. (0:44:37) Kev: Okay, okay, you know you want to get into all right. Let’s get into that because what I was gonna say for both of these games (0:44:43) Kev: We went from was it Sunseed Island to Starsand Island, right? And (0:44:46) Codey: - Mm-hmm. (0:44:50) Kev: Look, you know (0:44:51) Kev: It’s a tough market. I get it right like you got to stand out (0:44:56) Kev: It’s tough and you know what there’s redeeming qualities here both of these (0:44:59) Kev: I’m not- I’m not gonna say these are off. (0:45:01) Kev: That’s awful, but I’m- I’m already loading the gun and ready to drag you out behind the barn for using those names because no, no. (0:45:06) Codey: S– yeah, for a lack of originality, for sure. (0:45:13) Kev: You went with the Noun Noun island slash valley, which is- is- (0:45:17) Codey: S– like, nouns that start with S. (0:45:20) Kev: And then you start with S, both of you, like, oh my gosh, what is- you know what, maybe- maybe the Skibidi kids aren’t so bad. (0:45:23) Codey: Yeah. (0:45:31) Codey: This one does look different, though. (0:45:33) Codey: So the blurb of it, again, is basic. (0:45:36) Codey: So it’s leave the hustle of the city behind (0:45:38) Codey: and embrace life on Starsand Island. (0
Just dropped a fresh episode of the Angular Master Podcast – and it's a must-listen for every frontend developer thinking about the future. This time I'm joined by the one and only Nir Kaufman — Google Developer Expert, international speaker, tech lead at Tikal, and the brilliant mind behind our newest initiative:
In this special episode of Create Like the Greats, Ross Simmonds sits down with SEO expert Garrett Sussman from iPullRank for a dynamic conversation recorded ahead of SEO Week. They unpack the current and future state of search, the increasing relevance of Reddit in SEO strategies, and how marketers must evolve with AI-driven search behaviors. Ross also previews his upcoming SEO Week talk on Reddit, AI, and user-generated content—and shares a tactical gem that every brand should act on today.

Key Takeaways and Insights:

SEO Week 2026 Preview
SEO Week will return to NYC in 2026 featuring top minds in SEO, content marketing, social, AI, and more.

The State of Search: Chaotic but Beautiful
Search has dramatically evolved; it's no longer just about Google. Fragmented discovery across TikTok, Reddit, YouTube, ChatGPT, Perplexity. Emphasis on first principles: high-quality content, user intent, and storytelling. Marketers must now behave more like... well, marketers, not just SEOs.

Creating Content That Scales
SEO in 2024+ requires multimedia optimization: text + video (especially YouTube) for better SERP coverage. Understanding where your audience starts their search journey. Reddit threads and YouTube videos are now ranking in Google results, diversifying content visibility.

Reddit + AI + UGC: A Preview of Ross's Talk at SEO Week
Reddit is emerging as a powerful and underused SEO platform. Ross's team is scraping and analyzing subreddits, mapping them to keywords and AI outputs. Google & OpenAI have data licensing deals with Reddit—proof of its strategic value. Reddit content is influencing AI-generated responses… and SEO strategies must adapt.

Tactical Insight: Own Your Subreddit
Brands should immediately create and claim their subreddit—even if not actively using it. Prevent brand impersonation or NSFW sabotage by securing the namespace. “It's as basic as securing your domain or Twitter handle,”
Andrada Mining has announced that it has entered into a partnership with the Chilean exploration and mining company SQM to expand the lithium side of its operations. SQM is one of the largest lithium miners in the world. Andrada's CEO, Anthony Viljoen, says they will launch an intensive campaign at Lithium Ridge and hopefully bring their second project at the ridge into production. Viljoen spoke to Mining Weekly about their plans at the Uis tin mine as well as tantalum exports.
During the March 2025 Lunch and Learn Series, Dr. Hei-Chi Chan, UIS associate professor of mathematical sciences and philosophy and co-founder of the university's AI Campus Learning Community, gave a presentation on the evolution of AI, the university's AI Campus Learning Community, how students can make the best use of AI, and the future of AI in our society. UIS is offering a camp on AI this summer.
Live from Building Blocks at Jaffa Port, Yitzy Hammer chats with Asaf Naim, co-founder of Kirobo, a trailblazer in Web3 since 2018. Asaf shares his journey from a taxation accountant to a self-taught programmer, sparked by a failed Bitcoin buy in 2011 and a client's lost crypto key in 2018. Kirobo pioneered account abstraction with "smart transactions," introducing the first undo button for crypto transfers. Now, they've launched Intentable (intentable.io) - the "Wix" of blockchain applications, a developer tool for rapid smart transaction creation, and are gearing up for Dappable (dappable.ai), an AI-driven platform to build DApps, tokens, and UIs in minutes. Asaf envisions Dappable as the next big Web3 driver, blending NFT and token innovation. Follow @Kirobo on Twitter for updates and join the revolution!
Asaf on LinkedIn: https://www.linkedin.com/in/asafnaim/
Check out Intentable: https://app.intentable.io/trading/top_pairs?utm_source=linkedin&utm_campaign=Profile
Live from the Lore Bookshop at Shoptalk Spring, Phillip and Brian sit down with friends of the podcast and creative heavyweights Nick Susi and Mai Nguyen of dotdotdash to unpack their latest work with Nike—the groundbreaking AIR Imagination project. Nick and Mai pull back the curtain on what it takes to build immersive, AI-powered brand experiences that actually feel like the brand—and why surprise and delight matter more than ever in a world obsessed with optimization. Plus, hear how they're redefining user experience through experiments like Chromaverse and their take on AI as a “serendipity engine.”

This Is What AI Should Feel Like (Chicken Shoes)

Key takeaways:
Nike's AIR Imagination is a Brand Milestone: AIR Imagination isn't just a product—it's a community design platform that invites fans to co-create with Nike DNA. Powered by AI, it lowers the barrier to creativity while still preserving the integrity of the brand.
AI-Driven Remixing Is Reshaping Brand Strategy: This project flips the traditional brand-to-consumer model on its head. Now, it's consumer-to-consumer remixing, with Nike acting as the shared language. It's not just collaboration—it's cultural co-creation at scale.
Dot Dot Dash Built AI That Feels Like Nike: Behind the scenes, Dot Dot Dash developed prompt-enhancing tech to ensure every user-generated design still looks and feels like a Nike product—proof that AI doesn't have to compromise brand authenticity.
AI as a “Serendipity Engine” Is the Future of Commerce: Instead of treating AI like a tool for hyper-optimization, Dot Dot Dash sees its true power in enabling surprise, delight, and discovery. That ethos runs through everything from Nike to their own experimental tools like Chromaverse.
Brands Need to Design with Culture, Not Just for It: As Nick's essay “Culture as a Client” explores, successful brands don't just extract from culture—they build multidirectional ecosystems that create mutual value.
Nike's project is a masterclass in getting that balance right.

“AIR Imagination is Nike saying, ‘Here's the vault. Remix what we've done—and what each other has done.'” – Nick Susi
“Even with all that detail, it still came out a Nike shoe. That's the connection—that's the relationship.” – Mai Nguyen
“What AI is really good at is chance, randomness, serendipity... That's what creates emotional experience.” – Nick Susi
“We're not invalids. Don't spoon-feed us everything. Help me—but don't take it all away.” – Mai Nguyen
“Chat-based ecommerce UIs? Frustrating. Instead, look to gaming or luxury retail for better design references.” – Nick Susi

Associated Links:
Order LORE by Future Commerce
Check out Future Commerce on YouTube
Check out Future Commerce+ for exclusive content and save on merch and print
Subscribe to Insiders and The Senses to read more about what we are witnessing in the commerce world
Listen to our other episodes of Future Commerce

Have any questions or comments about the show? Let us know on futurecommerce.com, or reach out to us on Twitter, Facebook, Instagram, or LinkedIn. We love hearing from our listeners!
If you're in SF: Join us for the Claude Plays Pokemon hackathon this Sunday! If you're not: Fill out the 2025 State of AI Eng survey for $250 in Amazon cards!

We are SO excited to share our conversation with Dharmesh Shah, co-founder of HubSpot and creator of Agent.ai.

A particularly compelling concept we discussed is the idea of "hybrid teams" - the next evolution in workplace organization where human workers collaborate with AI agents as team members. Just as we previously saw hybrid teams emerge in terms of full-time vs. contract workers, or in-office vs. remote workers, Dharmesh predicts that the next frontier will be teams composed of both human and AI members. This raises interesting questions about team dynamics, trust, and how to effectively delegate tasks between human and AI team members.

The discussion of business models in AI reveals an important distinction between Work as a Service (WaaS) and Results as a Service (RaaS), something Dharmesh has written extensively about. While RaaS has gained popularity, particularly in customer support applications where outcomes are easily measurable, Dharmesh argues that this model may be over-indexed. Not all AI applications have clearly definable outcomes or consistent economic value per transaction, making WaaS more appropriate in many cases. This insight is particularly relevant for businesses considering how to monetize AI capabilities.

The technical challenges of implementing effective agent systems are also explored, particularly around memory and authentication. Shah emphasizes the importance of cross-agent memory sharing and the need for more granular control over data access. He envisions a future where users can selectively share parts of their data with different agents, similar to how OAuth works but with much finer control.
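Shah's "OAuth, but with much finer control" idea can be sketched in a few lines: instead of a coarse scope like "read my CRM", each agent gets a grant scoped down to individual fields of a resource. This is purely an illustrative sketch, not code from the episode or from Agent.ai; every class, field, and identifier here is a hypothetical name.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of per-agent, per-field data grants, in the spirit
# of OAuth scopes but finer-grained. All names here are illustrative.

@dataclass(frozen=True)
class Grant:
    agent_id: str           # which agent this grant applies to
    resource: str           # e.g. "crm.contacts"
    fields: frozenset       # which fields of the resource are visible

@dataclass
class DataVault:
    records: dict = field(default_factory=dict)   # resource -> list of dicts
    grants: list = field(default_factory=list)

    def read(self, agent_id, resource):
        """Return records filtered down to the fields granted to this agent."""
        allowed = set()
        for g in self.grants:
            if g.agent_id == agent_id and g.resource == resource:
                allowed |= g.fields
        if not allowed:
            raise PermissionError(f"{agent_id} has no grant for {resource}")
        return [{k: v for k, v in rec.items() if k in allowed}
                for rec in self.records.get(resource, [])]

vault = DataVault()
vault.records["crm.contacts"] = [
    {"name": "Ada", "email": "ada@example.com", "deal_size": 50_000},
]
# A support agent may see names and emails, but never deal sizes.
vault.grants.append(Grant("support-agent", "crm.contacts",
                          frozenset({"name", "email"})))

print(vault.read("support-agent", "crm.contacts"))
```

An ungranted agent (say, a hypothetical "sales-agent" with no entry in `grants`) raises `PermissionError`, which is the property the finer-grained model buys you: sharing is opt-in per agent and per field rather than all-or-nothing.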
This points to significant opportunities in developing infrastructure for secure and efficient agent-to-agent communication and data sharing.

Other highlights from our conversation:
* The Evolution of AI-Powered Agents – Exploring how AI agents have evolved from simple chatbots to sophisticated multi-agent systems, and the role of MCPs in enabling that.
* Hybrid Digital Teams and the Future of Work – How AI agents are becoming teammates rather than just tools, and what this means for business operations and knowledge work.
* Memory in AI Agents – The importance of persistent memory in AI systems and how shared memory across agents could enhance collaboration and efficiency.
* Business Models for AI Agents – Exploring the shift from software as a service (SaaS) to work as a service (WaaS) and results as a service (RaaS), and what this means for monetization.
* The Role of Standards Like MCP – Why MCP has been widely adopted and how it enables agent collaboration, tool use, and discovery.
* The Future of AI Code Generation and Software Engineering – How AI-assisted coding is changing the role of software engineers and what skills will matter most in the future.
* Domain Investing and Efficient Markets – Dharmesh's approach to domain investing and how inefficiencies in digital asset markets create business opportunities.
* The Philosophy of Saying No – Lessons from "Sorry, You Must Pass" and how prioritization leads to greater productivity and focus.

Timestamps:
* 00:00 Introduction and Guest Welcome
* 02:29 Dharmesh Shah's Journey into AI
* 05:22 Defining AI Agents
* 06:45 The Evolution and Future of AI Agents
* 13:53 Graph Theory and Knowledge Representation
* 20:02 Engineering Practices and Overengineering
* 25:57 The Role of Junior Engineers in the AI Era
* 28:20 Multi-Agent Systems and MCP Standards
* 35:55 LinkedIn's Legal Battles and Data Scraping
* 37:32 The Future of AI and Hybrid Teams
* 39:19 Building Agent AI: A Professional Network for Agents
* 40:43 Challenges and Innovations in Agent AI
* 45:02 The Evolution of UI in AI Systems
* 01:00:25 Business Models: Work as a Service vs. Results as a Service
* 01:09:17 The Future Value of Engineers
* 01:09:51 Exploring the Role of Agents
* 01:10:28 The Importance of Memory in AI
* 01:11:02 Challenges and Opportunities in AI Memory
* 01:12:41 Selective Memory and Privacy Concerns
* 01:13:27 The Evolution of AI Tools and Platforms
* 01:18:23 Domain Names and AI Projects
* 01:32:08 Balancing Work and Personal Life
* 01:35:52 Final Thoughts and Reflections

Transcript

Alessio [00:00:04]: Hey everyone, welcome back to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Small AI.

swyx [00:00:12]: Hello, and today we're super excited to have Dharmesh Shah to join us. I guess your relevant title here is founder of Agent AI.

Dharmesh [00:00:20]: Yeah, that's true for this. Yeah, creator of Agent.ai and co-founder of HubSpot.

swyx [00:00:25]: Co-founder of HubSpot, which I followed for many years, I think 18 years now, gonna be 19 soon. And you caught, you know, people can catch up on your HubSpot story elsewhere. I should also thank Sean Puri, who I've chatted with back and forth, who's been, I guess, getting me in touch with your people. But also, I think like, just giving us a lot of context, because obviously, My First Million joined you guys, and they've been chatting with you guys a lot. So for the business side, we can talk about that, but I kind of wanted to engage your CTO, agent, engineer side of things. So how did you get agent religion?

Dharmesh [00:01:00]: Let's see. So I've been working, I'll take like a half step back, a decade or so ago, even though actually more than that. So even before HubSpot, the company I was contemplating that I had named for was called Ingenisoft. And the idea behind Ingenisoft was a natural language interface to business software. Now realize this is 20 years ago, so that was a hard thing to do.
But the actual use case that I had in mind was, you know, we had data sitting in business systems like a CRM or something like that. And my kind of what I thought clever at the time. Oh, what if we used email as the kind of interface to get to business software? And the motivation for using email is that it automatically works when you're offline. So imagine I'm getting on a plane or I'm on a plane. There was no internet on planes back then. It's like, oh, I'm going through business cards from an event I went to. I can just type things into an email just to have them all in the backlog. When it reconnects, it sends those emails to a processor that basically kind of parses effectively the commands and updates the software, sends you the file, whatever it is. And there was a handful of commands. I was a little bit ahead of the times in terms of what was actually possible. And I reattempted this natural language thing with a product called ChatSpot that I did back 20...

swyx [00:02:12]: Yeah, this is your first post-ChatGPT project.

Dharmesh [00:02:14]: I saw it come out. Yeah. And so I've always been kind of fascinated by this natural language interface to software. Because, you know, as software developers, myself included, we've always said, oh, we build intuitive, easy-to-use applications. And it's not intuitive at all, right? Because what we're doing is... We're taking the mental model that's in our head of what we're trying to accomplish with said piece of software and translating that into a series of touches and swipes and clicks and things like that. And there's nothing natural or intuitive about it. And so natural language interfaces, for the first time, you know, whatever the thought is you have in your head and expressed in whatever language that you normally use to talk to yourself in your head, you can just sort of emit that and have software do something. And I thought that was kind of a breakthrough, which it has been. And it's gone.
So that's where I first started getting into the journey. I started because now it actually works, right? So once we got ChatGPT and you can take, even with a few-shot example, convert something into structured, even back in the GPT-3.5 days, it did a decent job in a few-shot example, convert something to structured text if you knew what kinds of intents you were going to have. And so that happened. And that ultimately became a HubSpot project. But then agents intrigued me because I'm like, okay, well, that's the next step here. So chat's great. Love Chat UX. But if we want to do something even more meaningful, it felt like the next kind of advancement is not this kind of, I'm chatting with some software in a kind of a synchronous back and forth model, is that software is going to do things for me in kind of a multi-step way to try and accomplish some goals. So, yeah, that's when I first got started. It's like, okay, what would that look like? Yeah. And I've been obsessed ever since, by the way.Alessio [00:03:55]: Which goes back to your first experience with it, which is like you're offline. Yeah. And you want to do a task. You don't need to do it right now. You just want to queue it up for somebody to do it for you. Yes. As you think about agents, like, let's start at the easy question, which is like, how do you define an agent? Maybe. You mean the hardest question in the universe? Is that what you mean?Dharmesh [00:04:12]: You said you have an irritating take. I do have an irritating take. I think, well, some number of people have been irritated, including within my own team. So I have a very broad definition for agents, which is it's AI-powered software that accomplishes a goal. Period. That's it. And what irritates people about it is like, well, that's so broad as to be completely non-useful. And I understand that. I understand the criticism.
But in my mind, if you kind of fast forward months, I guess, in AI years, the implementation of it, and we're already starting to see this, and we'll talk about this, different kinds of agents, right? So I think in addition to having a usable definition, and I like yours, by the way, and we should talk more about that, that you just came out with, the classification of agents actually is also useful, which is, is it autonomous or non-autonomous? Does it have a deterministic workflow? Does it have a non-deterministic workflow? Is it working synchronously? Is it working asynchronously? Then you have the different kind of interaction modes. Is it a chat agent, kind of like a customer support agent would be? You're having this kind of back and forth. Is it a workflow agent that just does a discrete number of steps? So there's all these different flavors of agents. So if I were to draw it in a Venn diagram, I would draw a big circle that says, this is agents, and then I have a bunch of circles, some overlapping, because they're not mutually exclusive. And so I think that's what's interesting, and we're seeing development along a bunch of different paths, right? So if you look at the first implementation of agent frameworks, you look at Baby AGI and AutoGPT, I think it was, not Autogen, that's the Microsoft one. They were way ahead of their time because they assumed this level of reasoning and execution and planning capability that just did not exist, right? So it was an interesting thought experiment, which is what it was. Even the guy that, I'm an investor in Yohei's fund that did Baby AGI. It wasn't ready, but it was a sign of what was to come. And so the question then is, when is it ready? And so lots of people talk about the state of the art when it comes to agents. I'm a pragmatist, so I think of the state of the practical.
It's like, okay, well, what can I actually build that has commercial value or solves actually some discrete problem with some baseline of repeatability or verifiability?swyx [00:06:22]: There was a lot, and very, very interesting. I'm not irritated by it at all. Okay. As you know, I take a... There's a lot of anthropological view or linguistics view. And in linguistics, you don't want to be prescriptive. You want to be descriptive. Yeah. So you're a goals guy. That's the key word in your thing. And other people have other definitions that might involve like delegated trust or non-deterministic work, LLM in the loop, all that stuff. The other thing I was thinking about, just the comment on Baby AGI, AutoGPT. Yeah. In that piece that you just read, I was able to go through our backlog and just kind of track the winter of agents and then the summer now. Yeah. And it's... We can tell the whole story as an oral history, just following that thread. And it's really just like, I think, I tried to explain the why now, right? Like I had, there's better models, of course. There's better tool use with like, they're just more reliable. Yep. Better tools with MCP and all that stuff. And I'm sure you have opinions on that too. Business model shift, which you like a lot. I just heard you talk about RaaS with the MFM guys. Yep. Cost is dropping a lot. Yep. Inference is getting faster. There's more model diversity. Yep. Yep. I think it's a subtle point. It means that like, you have different models with different perspectives. You don't get stuck in the basin of performance of a single model. Sure. You can just get out of it by just switching models. Yep. Multi-agent research and RL fine tuning. So I just wanted to let you respond to like any of that.Dharmesh [00:07:44]: Yeah. A couple of things. Connecting the dots on the kind of the definition side of it. So we'll get the irritation out of the way completely. I have one more, even more irritating leap on the agent definition thing.
So here's the way I think about it. By the way, the kind of word agent, I looked it up, like the English dictionary definition. The old school agent, yeah. Is when you have someone or something that does something on your behalf, like a travel agent or a real estate agent acts on your behalf. It's like proxy, which is a nice kind of general definition. So the other direction I'm sort of headed, and it's going to tie back to tool calling and MCP and things like that, is if you, and I'm not a biologist by any stretch of the imagination, but we have these single-celled organisms, right? Like the simplest possible form of what one would call life. But it's still life. It just happens to be single-celled. And then you can combine cells and then cells become specialized over time. And you have much more sophisticated organisms, you know, kind of further down the spectrum. In my mind, at the most fundamental level, you can almost think of having atomic agents. What is the simplest possible thing that's an agent that can still be called an agent? What is the equivalent of a kind of single-celled organism? And the reason I think that's useful is right now we're headed down the road, which I think is very exciting around tool use, right? That says, okay, the LLMs now can be provided a set of tools that it calls to accomplish whatever it needs to accomplish in the kind of furtherance of whatever goal it's trying to get done. And I'm not overly bothered by it, but if you think about it, if you just squint a little bit and say, well, what if everything was an agent? And what if tools were actually just atomic agents? Because then it's turtles all the way down, right? Then it's like, oh, well, all that's really happening with tool use is that we have a network of agents that know about each other through something like an MCP and can kind of decompose a particular problem and say, oh, I'm going to delegate this to this set of agents.
And why do we need to draw this distinction between tools, which are functions most of the time? And an actual agent. And so I'm going to write this irritating LinkedIn post, you know, proposing this. It's like, okay. And I'm not suggesting we should call even functions, you know, call them agents. But there is a certain amount of elegance that happens when you say, oh, we can just reduce it down to one primitive, which is an agent that you can combine in complicated ways to kind of raise the level of abstraction and accomplish higher order goals. Anyway, that's my answer. I'd say that's a success. Thank you for coming to my TED Talk on agent definitions.Alessio [00:09:54]: How do you define the minimum viable agent? Do you already have a definition for, like, where you draw the line between a cell and an atom? Yeah.Dharmesh [00:10:02]: So in my mind, it has to, at some level, use AI in order for it to—otherwise, it's just software. It's like, you know, we don't need another word for that. And so that's probably where I draw the line. So then the question, you know, the counterargument would be, well, if that's true, then lots of tools themselves are actually not agents because they're just doing a database call or a REST API call or whatever it is they're doing. And that does not necessarily qualify them, which is a fair counterargument. And I accept that. It's like a good argument. I still like to think about—because we'll talk about multi-agent systems, because I think—so we've accepted, which I think is true, lots of people have said it, and you've hopefully combined some of those clips of really smart people saying this is the year of agents, and I completely agree, it is the year of agents. But then shortly after that, it's going to be the year of multi-agent systems or multi-agent networks. I think that's where it's going to be headed next year. Yeah.swyx [00:10:54]: OpenAI's already on that. Yeah. My quick philosophical engagement with you on this.
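The atomic-agent idea above, tools as the simplest possible agents that compose into larger ones, can be sketched in a few lines of Python. This is a toy illustration of the "one primitive" argument, not any real framework; all names here are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AtomicAgent:
    """The 'single-celled organism': one capability, one primitive."""
    name: str
    fn: Callable[[str], str]

    def run(self, task: str) -> str:
        return self.fn(task)

@dataclass
class CompositeAgent:
    """A 'multi-celled' agent: accomplishes a goal by delegating each
    step to a sub-agent. Note it exposes the same run() interface, so
    composites and atoms are interchangeable — turtles all the way down."""
    name: str
    steps: list = field(default_factory=list)

    def run(self, task: str) -> str:
        result = task
        for agent in self.steps:  # delegate in sequence
            result = agent.run(result)
        return result

# A "tool" is just an atomic agent here.
fetch = AtomicAgent("fetch", lambda t: f"data({t})")
summarize = AtomicAgent("summarize", lambda t: f"summary({t})")
pipeline = CompositeAgent("research", [fetch, summarize])
print(pipeline.run("q"))  # summary(data(q))
```

The point of the sketch is only that once atoms and composites share one interface, the tool/agent distinction dissolves into levels of composition.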
I often think about kind of the other spectrum, the other end of the cell spectrum. So single cell is life, multi-cell is life, and you clump a bunch of cells together in a more complex organism, they become organs, like an eye and a liver or whatever. And then obviously we consider ourselves one life form. There's not like a lot of lives within me. I'm just one life. And now, obviously, I don't think people don't really like to anthropomorphize agents and AI. Yeah. But we are extending our consciousness and our brain and our functionality out into machines. I just saw you were a Bee. Yeah. Which is, you know, it's nice. I have a limitless pendant in my pocket.Dharmesh [00:11:37]: I got one of these boys. Yeah.swyx [00:11:39]: I'm testing it all out. You know, got to be early adopters. But like, we want to extend our personal memory into these things so that we can be good at the things that we're good at. And, you know, machines are good at it. Machines are there. So like, my definition of life is kind of like going outside of my own body now. I don't know if you've ever had like reflections on that. Like how yours. How our self is like actually being distributed outside of you. Yeah.Dharmesh [00:12:01]: I don't fancy myself a philosopher. But you went there. So yeah, I did go there. I'm fascinated by kind of graphs and graph theory and networks and have been for a long, long time. And to me, we're sort of all nodes in this kind of larger thing. It just so happens that we're looking at individual kind of life forms as they exist right now. But so the idea is when you put a podcast out there, there's these little kind of nodes you're putting out there of like, you know, conceptual ideas. Once again, you have varying kind of forms of those little nodes that are up there and are connected in varying and sundry ways. And so I just think of myself as being a node in a massive, massive network. And I'm producing more nodes as I put content or ideas. 
And, you know, you spend some portion of your life collecting dots, experiences, people, and some portion of your life then connecting dots from the ones that you've collected over time. And I found that really interesting things happen and you really can't know in advance how those dots are necessarily going to connect in the future. And that's, yeah. So that's my philosophical take. That's the, yes, exactly. Coming back.Alessio [00:13:04]: Yep. Do you like graph as an agent abstraction? That's been one of the hot topics with LangGraph and Pydantic and all that.Dharmesh [00:13:11]: I do. The thing I'm more interested in terms of use of graphs, and there's lots of work happening on that now, is graph data stores as an alternative in terms of knowledge stores and knowledge graphs. Yeah. Because, you know, so I've been in software now 30 plus years, right? So it's not 10,000 hours. It's like 100,000 hours that I've spent doing this stuff. And so I grew up with, so back in the day, you know, I started on mainframes. There was a product called IMS from IBM, which is basically an index database, what we'd call like a key value store today. Then we've had relational databases, right? We have tables and columns and foreign key relationships. We all know that. We have document databases like MongoDB, which is sort of a nested structure keyed by a specific index. We have vector stores, vector embedding database. And graphs are interesting for a couple of reasons. One is, so it's not classically structured in a relational way. When you say structured database, to most people, they're thinking tables and columns and in relational database and set theory and all that. Graphs still have structure, but it's not the tables and columns structure. And you could wonder, and people have made this case, that they are a better representation of knowledge for LLMs and for AI generally than other things.
So that's kind of thing number one conceptually, and that might be true, I think is possibly true. And the other thing that I really like about that in the context of, you know, I've been in the context of data stores for RAG is, you know, RAG, you say, oh, I have a million documents, I'm going to build the vector embeddings, I'm going to come back with the top X based on the semantic match, and that's fine. All that's very, very useful. But the reality is something gets lost in the chunking process and the, okay, well, those tend, you know, like, you don't really get the whole picture, so to speak, and maybe not even the right set of dimensions on the kind of broader picture. And it makes intuitive sense to me that if we did capture it properly in a graph form, that maybe that feeding into a RAG pipeline will actually yield better results for some use cases, I don't know, but yeah.Alessio [00:15:03]: And do you feel like at the core of it, there's this difference between imperative and declarative programs? Because if you think about HubSpot, it's like, you know, people and graph kind of goes hand in hand, you know, but I think maybe the software before was more like primary foreign key based relationship, versus now the models can traverse through the graph more easily.Dharmesh [00:15:22]: Yes. So I like that representation. There's something. It's just conceptually elegant about graphs and just from the representation of it, they're much more discoverable, you can kind of see it, there's observability to it, versus kind of embeddings, which you can't really do much with as a human. You know, once they're in there, you can't pull stuff back out. But yeah, I like that kind of idea of it. And the other thing that's kind of, because I love graphs, I've been long obsessed with PageRank from back in the early days. And, you know, one of the kind of simplest algorithms in terms of coming up, you know, with a phone, everyone's been exposed to PageRank. 
And the idea is that, and so I had this other idea for a project, not a company, and I have hundreds of these, called NodeRank, is to be able to take the idea of PageRank and apply it to an arbitrary graph that says, okay, I'm going to define what authority looks like and say, okay, well, that's interesting to me, because then if you say, I'm going to take my knowledge store, and maybe this person that contributed some number of chunks to the graph data store has more authority on this particular use case or prompt that's being submitted than this other one that may, or maybe this one was more. popular, or maybe this one has, whatever it is, there should be a way for us to kind of rank nodes in a graph and sort them in some, some useful way. Yeah.swyx [00:16:34]: So I think that's generally useful for, for anything. I think the, the problem, like, so even though at my conferences, GraphRAG is super popular and people are getting knowledge graph religion, and I will say like, it's getting space, getting traction in two areas, conversation memory, and then also just rag in general, like the, the, the document data. Yeah. It's like a source. Most ML practitioners would say that knowledge graph is kind of like a dirty word. The graph database, people get graph religion, everything's a graph, and then they, they go really hard into it and then they get a, they get a graph that is too complex to navigate. Yes. And so like the, the, the simple way to put it is like you at running HubSpot, you know, the power of graphs, the way that Google has pitched them for many years, but I don't suspect that HubSpot itself uses a knowledge graph. No. Yeah.Dharmesh [00:17:26]: So when is it over engineering? Basically? It's a great question. I don't know. So the question now, like in AI land, right, is the, do we necessarily need to understand? So right now, LLMs for, for the most part are somewhat black boxes, right?
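The NodeRank idea described above is essentially standard PageRank run over a graph of your own choosing, where edges encode whatever "authority" means for your use case. A minimal power-iteration sketch (the example graph and numbers are illustrative, not from the conversation):

```python
def pagerank(graph, damping=0.85, iters=50):
    """Standard PageRank by power iteration over an adjacency dict
    {node: [nodes it links to]}. The 'NodeRank' twist is only in how
    you build the graph: edges can mean 'contributed this chunk',
    'cited by', etc., so rank becomes a domain-specific authority score."""
    nodes = list(graph)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - damping) / n for v in nodes}
        for v in nodes:
            out = graph[v]
            if not out:  # dangling node: spread its rank evenly
                for u in nodes:
                    new[u] += damping * rank[v] / n
            else:
                for u in out:
                    new[u] += damping * rank[v] / len(out)
        rank = new
    return rank

g = {"a": ["b"], "b": ["c"], "c": ["a", "b"]}
ranks = pagerank(g)
# 'b' comes out most authoritative: it is linked from both 'a' and 'c'
```

Sorting `ranks` then gives exactly the "rank nodes in a graph in some useful way" that the passage asks for.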
We sort of understand how the, you know, the algorithm itself works, but we really don't know what's going on in there and, and how things come out. So if a graph data store is able to produce the outcomes we want, it's like, here's a set of queries I want to be able to submit and then it comes out with useful content. Maybe the underlying data store is as opaque as a vector embeddings or something like that, but maybe it's fine. Maybe we don't necessarily need to understand it to get utility out of it. And so maybe if it's messy, that's okay. Um, that's, it's just another form of lossy compression. Uh, it's just lossy in a way that we just don't completely understand in terms of, because it's going to grow organically. Uh, and it's not structured. It's like, ah, we're just gonna throw a bunch of stuff in there. Let the, the equivalent of the embedding algorithm, whatever they called in graph land. Um, so the one with the best results wins. I think so. Yeah.swyx [00:18:26]: Or is this the practical side of me is like, yeah, it's, if it's useful, we don't necessarilyDharmesh [00:18:30]: need to understand it.swyx [00:18:30]: I have, I mean, I'm happy to push back as long as you want. Uh, it's not practical to evaluate like the 10 different options out there because it takes time. It takes people, it takes, you know, resources, right? Set. That's the first thing. Second thing is your evals are typically on small things and some things only work at scale. Yup. Like graphs. Yup.Dharmesh [00:18:46]: Yup. That's, yeah, no, that's fair. And I think this is one of the challenges in terms of implementation of graph databases is that the most common approach that I've seen developers do, I've done it myself, is that, oh, I've got a Postgres database or a MySQL or whatever. I can represent a graph with a very set of tables with a parent child thing or whatever. And that sort of gives me the ability, uh, why would I need anything more than that? 
And the answer is, well, if you don't need anything more than that, you don't need anything more than that. But there's a high chance that you're sort of missing out on the actual value that, uh, the graph representation gives you. Which is the ability to traverse the graph, uh, efficiently in ways that kind of going through the, uh, traversal in a relational database form, even though structurally you have the data, practically you're not gonna be able to pull it out in, in useful ways. Uh, so you wouldn't like represent a social graph, uh, in, in using that kind of relational table model. It just wouldn't scale. It wouldn't work.swyx [00:19:36]: Uh, yeah. Uh, I think we want to move on to MCP. Yeah. But I just want to, like, just engineering advice. Yeah. Uh, obviously you've, you've, you've run, uh, you've, you've had to do a lot of projects and run a lot of teams. Do you have a general rule for over-engineering or, you know, engineering ahead of time? You know, like, because people, we know premature engineering is the root of all evil. Yep. But also sometimes you just have to. Yep. When do you do it? Yes.Dharmesh [00:19:59]: It's a great question. This is, uh, a question as old as time almost, which is what's the right and wrong levels of abstraction. That's effectively what, uh, we're answering when we're trying to do engineering. I tend to be a pragmatist, right? So here's the thing. Um, lots of times doing something the right way. Yeah. It's like a marginal increased cost in those cases. Just do it the right way. And this is what makes a, uh, a great engineer or a good engineer better than, uh, a not so great one. It's like, okay, all things being equal. If it's going to take you, you know, roughly close to constant time anyway, might as well do it the right way. Like, so do things well, then the question is, okay, well, am I building a framework as the reusable library? 
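The traversal point above, that a parent/child table technically holds the graph but makes multi-hop queries painful, is easy to see in miniature: in SQL each extra hop is another self-join, while with an adjacency structure you just follow edges. A toy sketch (the social graph is invented for illustration):

```python
from collections import deque

# Toy adjacency-list "graph store". In a relational parent/child table,
# pulling out friends-of-friends means one self-join per hop; here the
# traversal simply walks edges breadth-first.
follows = {
    "ann": ["bob", "cat"],
    "bob": ["dan"],
    "cat": ["dan", "eve"],
    "dan": [],
    "eve": ["ann"],
}

def within_hops(graph, start, max_hops):
    """Everyone reachable from `start` in at most `max_hops` edges (BFS),
    mapped to their hop distance."""
    seen = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if seen[node] == max_hops:
            continue  # don't expand past the hop budget
        for nxt in graph[node]:
            if nxt not in seen:
                seen[nxt] = seen[node] + 1
                queue.append(nxt)
    seen.pop(start)  # exclude the starting node itself
    return seen

print(within_hops(follows, "ann", 2))
# {'bob': 1, 'cat': 1, 'dan': 2, 'eve': 2}
```

The depth is a runtime parameter here; in the relational encoding, each additional hop changes the shape of the query itself, which is the scaling problem the passage describes.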
To what degree, uh, what am I anticipating in terms of what's going to need to change in this thing? Uh, you know, along what dimension? And then I think like a business person in some ways, like what's the return on calories, right? So, uh, and you look at, um, energy, the expected value of it's like, okay, here are the five possible things that could happen, uh, try to assign probabilities like, okay, well, if there's a 50% chance that we're going to go down this particular path at some day, like, or one of these five things is going to happen and it costs you 10% more to engineer for that. It's basically, it's something that yields a kind of interest compounding value. Um, as you get closer to the time of, of needing that versus having to take on debt, which is when you under engineer it, you're taking on debt. You're going to have to pay off when you do get to that eventuality where something happens. One thing as a pragmatist, uh, so I would rather under engineer something than over engineer it. If I were going to err on the side of something, and here's the reason is that when you under engineer it, uh, yes, you take on tech debt, uh, but the interest rate is relatively known and payoff is very, very possible, right? Which is, oh, I took a shortcut here as a result of which now this thing that should have taken me a week is now going to take me four weeks. Fine. But if that particular thing that you thought might happen, never actually, you never have that use case transpire or just doesn't, it's like, well, you just save yourself time, right? And that has value because you were able to do other things instead of, uh, kind of slightly over-engineering it away, over-engineering it. But there's no perfect answers in art form in terms of, uh, and yeah, we'll, we'll bring kind of this layers of abstraction back on the code generation conversation, which we'll, uh, I think I have later on, butAlessio [00:22:05]: I was going to ask, we can just jump ahead quickly. 
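The "return on calories" heuristic above is just an expected-value calculation. A back-of-envelope sketch, where all the specific numbers are illustrative assumptions rather than figures from the conversation:

```python
def ev_of_engineering_ahead(p_needed, extra_cost_now, rework_cost_later):
    """Expected saving from building the flexibility now: with
    probability p_needed you avoid the later rework (paying off the
    tech debt), but you always pay the extra up-front cost."""
    return p_needed * rework_cost_later - extra_cost_now

# 50% chance the flexibility is ever needed; 10 units extra to build it
# now; 30 units of rework later if you skipped it and the need arrives.
print(ev_of_engineering_ahead(0.5, 10, 30))   # 5.0  -> worth doing now
print(ev_of_engineering_ahead(0.25, 10, 30))  # -2.5 -> under-engineer and wait
```

This also captures the asymmetry Dharmesh notes: under-engineering has a known, bounded "interest rate" (the rework cost), while over-engineering spends real calories on branches that may never transpire.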
Yeah. Like, as you think about vibe coding and all that, how does the. Yeah. Percentage of potential usefulness change when I feel like we over-engineering a lot of times it's like the investment in syntax, it's less about the investment in like arc exacting. Yep. Yeah. How does that change your calculus?Dharmesh [00:22:22]: A couple of things, right? One is, um, so, you know, going back to that kind of ROI or a return on calories, kind of calculus or heuristic you think through, it's like, okay, well, what is it going to cost me to put this layer of abstraction above the code that I'm writing now, uh, in anticipating kind of future needs. If the cost of fixing, uh, or doing under engineering right now. Uh, we'll trend towards zero that says, okay, well, I don't have to get it right right now because even if I get it wrong, I'll run the thing for six hours instead of 60 minutes or whatever. It doesn't really matter, right? Like, because that's going to trend towards zero to be able, the ability to refactor a code. Um, and because we're going to not that long from now, we're going to have, you know, large code bases be able to exist, uh, you know, as, as context, uh, for a code generation or a code refactoring, uh, model. So I think it's going to make it, uh, make the case for under engineering, uh, even stronger. Which is why I take on that cost. You just pay the interest when you get there, it's not, um, just go on with your life vibe coded and, uh, come back when you need to. Yeah.Alessio [00:23:18]: Sometimes I feel like there's no decision-making in some things like, uh, today I built a autosave for like our internal notes platform and I literally just ask them cursor. Can you add autosave? Yeah. I don't know if it's over under engineer. Yep. I just vibe coded it. Yep. 
And I feel like at some point we're going to get to the point where the models kindDharmesh [00:23:36]: of decide where the right line is, but this is where the, like the, in my mind, the danger is, right? So there's two sides to this. One is the cost of kind of development and coding and things like that stuff that, you know, we talk about. But then like in your example, you know, one of the risks that we have is that because adding a feature, uh, like a save or whatever the feature might be to a product as that price tends towards zero, are we going to be less discriminant about what features we add as a result of making products more complicated, which has a negative impact on the user and a negative impact on the business. Um, and so that's the thing I worry about if it starts to become too easy, are we going to be too promiscuous in our, uh, kind of extension, adding product extensions and things like that. It's like, ah, why not add X, Y, Z or whatever back then it was like, oh, we only have so many engineering hours or story points or however you measure things. Uh, that at least kept us in check a little bit. Yeah.Alessio [00:24:22]: And then over engineering, you're like, yeah, it's kind of like you're putting that on yourself. Yeah. Like now it's like the models don't understand that if they add too much complexity, it's going to come back to bite them later. Yep. So they just do whatever they want to do. Yeah. And I'm curious where in the workflow that's going to be, where it's like, Hey, this is like the amount of complexity and over-engineering you can do before you got to ask me if we should actually do it versus like do something else.Dharmesh [00:24:45]: So you know, we've already, let's like, we're leaving this, uh, in the code generation world, this kind of compressed, um, cycle time. Right.
It's like, okay, we went from auto-complete, uh, in the GitHub co-pilot to like, oh, finish this particular thing and hit tab to a, oh, I sort of know your file or whatever. I can write out a full function to you to now I can like hold a bunch of the context in my head. Uh, so we can do app generation, which we have now with Lovable and Bolt and Replit Agent. Yeah. And other things. So then the question is, okay, well, where does it naturally go from here? So we're going to generate products. Make sense. We might be able to generate platforms as though I want a platform for ERP that does this, whatever. And that includes the APIs, includes the product and the UI, and all the things that make for a platform. There's nothing that says we would stop like, okay, can you generate an entire software company someday? Right. Uh, with the platform and the monetization and the go-to-market and the whatever. And you know, that that's interesting to me in terms of, uh, you know, what, when you take it to almost ludicrous levels of abstraction.swyx [00:25:39]: It's like, okay, turn it to 11. You mentioned vibe coding, so I have to, this is a blog post I haven't written, but I'm kind of exploring it. Is the junior engineer dead?Dharmesh [00:25:49]: I don't think so. I think what will happen is that the junior engineer will be able to, if all they're bringing to the table is the fact that they are a junior engineer, then yes, they're likely dead. But hopefully if they can communicate with carbon-based life forms, they can interact with product, if they're willing to talk to customers, they can take their kind of basic understanding of engineering and how kind of software works. I think that has value. So I have a 14-year-old right now who's taking Python programming class, and some people ask me, it's like, why is he learning coding? And my answer is, is because it's not about the syntax, it's not about the coding.
What he's learning is like the fundamental thing of like how things work. And there's value in that. I think there's going to be timeless value in systems thinking and abstractions and what that means. And whether functions manifested as math, which he's going to get exposed to regardless, or there are some core primitives to the universe, I think, that the more you understand them, those are what I would kind of think of as like really large dots in your life that will have a higher gravitational pull and value to them that you'll then be able to. So I want him to collect those dots, and he's not resisting. So it's like, okay, while he's still listening to me, I'm going to have him do things that I think will be useful.swyx [00:26:59]: You know, part of one of the pitches that I evaluated for AI engineer is a term. And the term is that maybe the traditional interview path or career path of software engineer goes away, which is because what's the point of LeetCode? Yeah. And, you know, it actually matters more that you know how to work with AI and to implement the things that you want. Yep.Dharmesh [00:27:16]: That's one of the like interesting things that's happened with generative AI. You know, you go from machine learning and the models and just that underlying form, which is like true engineering, right? Like the actual, what I call real engineering. I don't think of myself as a real engineer, actually. I'm a developer. But now with generative AI. We call it AI and it's obviously got its roots in machine learning, but it just feels like fundamentally different to me. Like you have the vibe. It's like, okay, well, this is just a whole different approach to software development to so many different things.
And so I'm wondering now, it's like an AI engineer is like, if you were like to draw the Venn diagram, it's interesting because the cross between like AI things, generative AI and what the tools are capable of, what the models do, and this whole new kind of body of knowledge that we're still building out, it's still very young, intersected with kind of classic engineering, software engineering. Yeah.swyx [00:28:04]: I just described the overlap as it separates out eventually until it's its own thing, but it's starting out as a software. Yeah.Alessio [00:28:11]: That makes sense. So to close the vibe coding loop, the other big hype now is MCPs. Obviously, I would say Claude Desktop and Cursor are like the two main drivers of MCP usage. I would say my favorite is the Sentry MCP. I can pull in errors and then you can just put the context in Cursor. How do you think about that abstraction layer? Does it feel... Does it feel almost too magical in a way? Do you think it's like you get enough? Because you don't really see how the server itself is then kind of like repackaging theDharmesh [00:28:41]: information for you? I think MCP as a standard is one of the better things that's happened in the world of AI because a standard needed to exist and absent a standard, there was a set of things that just weren't possible. Now, we can argue whether it's the best possible manifestation of a standard or not. Does it do too much? Does it do too little? I get that, but it's just simple enough to both be useful and unobtrusive. It's understandable and adoptable by mere mortals, right? It's not overly complicated. You know, a reasonable engineer can stand up an MCP server relatively easily. The thing that has me excited about it is like, so I'm a big believer in multi-agent systems. And so that's going back to our kind of this idea of an atomic agent.
So imagine the MCP server, like obviously it calls tools, but the way I think about it, so I'm working on my current passion project is agent.ai. And we'll talk more about that in a little bit. More about the, I think we should, because I think it's interesting not to promote the project at all, but there's some interesting ideas in there. One of which is around, we're going to need a mechanism for, if agents are going to collaborate and be able to delegate, there's going to need to be some form of discovery and we're going to need some standard way. It's like, okay, well, I just need to know what this thing over here is capable of. We're going to need a registry, which Anthropic's working on. I'm sure others will and have been doing directories of, and there's going to be a standard around that too. How do you build out a directory of MCP servers? I think that's going to unlock so many things just because, and we're already starting to see it. So I think MCP or something like it is going to be the next major unlock because it allows systems that don't know about each other, don't need to, it's that kind of decoupling of like Sentry and whatever tools someone else was building. And it's not just about, you know, Claude Desktop or things like, even on the client side, I think we're going to see very interesting consumers of MCP, MCP clients versus just the chatbot-y kind of things. Like, you know, Claude Desktop and Cursor and things like that. But yeah, I'm very excited about MCP in that general direction.

swyx [00:30:39]: I think the typical cynical developer take, it's like, we have OpenAPI. Yeah. What's the new thing? I don't know if you have a, do you have a quick MCP versus everything else? Yeah.

Dharmesh [00:30:49]: So it's, so I like OpenAPI, right? So just a descriptive thing. It's OpenAPI. OpenAPI. Yes, that's what I meant. So it's basically a self-documenting thing. We can do machine-generated, lots of things from that output.
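The registry/directory idea described above can be sketched as a minimal in-memory directory of capability cards that a client queries by need. Everything here, the server names, capability tags, and endpoints, is hypothetical; real directories would add schemas, auth, and versioning.

```python
# Toy discovery registry: servers register a card describing what they
# can do; clients look up servers by capability. All names are invented.
REGISTRY = []

def register(name: str, capabilities: set, endpoint: str) -> None:
    """Add a server's card to the directory."""
    REGISTRY.append({"name": name, "capabilities": capabilities,
                     "endpoint": endpoint})

def discover(need: str) -> list:
    """Return the names of servers advertising the needed capability."""
    return [s["name"] for s in REGISTRY if need in s["capabilities"]]

register("sentry-mcp", {"errors", "alerts"}, "https://example.invalid/sentry")
register("domain-valuer", {"domains", "pricing"}, "https://example.invalid/domains")
```

With a registry like this, two systems that have never heard of each other can still find and call one another, which is the decoupling being argued for.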
It's a structured definition of an API. I get that, love it. But MCPs sort of are kind of use case specific. They're perfect for exactly what we're trying to use them for around LLMs in terms of discovery. It's like, okay, I don't necessarily need to know kind of all this detail. And so right now we have, we'll talk more about like MCP server implementations, but We will? I think, I don't know. Maybe we won't. At least it's in my head. It's like a back processor. But I do think MCP adds value above OpenAPI. It's, yeah, just because it solves this particular thing. And if we had come to the world, which we have, like, it's like, hey, we already have OpenAPI. It's like, if that were good enough for the universe, the universe would have adopted it already. There's a reason why MCP is taking off, because it marginally adds something that was missing before and doesn't go too far. And so that's why the kind of rate of adoption, you folks have written about this and talked about it. Yeah, why MCP won. Yeah. And it won because the universe decided that this was useful and maybe it gets supplanted by something else. Yeah. And maybe we discover, oh, maybe OpenAPI was good enough the whole time. I doubt that.

swyx [00:32:09]: The meta lesson, this is, I mean, he's an investor in DevTools companies. I work in developer experience at DevRel in DevTools companies. Yep. Everyone wants to own the standard. Yeah. I'm sure you guys have tried to launch your own standards. Actually, is HubSpot known for a standard? You know, obviously inbound marketing. But is there a standard or protocol that you ever tried to push? No.

Dharmesh [00:32:30]: And there's a reason for this. Yeah. Is that? And I don't mean, need to mean, speak for the people of HubSpot, but I personally. You kind of do. I'm not smart enough. That's not the, like, I think I have a. You're smart. Not enough for that. I'm much better off understanding the standards that are out there.
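One way to see the relationship between the two standards discussed above is that an MCP-style tool description can be derived almost mechanically from an OpenAPI operation: the discovery layer MCP adds sits on top of information OpenAPI already carries. The OpenAPI fields below (`operationId`, `summary`, `parameters`) follow the real specification; the simplified tool-card shape is illustrative and not the exact MCP schema.

```python
# Hypothetical OpenAPI operation for an issue-tracking API.
openapi_op = {
    "operationId": "listIssues",
    "summary": "List open issues for a project",
    "parameters": [
        {"name": "project", "in": "query", "schema": {"type": "string"}},
    ],
}

def to_tool(op: dict) -> dict:
    """Map an OpenAPI operation onto a simplified tool card an LLM
    client could use for discovery."""
    return {
        "name": op["operationId"],
        "description": op.get("summary", ""),
        "inputSchema": {
            "type": "object",
            "properties": {p["name"]: p["schema"]
                           for p in op.get("parameters", [])},
        },
    }

tool = to_tool(openapi_op)
```

Which is roughly the cynical developer's point, and also the rebuttal: the mapping is easy, but MCP standardizes the runtime discovery/invocation half that OpenAPI alone never did.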
And I'm more on the composability side. Let's, like, take the pieces of technology that exist out there, combine them in creative, unique ways. And I like to consume standards. I don't like to, and that's not that I don't like to create them. I just don't think I have the, both the raw wattage or the credibility. It's like, okay, well, who the heck is Dharmesh, and why should we adopt a standard he created?

swyx [00:33:07]: Yeah, I mean, there are people who don't monetize standards, like OpenTelemetry is a big standard, and LightStep never capitalized on that.

Dharmesh [00:33:15]: So, okay, so if I were to do a standard, there's two things that have been in my head in the past. One was around, a very, very basic one around, I don't even have the domain, I have a domain for everything, for open marketing. Because of the issue we had: HubSpot grew up in the marketing space. There we go. There was no standard around data formats and things like that. It doesn't go anywhere. But the other one, and I did not mean to go here, but I'm going to go here. It's called OpenGraph. I know the term was already taken, but it hasn't been used for like 15 years now for its original purpose. But what I think should exist in the world is right now, our information, all of us, nodes are in the social graph at Meta or the professional graph at LinkedIn. Both of which are actually relatively closed in actually very annoying ways. Like very, very closed, right? Especially LinkedIn. Especially LinkedIn. I personally believe that if it's my data, and if I would get utility out of it being open, I should be able to make my data open or publish it in whatever forms that I choose, as long as I have control over it as opt-in. So the idea is around OpenGraph that says, here's a standard, here's a way to publish it. I should be able to go to OpenGraph.org slash Dharmesh dot JSON and get it back. And it's like, here's your stuff, right?
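The proposed "OpenGraph" idea, a personal profile published as JSON at a well-known URL with per-field opt-in, can be sketched as follows. Everything here is hypothetical: the standard does not exist, and the field names, flags, and URL shape are invented to illustrate the opt-in control being described.

```python
import json

# Hypothetical self-owned profile: each field carries its own opt-in
# flag, so the owner decides what the published document exposes.
profile = {
    "handle": "dharmesh",
    "fields": {
        "name":      {"value": "Dharmesh Shah", "public": True},
        "email":     {"value": "hidden@example.invalid", "public": False},
        "interests": {"value": ["domains", "agents"], "public": True},
    },
}

def publish(p: dict) -> str:
    """Serialize only the fields the owner opted in to sharing; this is
    what a request to the profile's .json URL would return."""
    return json.dumps({k: f["value"]
                       for k, f in p["fields"].items() if f["public"]})

doc = json.loads(publish(profile))
```

The design choice worth noting is that openness is per field and revocable, rather than the all-or-nothing export that closed professional graphs offer today.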
And I can choose along the way and people can write to it and I can approve. And there can be an entire system. And if I were to do that, I would do it as a... Like a public benefit, non-profit-y kind of thing, as this is a contribution to society. I wouldn't try to commercialize that. Have you looked at ATProto? What's that? ATProto.

swyx [00:34:43]: It's the protocol behind Bluesky. Okay. My good friend, Dan Abramov, who was the face of React for many, many years, now works there. And he actually did a talk that I can send you, which basically kind of tries to articulate what you just said. But he does, he loves doing these like really great analogies, which I think you'll like. Like, you know, a lot of our data is behind a handle, behind a domain. Yep. So he's like, all right, what if we flip that? What if it was like our handle and then the domain? Yep. So, and that's really like your data should belong to you. Yep. And I should not have to wait 30 days for my Twitter data to export. Yep.

Dharmesh [00:35:19]: You should be able to at least be able to automate it or do like, yes, I should be able to plug it into an agentic thing. Yeah. Yes. I think we're... Because so much of our data is... Locked up. I think the trick here isn't that standard. It is getting the normies to care.

swyx [00:35:37]: Yeah. Because normies don't care.

Dharmesh [00:35:38]: That's true. But building on that, normies don't care. So, you know, privacy is a really hot topic and an easy word to use, but it's not a binary thing. Like there are use cases where, and we make these choices all the time, that I will trade, not all privacy, but I will trade some privacy for some productivity gain or some benefit to me that says, oh, I don't care about that particular data being online if it gives me this in return, or I don't mind sharing this information with this company.

Alessio [00:36:02]: If I'm getting, you know, this in return, but that sort of should be my option.
I think now with computer use, you can actually automate some of the exports. Yes. Like something we've been doing internally is like everybody exports their LinkedIn connections. Yep. And then internally, we kind of merge them together to see how we can connect our companies to customers or things like that.

Dharmesh [00:36:21]: And not to pick on LinkedIn, but since we're talking about it, but they feel strongly enough on the, you know, do not take LinkedIn data that they will block even browser use kind of things or whatever. They go to great, great lengths, even to see patterns of usage. And it says, oh, there's no way you could have, you know, gotten that particular thing or whatever without, and it's, so it's, there's...

swyx [00:36:42]: Wasn't there a Supreme Court case that they lost? Yeah.

Dharmesh [00:36:45]: So the one they lost was around someone that was scraping public data that was on the public internet. And that particular company had not signed any terms of service or whatever. It's like, oh, I'm just taking data that's on, there was no, and so that's why they won. But now, you know, the question is around, can LinkedIn... I think they can. Like, when you use, as a user, you use LinkedIn, you are signing up for their terms of service. And if they say, well, this kind of use of your LinkedIn account that violates our terms of service, they can shut your account down, right? They can. And they, yeah, so, you know, we don't need to make this a discussion. By the way, I love the company, don't get me wrong. I'm an avid user of the product. You know, I've got... Yeah, I mean, you've got over a million followers on LinkedIn, I think. Yeah, I do. And I've known people there for a long, long time, right? And I have lots of respect. And I understand even where the mindset originally came from of this kind of members-first approach to, you know, a privacy-first. I sort of get that.
But sometimes you sort of have to wonder, it's like, okay, well, that was 15, 20 years ago. There's likely some controlled ways to expose some data on some member's behalf and not just completely be a binary. It's like, no, thou shalt not have the data.

swyx [00:37:54]: Well, just pay for Sales Navigator.

Alessio [00:37:57]: Before we move to the next layer of instruction, anything else on MCP you mentioned? Let's move back and then I'll tie it back to MCPs.

Dharmesh [00:38:05]: So I think the... Open this with agent. Okay, so I'll start with... Here's my kind of running thesis, is that as AI and agents evolve, which they're doing very, very quickly, we're going to look at them more and more. I don't like to anthropomorphize. We'll talk about why this is not that. Less as just like raw tools and more like teammates. They'll still be software. They should self-disclose as being software. I'm totally cool with that. But I think what's going to happen is that in the same way you might collaborate with a team member on Slack or Teams or whatever you use, you can imagine a series of agents that do specific things just like a team member might do, that you can delegate things to. You can collaborate. You can say, hey, can you take a look at this? Can you proofread that? Can you try this? You can... Whatever it happens to be. So I think it is... I will go so far as to say it's inevitable that we're going to have hybrid teams someday. And what I mean by hybrid teams... So back in the day, hybrid teams were, oh, well, you have some full-time employees and some contractors. Then it was like hybrid teams are some people that are in the office and some that are remote. That's the kind of form of hybrid. The next form of hybrid is like the carbon-based life forms and agents and AI and some form of software. So let's say we temporarily stipulate that I'm right about that over some time horizon that eventually we're going to have these kind of digitally hybrid teams.
So if that's true, then the question you sort of ask yourself is that then what needs to exist in order for us to get the full value of that new model? It's like, okay, well... You sort of need to... It's like, okay, well, how do I... If I'm building a digital team, like, how do I... Just in the same way, if I'm interviewing for an engineer or a designer or a PM, whatever, it's like, well, that's why we have professional networks, right? It's like, oh, they have a presence on likely LinkedIn. I can go through that semi-structured, structured form, and I can see the experience of whatever, you know, self-disclosed. But, okay, well, agents are going to need that someday. And so I'm like, okay, well, this seems like a thread that's worth pulling on. That says, okay. So I... So agent.ai is out there. And it's LinkedIn for agents. It's LinkedIn for agents. It's a professional network for agents. And the more I pull on that thread, it's like, okay, well, if that's true, like, what happens, right? It's like, oh, well, they have a profile just like anyone else, just like a human would. It's going to be a graph underneath, just like a professional network would be. It's just that... And you can have its, you know, connections and follows, and agents should be able to post. That's maybe how they do release notes. Like, oh, I have this new version. Whatever they decide to post, it should just be able to... Behave as a node on the network of a professional network. As it turns out, the more I think about that and pull on that thread, the more and more things, like, start to make sense to me. So it may be more than just a pure professional network. So my original thought was, okay, well, it's a professional network and agents as they exist out there, which I think there's going to be more and more of, will kind of exist on this network and have the profile. 
But then, and this is always dangerous, I'm like, okay, I want to see a world where thousands of agents are out there in order for the... Because those digital employees, the digital workers don't exist yet in any meaningful way. And so then I'm like, oh, can I make that easier for, like... And so I have, as one does, it's like, oh, I'll build a low-code platform for building agents. How hard could that be, right? Like, very hard, as it turns out. But it's been fun. So now, agent.ai has 1.3 million users. 3,000 people have actually, you know, built some variation of an agent, sometimes just for their own personal productivity. About 1,000 of which have been published. And the reason this comes back to MCP for me, so imagine that and other networks, since I know agent.ai. So right now, we have an MCP server for agent.ai that exposes all the internally built agents that we have that do, like, super useful things. Like, you know, I have access to a Twitter API that I can subsidize the cost. And I can say, you know, if you're looking to build something for social media, these kinds of things, with a single API key, and it's all completely free right now, I'm funding it. That's a useful way for it to work. And then we have a developer to say, oh, I have this idea. I don't have to worry about OpenAI. I don't have to worry about, now, you know, this particular model is better. It has access to all the models with one key. And we proxy it kind of behind the scenes. And then expose it. So then we get this kind of community effect, right? That says, oh, well, someone else may have built an agent to do X. Like, I have an agent right now that I built for myself to do domain valuation for website domains because I'm obsessed with domains, right? And, like, there's no efficient market for domains. There's no Zillow for domains right now that tells you, oh, here are what houses in your neighborhood sold for. It's like, well, why doesn't that exist?
We should be able to solve that problem. And, yes, you're still guessing. Fine. There should be some simple heuristic. So I built that. It's like, okay, well, let me go look for past transactions. You say, okay, I'm going to type in agent.ai, agent.com, whatever domain. What's it actually worth? I'm looking at buying it. It can go and say, oh, which is what it does. It's like, I'm going to go look at are there any published domain transactions recently that are similar, either use the same word, same top-level domain, whatever it is. And it comes back with an approximate value, and it comes back with its kind of rationale for why it picked the value and comparable transactions. Oh, by the way, this domain sold for a published price. Okay. So that agent now, let's say, existed on the web, on agent.ai. Then imagine someone else says, oh, you know, I want to build a brand-building agent for startups and entrepreneurs to come up with names for their startup. Like a common problem, every startup is like, ah, I don't know what to call it. And so they type in five random words that kind of define whatever their startup is. And you can do all manner of things, one of which is like, oh, well, I need to find the domain for it. What are possible choices? Now it's like, okay, well, it would be nice to know if there's an aftermarket price for it, if it's listed for sale. Awesome. Then imagine calling this valuation agent. It's like, okay, well, I want to find where the arbitrage is, where the agent valuation tool says this thing is worth $25,000. It's listed on GoDaddy for $5,000. It's close enough. Let's go do that. Right? And that's the kind of composition use case in my future state: thousands of agents on the network, all discoverable through something like MCP. And then you as a developer of agents have access to all these kind of Lego building blocks based on what you're trying to solve.
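The "simple heuristic" valuation agent described above, score published sales by shared keyword and TLD, then average the best matches, can be sketched in a few lines. The comparable-sales data, the scoring weights, and the threshold are all invented; a real agent would pull live transaction data and use an LLM for the rationale.

```python
# Invented comparable sales; a real agent would fetch published data.
COMPS = [
    {"domain": "agent.com",  "price": 120_000},
    {"domain": "agents.ai",  "price": 40_000},
    {"domain": "widget.ai",  "price": 8_000},
]

def similarity(a: str, b: str) -> int:
    """Crude comparability score: +1 for a shared TLD, +2 for a shared
    or containing keyword. Weights are arbitrary."""
    name_a, tld_a = a.rsplit(".", 1)
    name_b, tld_b = b.rsplit(".", 1)
    score = 0
    if tld_a == tld_b:
        score += 1
    if name_a == name_b or name_a in name_b or name_b in name_a:
        score += 2
    return score

def estimate(domain: str):
    """Average the strong comparables; return (value, rationale)."""
    best = [c for c in COMPS if similarity(domain, c["domain"]) >= 2]
    if not best:
        return None, []
    value = sum(c["price"] for c in best) // len(best)
    return value, [c["domain"] for c in best]

value, rationale = estimate("agent.ai")
```

The rationale list is the "here are the comparable transactions" half of the agent's answer; the averaged figure is the guess, which, as conceded above, is still a guess.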
Then you blend in orchestration, which is getting better and better with the reasoning models now. Just describe the problem that you have. Now, the next layer that we're all contending with is that how many tools can you actually give an LLM before the LLM breaks? That number used to be like 15 or 20 before results kind of started to vary dramatically. And so that's the thing I'm thinking about now. It's like, okay, if I want to... If I want to expose 1,000 of these agents to a given LLM, obviously I can't give it all 1,000. Is there some intermediate layer that says, based on your prompt, I'm going to make a best guess at which agents might be able to be helpful for this particular thing? Yeah.

Alessio [00:44:37]: Yeah, like RAG for tools. Yep. I did build the Latent Space Researcher on agent.ai. Okay. Nice. Yeah, that seems like, you know, then there's going to be a Latent Space Scheduler. And then once I schedule a research, you know, and you build all of these things. By the way, my apologies for the user experience. You realize I'm an engineer. It's pretty good.

swyx [00:44:56]: I think it's a normie-friendly thing. Yeah. That's your magic. HubSpot does the same thing.

Alessio [00:45:01]: Yeah, just to like quickly run through it. You can basically create all these different steps. And these steps are like, you know, static versus like variable-driven things. How did you decide between this kind of like low-code-ish versus doing, you know, low-code with code backend versus like not exposing that at all? Any fun design decisions? Yeah. And this is, I think...

Dharmesh [00:45:22]: I think lots of people are likely sitting in exactly my position right now, working through the choice between deterministic and non-deterministic. Like if you're like in a business or building, you know, some sort of agentic thing, do you decide to do a deterministic thing? Or do you go non-deterministic and just let the LLM handle it, right, with the reasoning models?
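The "RAG for tools" intermediate layer mentioned above can be sketched without any model at all: pre-filter the catalog to the few agents whose descriptions best match the prompt, and hand only those to the LLM. Real systems would use embeddings; keyword overlap keeps this sketch dependency-free, and the agent names and descriptions are invented.

```python
# Hypothetical agent catalog: name -> one-line capability description.
AGENTS = {
    "domain-valuer": "estimate the market value of a website domain name",
    "brand-namer":   "suggest startup brand names from keywords",
    "tweet-writer":  "draft social media posts for twitter",
}

def shortlist(prompt: str, k: int = 2) -> list:
    """Return the k agents whose descriptions share the most words with
    the prompt -- a stand-in for embedding-based retrieval."""
    words = set(prompt.lower().split())
    ranked = sorted(
        AGENTS.items(),
        key=lambda kv: len(words & set(kv[1].split())),
        reverse=True,
    )
    return [name for name, _ in ranked[:k]]

picks = shortlist("what is this domain worth on the market")
```

Instead of 1,000 tool definitions, the LLM sees only the shortlist, which is exactly the intermediate guessing layer being asked for.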
The original idea and the reason I took the low-code stepwise, a very deterministic approach. A, the reasoning models did not exist at that time. That's thing number one. Thing number two is if you can get... If you know in your head... If you know in your head what the actual steps are to accomplish whatever goal, why would you leave that to chance? There's no upside. There's literally no upside. Just tell me, like, what steps do you need executed? So right now what I'm playing with... So one thing we haven't talked about yet, and people don't talk about UI and agents. Right now, the primary interaction model... Or they don't talk enough about it. I know some people have. But it's like, okay, so we're used to the chatbot back and forth. Fine. I get that. But I think we're going to move to a blend of... Some of those things are going to be synchronous as they are now. But some are going to be... Some are going to be async. It's just going to put it in a queue, just like... And this goes back to my... Man, I talk fast. But I have this... I only have one other speed. It's even faster. So imagine it's like if you're working... So back to my, oh, we're going to have these hybrid digital teams. Like, you would not go to a co-worker and say, I'm going to ask you to do this thing, and then sit there and wait for them to go do it. Like, that's not how the world works. So it's nice to be able to just, like, hand something off to someone. It's like, okay, well, maybe I expect a response in an hour or a day or something like that.

Dharmesh [00:46:52]: In terms of when things need to happen. So the UI around agents. So if you look at the output of agent.ai agents right now, they are the simplest possible manifestation of a UI, right? That says, oh, we have inputs of, like, four different types. Like, we've got a dropdown, we've got multi-select, all the things. It's like back in HTML, the original HTML 1.0 days, right?
Like, you're the smallest possible set of primitives for a UI. And it just says, okay, because we need to collect some information from the user, and then we go do steps and do things. And generate some output; HTML or Markdown are the two primary examples. So the thing I've been asking myself, if I keep going down that path. So people ask me, I get requests all the time. It's like, oh, can you make the UI sort of boring? I need to be able to do this, right? And if I keep pulling on that, it's like, okay, well, now I've built an entire UI builder thing. Where does this end? And so I think the right answer, and this is what I'm going to be backcoding once I get done here, is around injecting code generation, UI generation, into the agent.ai flow, right? As a builder, you're like, okay, I'm going to describe the thing that I want, much like you would do in a vibe coding world. But instead of generating the entire app, it's going to generate the UI that exists at some point in either that deterministic flow or something like that. It says, oh, here's the thing I'm trying to do. Go generate the UI for me. And I can go through some iterations. And what I think of it as a, so it's like, I'm going to generate the code, generate the code, tweak it, go through this kind of prompt style, like we do with vibe coding now. And at some point, I'm going to be happy with it. And I'm going to hit save. And that's going to become the action in that particular step. It's like a caching of the generated code so that I don't, like, incur any inference time costs. It's just the actual code at that point.

Alessio [00:48:29]: Yeah, I invested in a company called E2B, which does code sandbox. And they powered the LMArena web arena. So it's basically the, just like you do LMS, like text to text, they do the same for like UI generation. So if you're asking a model, how do you do it? But yeah, I think that's kind of where.

Dharmesh [00:48:45]: That's the thing I'm really fascinated by.
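The "generate once, hit save, cache it" flow described above is, at its core, memoizing an expensive generator. In the sketch below the LLM call is stubbed out with a counter so the caching behavior is visible; the function names and the generated markup are invented.

```python
# Stub standing in for an LLM codegen call; the counter lets us verify
# that the expensive step runs only once per spec.
calls = {"generator": 0}
_cache = {}

def generate_ui(spec: str) -> str:
    """Pretend LLM: turn a natural-language spec into UI code."""
    calls["generator"] += 1
    return f"<form><label>{spec}</label><input/></form>"

def ui_for(spec: str) -> str:
    """Return cached UI code; pay the inference cost only on a miss.
    'Hitting save' in the builder corresponds to populating the cache."""
    if spec not in _cache:
        _cache[spec] = generate_ui(spec)
    return _cache[spec]

first = ui_for("email address")
second = ui_for("email address")
```

After the save, every later run of that step is just the stored code, no inference-time cost and fully predictable output.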
So the early LLMs, you know, were understandably, but laughably, bad at simple arithmetic, right? That's the thing, like, normies like my wife would ask us, you call this AI? Like it can't, my son would be like, it's just stupid. It can't even do like simple arithmetic. And then like we've discovered over time that, and there's a reason for this, right? It's like, it's a large, there's, you know, the word language is in there for a reason in terms of what it's been trained on. It's not meant to do math, but now it's like, okay, well, the fact that it has access to a Python interpreter that I can actually call at runtime, that solves an entire body of problems that it wasn't trained to do. And it's basically a form of delegation. And so the thought that's kind of rattling around in my head is that that's great. So it's, it's like took the arithmetic problem and took it first. Now, like anything that's solvable through a relatively concrete Python program, it's able to do a bunch of things that it couldn't do before. Can we get to the same place with UI? I don't know what the future of UI looks like in an agentic AI world, but maybe let the LLM handle it, but not in the classic sense. Maybe it generates it on the fly, or maybe we go through some iterations and hit cache or something like that. So it's a little bit more predictable. Uh, I don't know, but yeah.

Alessio [00:49:48]: And especially when is the human supposed to intervene? So, especially if you're composing them, most of them should not have a UI because then they're just web hooking to somewhere else. I just want to touch back. I don't know if you have more comments on this.

swyx [00:50:01]: I was just going to ask when you, you said you got, you're going to go back to code. What
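The arithmetic-delegation point above, the model emits an expression and a real interpreter computes it, can be sketched with a small sandboxed evaluator. This is not how any particular LLM product implements its interpreter tool; it is a minimal stand-in showing why delegation sidesteps the "language models can't do math" problem. Walking the AST rather than calling `eval` keeps it restricted to plain arithmetic.

```python
import ast
import operator

# Only these binary operators are permitted in delegated expressions.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calc(expr: str):
    """Safely evaluate an arithmetic expression the 'model' hands off,
    e.g. the tool-call side of 'what is 12 * (3 + 4)?'."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("only plain arithmetic is allowed")
    return ev(ast.parse(expr, mode="eval"))
```

The model never "does" the math; it delegates a concrete program to a runtime that does, which is exactly the pattern being asked about for UI.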
Filip Kozera sees parallels between Excel's democratization of data analytics and Wordware's mission to put AI development in the hands of knowledge workers. Drawing inspiration from Excel's 750 million users (compared to 30 million software developers), Wordware is creating tools that balance the rigid structure of programming with the fuzziness of natural language. Filip explains why effective AI development requires working across multiple abstraction layers—from high-level concepts to detailed implementation—while preserving human creative control. He shares his vision for “word artisans” who will use AI to amplify their creative impact. Hosted by Sonya Huang, Sequoia Capital Mentioned in this episode: Lovable: Generative AI app that builds UIs and web apps Her: 2013 Spike Jonze film that Filip uses as an example of how voice will not be the best modality to express knowledge work. Descript: AI video editing app that Filip uses a lot. Granola: AI notetaking app Filip uses every day. Gemini 2.0 Pro: Google's newest long context model that can handle 6000-page PDFs. Limitless pendant: Wearable device for collecting personal conversational context to drive AI experiences that Filip can't wait to ship. DeepLearning.AI: Andrew Ng's amazing resource for learning about AI 3Blue1Brown: Grant Sanderson's incredible channel on YouTube that explains math and AI visually.
Talk Python To Me - Python conversations for passionate developers
In this episode, we welcome back Will McGugan, the creator of the wildly popular Rich library and founder of Textualize. We'll dive into Will's latest article on "Algorithms for High Performance Terminal Apps" and explore how he's quietly revolutionizing what's possible in the terminal, from smooth animations and dynamic widgets to full-on TUI (or should we say GUI?) frameworks. Whether you're looking to supercharge your command-line tools or just curious how Python can push the limits of text-based UIs, you'll love hearing how Will's taking a modern, web-inspired approach to old-school terminals. Episode sponsors Posit Python in Production Talk Python Courses Links from the show Algorithms for high performance terminal apps post: textual.textualize.io Textual Demo: github.com Textual: textualize.io ZeroVer: 0ver.org memray: github.com Posting app: posting.sh Bulma CSS framework: bulma.io JP Term: davidbrochart.github.io Rich: github.com btop: github.com starship: starship.rs Watch this episode on YouTube: youtube.com Episode transcripts: talkpython.fm --- Stay in touch with us --- Subscribe to Talk Python on YouTube: youtube.com Talk Python on Bluesky: @talkpython.fm at bsky.app Talk Python on Mastodon: talkpython Michael on Bluesky: @mkennedy.codes at bsky.app Michael on Mastodon: mkennedy
TestTalks | Automation Awesomeness | Helping YOU Succeed with Test Automation
In today's episode, we're diving into proactive observability and testing with our special guest, Anam Hira, cofounder of Reveal.ai. Anam, who also has experience working at Uber AI, shares an intriguing journey where he developed "Dragon Crawl," an innovative project aimed at tackling challenges Uber faced with its end-to-end testing across multiple cities. We explore how Dragon Crawl utilized LLMs to enhance testing reliability, making tests less flaky across varied UIs. Anam's journey didn't stop there. He co-founded Reveal, a platform that takes testing and observability to a new level by connecting end-to-end tests with telemetry data. This modern approach, termed proactive observability, allows for detecting bugs before they hit production, saving companies significant time and cost. Join us as we explore the principles of proactive observability, how Reveal leverages telemetry for seamless integration, and its impact on testing efficiency. Whether you're a startup or an enterprise, if you're keen to ship faster without sacrificing quality, this is an episode you won't want to miss!
In 1998 Bioware changed the gaming landscape with Baldur's Gate, a Dungeons and Dragons inspired adventure. Taking cues from second edition D&D and the many stories written about it, Bioware wrote a low level adventure that jammed everything they could find inside it. They attempted to copy mage spells directly from the manual, and even added in famous characters like Drizzt. It was exactly what the fans had been waiting for. Not only were they able to see everything come to life, the game used the radical real time with pause system, allowing combat to flow far more freely than other turn-based games before it. Fights were fast and frequently deadly, adding a thrill to RPG combat, even if it came at the cost of precision and control. But CRPGs are now a dime a dozen. Not only are there later games in the series, Baldur's Gate now has to compete against Pathfinder, Warhammer and Divinity, to name a few. With all the advancements in tech, in UIs, in skill systems, can the original Baldur's Gate really compete? Has it stood the test of time, or has it been made obsolete by the wave of modern RPGs that came in its wake?

On this episode, we discuss:

Exploration
Much of the early-mid game of Baldur's Gate 1 is spent exploring the environment around the towns and keeps that are dotted around the Sword's Coast. Is this exploration enjoyable to engage with intrinsically, and does it provide enough extrinsic rewards to motivate you to continue?

Combat
Baldur's Gate's combat is scrappy, disorganised and messy. There is a high level of vulnerability on both sides, spells have unclear areas of effect, and high rolls can produce instant-death in many circumstances.
Does this combat system lead to dynamic and novel encounters that allow for tactical depth, or does it just encourage reloading until you get the outcome that you want?

Character Customisation
Baldur's Gate is a very complex game under the hood, but those calculations, and level up decisions are mostly hidden from the player. Does the game give sufficient options to the player to customise their character with skill choices and gear, or is it all predetermined the moment you first select your character?

We answer these questions and many more on the 128th episode of the Retro Spectives Podcast!

Intro Music: KieLoBot - Tanzen K
Outro Music: Rockit Maxx - One point to another
Baldurs Gate OST - Michael Hoenig

Are you meant to eventually multiclass everyone to a mage in DND? Is Sanctuary really as overpowered as it seems? When will you play Baldur's Gate 2, you pair of sniveling cowards? Come let us know what you think on our community discord server! You can support the show monetarily on our Buy me a Coffee Page!
A challenge I frequently hear about from subscribers to my insights mailing list is how to design B2B data products for multiple user types with differing needs. From dashboards to custom apps and commercial analytics / AI products, data product teams often struggle to create a single solution that meets the diverse needs of technical and business users in B2B settings. If you're encountering this issue, you're not alone! In this episode, I share my advice for tackling this challenge including the gift of saying "no.” What are the patterns you should be looking out for in your customer research? How can you choose what to focus on with limited resources? What are the design choices you should avoid when trying to build these products? I'm hoping by the end of this episode, you'll have some strategies to help reduce the size of this challenge—particularly if you lack a dedicated UX team to help you sort through your various user/stakeholder demands. Highlights/ Skip to The importance of proper user research and clustering “jobs to be done” around business importance vs. 
task frequency—ignoring the rest until your solution can show measurable value (4:29)
What "level" of skill to design for, and why "as simple as possible" isn't what I generally recommend (13:44)
When it may be advantageous to use role or feature-based permissions to hide/show/change certain aspects, UI elements, or features (19:50)
Leveraging AI and LLMs in-product to allow learning about the user and progressive disclosure and customization of UIs (26:44)
Leveraging the "old" solution of rapid prototyping—which is now faster than ever with AI, and can accelerate learning (capturing user feedback) (31:14)
5 things I do not recommend doing when trying to satisfy multiple user types in your B2B AI or analytics product (34:14)

Quotes from Today's Episode
If you're not talking to your users and stakeholders sufficiently, you're going to have a really tough time building a successful data product for one user – let alone for multiple personas. Listen for repeating patterns in what your users are trying to achieve (tasks they are doing). Focus on the jobs and tasks they do most frequently or the ones that bring the most value to their business. Forget about the rest until you've proven that your solution delivers real value for those core needs. It's more about understanding the problems and needs, not just the solutions. The solutions tend to be easier to design when the problem space is well understood. Users often suggest solutions, but it's our job to focus on the core problem we're trying to solve; simply entering inbound requests verbatim into JIRA and then "eating away" at the list is not usually a reliable strategy. (5:52)

I generally recommend not going for "easy as possible" at the cost of shallow value. Instead, you're going to want to design for some "mid-level" ability, understanding that this may make early user experiences with the product more difficult. Why?
Oversimplification can mislead because data is complex, problems are multivariate, and data isn't always ideal. There are also "n" number of "not-first" impressions users will have with your product. This also means there is only one "first impression" they have. As such, the idea conceptually is to design an amazing experience for the "n" experiences, but not to the point that users never realize value and give up on the product. While I'd prefer no friction, technical products sometimes will have to have a little friction up front; however, don't use this as an excuse for poor design. This is hard to get right, even when you have design resources, and it's why UX design matters: thinking this through ends up determining, in part, whether users obtain the promise of value you made to them. (14:21)

As an alternative to rigid role and feature-based permissions in B2B data products, you might consider leveraging AI and/or LLMs in your UI as a means of simplifying and customizing the UI for particular users. This approach allows users to potentially interrogate the product about the UI, customize the UI, and even learn over time about the user's questions (jobs to be done) such that it becomes organically customized over time to their needs. This is in contrast to the rigid buckets that role and permission-based customization present. However, as discussed in my previous episode (164 - "The Hidden UX Taxes that AI and LLM Features Impose on B2B Customers Without Your Knowledge"), designing effective AI features and capabilities can also make things worse due to the probabilistic nature of the responses GenAI produces. As such, this approach may benefit from a UX designer or researcher familiar with designing data products. Understanding what "quality" means to the user, and how to measure it, is especially critical if you're going to leverage AI and LLMs to make the product UX better.
(20:13)

The old solution of rapid prototyping is even more valuable now—because it's possible to prototype even faster. However, prototyping is not just about learning if your solution is on track. Whether you use AI or pencil and paper, prototyping early in the product development process should be framed as a "prop to get users talking." In other words, it is a prop to facilitate problem and need clarity—not solution clarity. Its purpose is to spark conversation and determine if you're solving the right problem. As you iterate, your need to continually validate the problem should shrink, which will present itself in the form of consistent feedback you hear from end users. This is the point where you know you can focus on the design of the solution. Innovation happens when we learn; so the goal is to increase your learning velocity. (31:35)

Have you ever been caught in the trap of prioritizing feature requests based on volume? I get it. It's tempting to give people what they think they want. For example, imagine ten users clamoring for control over specific parameters in your machine learning forecasting model. You could give them that control, thinking you're solving the problem because, hey, that's what they asked for! But did you stop to ask why they want that control? The reasons behind those requests could be wildly different. By simply handing over the keys to all the model parameters, you might be creating a whole new set of problems. Users now face a "usability tax," trying to figure out which parameters to lock and which to let float. The key takeaway? Focus on how frequently the same problems occur across your users, not just how often a given tactic or "solution" method (e.g., a "model," "dashboard," or "feature") appears in a stakeholder or user request. Remember, problems are often disguised as solutions. We've got to dig deeper and uncover the real needs, not just address the symptoms. (36:19)
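The role- or feature-based permissions approach mentioned at 19:50 can be sketched as a simple capability lookup that gates which UI elements each persona sees. The roles and feature names below are hypothetical illustrations, not from the episode:

```python
# Illustrative sketch of role-based feature gating for a B2B analytics UI.
# The roles and feature names here are made-up examples.

ROLE_FEATURES = {
    "analyst": {"raw_sql", "model_params", "export_csv", "dashboards"},
    "manager": {"dashboards", "export_csv"},
    "viewer":  {"dashboards"},
}

def visible_features(role: str) -> set[str]:
    """Return the set of UI features a given role should see."""
    return ROLE_FEATURES.get(role, set())

def can_use(role: str, feature: str) -> bool:
    """Gate a single UI element or capability on the user's role."""
    return feature in visible_features(role)
```

The trade-off discussed in the episode applies here: these buckets are rigid, so a user whose needs straddle two roles either sees too much or too little.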
AM Springfield live from American Legion Post 32 on Sangamon Avenue. Orion Marty, Mark Selvaggio, Kurt Trampe and UIS athletic director Mike Hermann were among the guests. See omnystudio.com/listener for privacy information.
While everyone is now repeating that 2025 is the "Year of the Agent", OpenAI is heads down building towards it. In the first 2 months of the year they released Operator and Deep Research (arguably the most successful agent archetype so far), and today they are bringing a lot of those capabilities to the API:
* Responses API
* Web Search Tool
* Computer Use Tool
* File Search Tool
* A new open source Agents SDK with integrated Observability Tools
We cover all this and more in today's lightning pod on YouTube! More details here:

Responses API
In our Michelle Pokrass episode we talked about the Assistants API needing a redesign. Today OpenAI is launching the Responses API, "a more flexible foundation for developers building agentic applications". It's a superset of the chat completions API, and the suggested starting point for developers working with OpenAI models. One of the big upgrades is the new set of built-in tools for the Responses API: Web Search, Computer Use, and Files.

Web Search Tool
We previously had Exa AI on the podcast to talk about web search for AI. OpenAI is also now joining the race; the Web Search API is actually a new "model" that exposes two 4o fine-tunes: gpt-4o-search-preview and gpt-4o-mini-search-preview. These are the same models that power ChatGPT Search, and are priced at $30/1000 queries and $25/1000 queries respectively. The killer feature is inline citations: you not only get a link to a page, but also a deep link to exactly where your query was answered in the result page.

Computer Use Tool
The model that powers Operator, called Computer-Using Agent (CUA), is also now available in the API. The computer-use-preview model is SOTA on most benchmarks, achieving 38.1% success on OSWorld for full computer use tasks, 58.1% on WebArena, and 87% on WebVoyager for web-based interactions. As you will notice in the docs, `computer-use-preview` is both a model and a tool through which you can specify the environment.
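These built-in tools are configured per request. Here is a minimal sketch of what a Responses API request with the web search tool might look like; the field names follow the launch as described above, so treat the exact shapes as assumptions, and the query string is a made-up example:

```python
# Sketch of a Responses API request that turns on the built-in web search
# tool. Payload shape is based on the launch described above.
request = {
    "model": "gpt-4o",
    "tools": [{"type": "web_search_preview"}],
    "input": "What did OpenAI add to its API for agent builders?",
}

# With the official Python SDK, this would be sent roughly as:
#   from openai import OpenAI
#   client = OpenAI()
#   response = client.responses.create(**request)
#   print(response.output_text)  # answer, with inline citations in the output
```

Swapping the tool entry for `{"type": "file_search", ...}` or the computer use tool follows the same pattern: one request, with tools declared alongside the input.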
Usage is priced at $3/1M input tokens and $12/1M output tokens, and it's currently only available to users in tiers 3-5.

File Search Tool
File Search was also available in the Assistants API, and it's now coming to Responses too. OpenAI is bringing search + RAG all under one umbrella, and we'll definitely see more people trying to find new ways to build all-in-one apps on OpenAI. Usage is priced at $2.50 per thousand queries and file storage at $0.10/GB/day, with the first GB free.

Agents SDK: Swarms++!
https://github.com/openai/openai-agents-python
To bring it all together, after the viral reception to Swarm, OpenAI is releasing an officially supported agents framework (which was previewed at our AI Engineer Summit) with 4 core pieces:
* Agents: Easily configurable LLMs with clear instructions and built-in tools.
* Handoffs: Intelligently transfer control between agents.
* Guardrails: Configurable safety checks for input and output validation.
* Tracing & Observability: Visualize agent execution traces to debug and optimize performance.
Multi-agent workflows are here to stay! OpenAI now explicitly designs for a set of common agentic patterns: Workflows, Handoffs, Agents-as-Tools, LLM-as-a-Judge, Parallelization, and Guardrails. OpenAI previewed this in part 2 of their talk at NYC. Further coverage of the launch from Kevin Weil, WSJ, and OpenAIDevs, AMA here.

Show Notes
* Assistants API
* Swarm (OpenAI)
* Fine-Tuning in AI
* 2024 OpenAI DevDay Recap with Romain
* Michelle Pokrass episode (API lead)

Timestamps
* 00:00 Intros
* 02:31 Responses API
* 08:34 Web Search API
* 17:14 Files Search API
* 18:46 Files API vs RAG
* 20:06 Computer Use / Operator API
* 22:30 Agents SDK

And of course you can catch up with the full livestream here:

Transcript
Alessio [00:00:03]: Hey, everyone. Welcome back to another Latent Space Lightning episode.
This is Alessio, partner and CTO at Decibel, and I'm joined by Swyx, founder of Smol AI.
swyx [00:00:11]: Hi, and today we have a super special episode because we're talking with our old friend Romain. Hi, welcome.
Romain [00:00:19]: Thank you. Thank you for having me.
swyx [00:00:20]: And Nikunj, who is most famously, if anyone has ever tried to get any access to anything on the API, Nikunj is the guy. So I know your emails because I look forward to them.
Nikunj [00:00:30]: Yeah, nice to meet all of you.
swyx [00:00:32]: I think that we're basically convening today to talk about the new API. So perhaps you guys want to just kick off. What is OpenAI launching today?
Nikunj [00:00:40]: Yeah, so I can kick it off. We're launching a bunch of new things today. We're going to do three new built-in tools. So we're launching the web search tool. This is basically ChatGPT Search, but available in the API. We're launching an improved file search tool. So this is you bringing your data to OpenAI. You upload it. We, you know, take care of parsing it, chunking it. We're embedding it, making it searchable, give you this like ready vector store that you can use. So that's the file search tool. And then we're also launching our computer use tool. So this is the tool behind the Operator product in ChatGPT. So that's coming to developers today. And to support all of these tools, we're going to have a new API. So, you know, we launched chat completions, like I think March 2023 or so. It's been a while. So we're looking for an update over here to support all the new things that the models can do. And so we're launching this new API. It is, you know, it works with tools. We think it'll be like a great option for all the future agentic products that we build. And so that is also launching today. Actually, the last thing we're launching is the agents SDK.
We launched this thing called Swarm last year where, you know, it was an experimental SDK for people to do multi-agent orchestration and stuff like that. It was supposed to be like educational, experimental, but like people, people really loved it. They like ate it up. And so we are like, all right, let's, let's upgrade this thing. Let's give it a new name. And so we're calling it the agents SDK. It's going to have built-in tracing in the OpenAI dashboard. So lots of cool stuff going out. So, yeah.
Romain [00:02:14]: That's a lot, but we said 2025 was the year of agents. So there you have it, like a lot of new tools to build these agents for developers.
swyx [00:02:20]: Okay. I guess, I guess we'll just kind of go one by one and we'll leave the agents SDK towards the end. So responses API, I think the sort of primary concern that people have and something I think I've voiced to you guys when, when, when I was talking with you in the, in the planning process was, is chat completions going away? So I just wanted to let it, let you guys respond to the concerns that people might have.
Romain [00:02:41]: Chat completion is definitely like here to stay, you know, it's a bare metal API we've had for quite some time. Lots of tools built around it. So we want to make sure that it's maintained and people can confidently keep on building on it. At the same time, it was kind of optimized for a different world, right? It was optimized for a pre-multi-modality world. We also optimized for kind of single turn. It takes a prompt in, it takes a response out. And now with these agentic workflows, we, we noticed that like developers and companies want to build longer horizon tasks, you know, like things that require multiple turns to get the task accomplished. And computer use is one of those, for instance. And so that's why the responses API came to life to kind of support these new agentic workflows.
But chat completion is definitely here to stay.
swyx [00:03:27]: And assistants API, we've, uh, has a target sunset date of first half of 2026. So this is kind of like, in my mind, there was a kind of very poetic mirroring of the API with the models. This, I kind of view this as like kind of the merging of assistants API and chat completions, right. Into one unified responses. So it's kind of like how GPT and the o-series models are also unifying.
Romain [00:03:48]: Yeah, that's exactly the right, uh, that's the right framing, right? Like, I think we took the best of what we learned from the assistants API, especially like being able to access tools very, uh, very like conveniently, but at the same time, like simplifying the way you have to integrate, like, you no longer have to think about six different objects to kind of get access to these tools with the responses API. You just get one API request and suddenly you can weave in those tools, right?
Nikunj [00:04:12]: Yeah, absolutely. And I think we're going to make it really easy and straightforward for assistants API users to migrate over to responses, right, to the API without any loss of functionality or data. So our plan is absolutely to add, you know, assistant-like objects and thread-like objects that work really well with the responses API. We'll also add like the code interpreter tool, which is not launching today, but it'll come soon. And, uh, we'll add async mode to responses API, because that's another difference with, with, uh, assistants. It will have webhooks and stuff like that, but I think it's going to be like a pretty smooth transition. Uh, once we have all of that in place, they'll have like a full year to migrate and, and we'll help them through any issues they, they, they face. So overall, I feel like assistants users are really going to benefit from this longer term, uh, with this more flexible primitive.
Alessio [00:05:01]: How should people think about when to use each type of API?
So I know that in the past, the assistants one was maybe more stateful, kind of like long running, many tool use, kind of like file based things. And the chat completions is more stateless, you know, kind of like traditional completion API. Is that still the mental model that people should have?
Nikunj [00:05:20]: So the responses API is going to support, at launch, everything that chat completions supports, and then over time, it's going to support everything that assistants supports. So it's going to be a pretty good fit for anyone starting out with OpenAI. Uh, they should be able to like go to responses. Responses, by the way, also has a stateless mode, so you can pass in store false and it'll make the whole API stateless, just like chat completions. We're really trying to like get this unification story in so that people don't have to juggle multiple endpoints. That being said, like chat completions is just like the most widely adopted API, it's, it's so popular. So we're still going to like support it for years with like new models and features. But if you're a new user, or if you're existing and you want to like tap into some of these like built-in tools or something, you should feel, feel totally fine migrating to responses and you'll have more capabilities and performance than chat completions.
swyx [00:06:16]: I think the messaging that, I agree, that I think resonated the most when I talked to you was that it is a strict superset, right? Like you should be able to do everything that you could do in chat completions and with assistants. And the thing that I just assumed that because you're, you're now, you know, by default is stateful, you're actually storing the chat logs or the chat state. I thought you'd be charging me for it. So, you know, to me, it was very surprising that you figured out how to make it free.
Nikunj [00:06:43]: Yeah, it's free. We store your state for 30 days.
You can turn it off. But yeah, it's, it's free. And the interesting thing on state is that it just like makes, particularly for me, it makes like debugging things and building things so much simpler, where I can like create a responses object that's like pretty complicated and part of this more complex application that I've built, I can just go into my dashboard and see exactly what happened: that I messed up my prompt, that it like didn't call one of these tools, that I misconfigured one of the tools. Like the visual observability of everything that you're doing is so, so helpful. So I'm excited, like about people trying that out and getting benefits from it, too.
swyx [00:07:19]: Yeah, it's a, it's really, I think, a really nice to have. But all I'll say is that my friend Corey Quinn says that anything that can be used as a database will be used as a database. So be prepared for some abuse.
Romain [00:07:34]: All right. Yeah, that's a good one. Some of that I've tried with the metadata. That's some people are very, very creative at stuffing data into an object. Yeah.
Nikunj [00:07:44]: And we do have metadata with responses. Exactly. Yeah.
Alessio [00:07:48]: Let's get through it. All of these. So web search. I think when I first saw web search, I thought you were going to just expose an API that then returns kind of like a nice list of things. But the way it's named is like gpt-4o-search-preview. So I'm guessing you're using basically the same model that is in the ChatGPT search, which is fine-tuned for search. I'm guessing it's a different model than the base one. And it's impressive, the jump in performance. So just to give an example, in SimpleQA, GPT-4o is 38% accuracy, 4o-search is 90%. But we always talk about how tools around models is not everything you need, like tools around it are just as important.
So, yeah, maybe give people a quick review on like the work that went into making this special.
Nikunj [00:08:29]: Should I take that?
Alessio [00:08:29]: Yeah, go for it.
Nikunj [00:08:30]: So firstly, we're launching web search in two ways. One in responses API, which is our API for tools. It's going to be available as a web search tool itself. So you'll be able to go tools, turn on web search and you're ready to go. We still wanted to give chat completions people access to real time information. So in the chat completions API, which does not support built-in tools, we're launching the direct access to the fine-tuned model that ChatGPT search uses, and we call it gpt-4o-search-preview. And how is this model built? Basically, our search research team has been working on this for a while. Their main goal is to, like, get information, like get a bunch of information from all of our data sources that we use to gather information for search and then pick the right things and then cite them as accurately as possible. And that's what the search team has really focused on. They've done some pretty cool stuff. They use like synthetic data techniques. They've done like o-series model distillation to, like, make these 4o fine-tunes really good. But yeah, the main thing is, like, can it remain factual? Can it answer questions based on what it retrieves and get cited accurately? And that's what this like fine-tuned model really excels at. And so, yeah, so we're excited that, like, it's going to be directly available in chat completions along with being available as a tool. Yeah.
Alessio [00:09:49]: Just to clarify, if I'm using the responses API, this is a tool. But if I'm using chat completions, I have to switch model. I cannot use o1 and call search as a tool. Yeah, that's right.
Exactly.
Romain [00:09:58]: I think what's really compelling, at least for me and my own uses of it so far, is that when you use, like, web search as a tool, it combines nicely with every other tool and every other feature of the platform. So think about this for a second. For instance, imagine you have, like, a responses API call with the web search tool, but suddenly you turn on function calling. You also turn on, let's say, structured outputs. So you can have, like, the ability to structure any data from the web in real time in the JSON schema that you need for your application. So it's quite powerful when you start combining those features and tools together. It's kind of like an API for the Internet almost, you know, like you get, like, access to the precise schema you need for your app. Yeah.
Alessio [00:10:39]: And then just to wrap up on the infrastructure side of it, I read on the post that publishers can choose to appear in the web search. So are people by default in it? Like, how can we get Latent Space in the web search API?
Nikunj [00:10:53]: Yeah. Yeah. I think we have some documentation around how websites, publishers can control, like, what shows up in a web search tool. And I think you should be able to, like, read that. I think we should be able to get Latent Space in for sure. Yeah.
swyx [00:11:10]: You know, I think so. I compare this to a broader trend that I started covering last year of online LLMs. Actually, Perplexity, I think, was the first. It was the first to offer an API that is connected to search, and then Gemini had the sort of search grounding API. And I think you guys, I actually didn't, I missed this in the original reading of the docs, but you even give like citations with like the exact sub-paragraph that is matching, which I think is the standard nowadays. I think my question is, how do we take what a knowledge cutoff is for something like this, right?
Because like now, basically there's no knowledge cutoff, it's always live, but then there's a difference between what the model has sort of internalized in its back propagation and what it is searching up in its RAG.
Romain [00:11:53]: I think it kind of depends on the use case, right? And what you want to showcase as the source. Like, for instance, you take a company like Hebbia that has used this like web search tool. They can combine, like for credit firms or law firms, they can find like, you know, public information from the internet with the live sources and citation that sometimes you do want to have access to, as opposed to like the internal knowledge. But if you're building something different, well, like, you just want to have the information. If you want to have an assistant that relies on the deep knowledge that the model has, you may not need to have these like direct citations. So I think it kind of depends on the use case a little bit, but there are many, uh, many companies like Hebbia that will need that access to these citations to precisely know where the information comes from.
swyx [00:12:34]: Yeah, yeah, uh, for sure. And then one thing on the, on like the breadth, you know, I think a lot of the deep research, open deep research implementations have this sort of hyperparameter about, you know, how deep they're searching and how wide they're searching. I don't see that in the docs. But is that something that we can tune? Is that something you recommend thinking about?
Nikunj [00:12:53]: Super interesting. It's definitely not a parameter today, but we should explore that. It's very interesting. I imagine like how you would do it with the web search tool and responses API is you would have some form of like, you know, agent orchestration over here where you have a planning step and then each like web search call that you do like explicitly goes a layer deeper and deeper and deeper. But it's not a parameter that's available out of the box. But it's a cool.
It's a cool thing to think about. Yeah.
swyx [00:13:19]: The only guidance I'll offer there is a lot of these implementations offer top K, which is like, you know, top 10, top 20, but actually you don't really want that. You want like sort of some kind of similarity cutoff, right? Like some matching score cutoff, because if there's only five documents that match, fine; if there's 500 that match, maybe that's what I want, right? Yeah. But also that might, that might make my costs very unpredictable because the costs are something like $30 per thousand queries, right? So yeah. Yeah.
Nikunj [00:13:49]: I guess you could, you could have some form of like a context budget and then you're like, go as deep as you can and pick the best stuff and put it into like X number of tokens. There could be some creative ways of, of managing cost, but yeah, that's a super interesting thing to explore.
Alessio [00:14:05]: Do you see people using the files and the search API together where you can kind of search and then store everything in the file so the next time I'm not paying for the search again? And like, yeah, how should people balance that?
Nikunj [00:14:17]: That's actually a very interesting question. And let me first tell you about how I've seen, a really cool way I've seen people use files and search together is they put their user preferences or memories in the vector store and so a query comes in, you use the file search tool to like get someone's like reading preferences or like fashion preferences and stuff like that, and then you search the web for information or products that they can buy related to those preferences and you then render something beautiful to show them, like, here are five things that you might be interested in. So that's how I've seen like file search, web search work together. And by the way, that's like a single responses API call, which is really cool. So you just like configure these things, go boom, and like everything just happens.
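swyx's suggestion here, filtering retrieved documents by a similarity cutoff (optionally capped, to keep costs predictable) rather than a fixed top-K, can be sketched in a few lines. The document IDs, scores, and threshold below are made-up illustrations:

```python
def filter_matches(scored_docs, min_score=0.75, max_docs=None):
    """Keep documents above a similarity cutoff instead of a fixed top-K.

    scored_docs: list of (doc_id, score) pairs, score in [0, 1].
    min_score:   similarity threshold; only matches above it survive.
    max_docs:    optional cap, to keep retrieval (and query) costs bounded.
    """
    hits = sorted(
        (d for d in scored_docs if d[1] >= min_score),
        key=lambda d: d[1],
        reverse=True,
    )
    return hits[:max_docs] if max_docs else hits

# Five strong matches pass the cutoff; a fixed top-10 would have padded
# the context with weak matches like ("c", 0.40).
docs = [("a", 0.91), ("b", 0.88), ("c", 0.40), ("d", 0.80), ("e", 0.77), ("f", 0.76)]
```

The `max_docs` cap is one way to address the cost-unpredictability concern raised above: the cutoff decides quality, the cap bounds spend.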
But yeah, that's how I've seen like files and web work together.
Romain [00:15:01]: But I think that what you're pointing out is like interesting, and I'm sure developers will surprise us as they always do in terms of how they combine these tools and how they might use file search as a way to have memory and preferences, like Nikunj says. But I think like zooming out, what I find very compelling and powerful here is like when you have these like neural networks that have like all of the knowledge that they have today, plus real time access to the Internet for like any kind of real time information that you might need for your app, and file search, where you can have a lot of company private documents, private details. You combine those three, and you have like very, very compelling and precise answers for any kind of use case that your company or your product might want to enable.
swyx [00:15:41]: It's a difference between sort of internal documents versus the open web, right? Like you're going to need both. Exactly, exactly. I never thought about it doing memory as well. I guess, again, you know, anything that's a database, you can store it and you will use it as a database. That sounds awesome. But I think also you've been, you know, expanding the file search. You have more file types. You have query optimization, custom re-ranking. So it really seems like, you know, it's been fleshed out. Obviously, I haven't been paying a ton of attention to the file search capability, but it sounds like your team has added a lot of features.
Nikunj [00:16:14]: Yeah, metadata filtering was like the main thing people were asking us for for a while. And I'm super excited about it. I mean, it's just so critical once your, like, vector store size goes over, you know, more than like, you know, 5,000, 10,000 records, you kind of need that.
So, yeah, metadata filtering is coming, too.
Romain [00:16:31]: And for most companies, it's also not like a competency that you want to rebuild in-house necessarily, you know, like, you know, thinking about embeddings and chunking and, you know, all of that. Like, it sounds very complex for something very, like, obvious to ship for your users. Like companies like Navan, for instance. They were able to build with the file search, like, you know, take all of the FAQ and travel policies, for instance, that you have, you, you put that in the file search tool, and then you don't have to think about anything. Now your assistant becomes naturally much more aware of all of these policies from the files.
swyx [00:17:03]: The question is, like, there's a very, very vibrant RAG industry already, as you well know. So there's many other vector databases, many other frameworks. Probably if it's an open source stack, I would say like a lot of the AI engineers that I talk to want to own this part of the stack. And it feels like, you know, like, when should we DIY and when should we just use whatever OpenAI offers?
Nikunj [00:17:24]: Yeah. I mean, like, if you're doing something completely from scratch, you're going to have more control, right? Like, so super supportive of, you know, people trying to, like, roll up their sleeves, build their, like, super custom chunking strategy and super custom retrieval strategy and all of that. And those are things that, like, will be harder to do with OpenAI tools. The OpenAI tool has, like, we have an out-of-the-box solution. We give you the tools. We give you some knobs to customize things, but it's more of, like, a managed RAG service. So my recommendation would be, like, start with the OpenAI thing, see if it, like, meets your needs. And over time, we're going to be adding more and more knobs to make it even more customizable.
But, you know, if you want, like, the completely custom thing, you want control over every single thing, then you'd probably want to go and hand-roll it using other solutions. So we're supportive of both, like, engineers should pick. Yeah.
Alessio [00:18:16]: And then we got computer use, which I think Operator was obviously one of the hot releases of the year. And we're only two months in. Let's talk about that. And that's also, it seems like, a separate model that has been fine-tuned for Operator that has browser access.
Nikunj [00:18:31]: Yeah, absolutely. I mean, the computer use models are exciting. The cool thing about computer use is that we're just so, so early. It's like the GPT-2 of computer use or maybe GPT-1 of computer use right now. But it is a separate model that, you know, the computer use team has been working on; you send it screenshots and it tells you what action to take. So the outputs of it are almost always tool calls and you're inputting screenshots based on whatever computer you're trying to operate.
Romain [00:19:01]: Maybe zooming out for a second, because like, I'm sure your audience is like super, super like AI native, obviously. But like, what is computer use as a tool, right? And what's Operator? So the idea for computer use is like, how do we let developers also build agents that can complete tasks for the users, but using a computer? Okay. Or a browser instead. And so how do you get that done? And so that's why we have this custom model, like optimized for computer use that we use like for Operator ourselves. But the idea behind like putting it as an API is that imagine like now you want to, you want to automate some tasks for your product or your own customers. Then now you can, you can have like the ability to spin up one of these agents that will look at the screen and act on the screen. So that means the ability to click, the ability to scroll, the ability to type and to report back on the action.
So that's what we mean by computer use and wrapping it as a tool also in the responses API. So now like that gives a hint also at the multi-turn thing that we were hinting at earlier, the idea that like, yeah, maybe one of these actions can take a couple of minutes to complete because there's maybe like 20 steps to complete that task. But now you can.swyx [00:20:08]: Do you think a computer use can play Pokemon?Romain [00:20:11]: Oh, interesting. I guess we tried it. I guess we should try it. You know?swyx [00:20:17]: Yeah. There's a lot of interest. I think Pokemon really is a good agent benchmark, to be honest. Like it seems like Claude is, Claude is running into a lot of trouble.Romain [00:20:25]: Sounds like we should make that a new eval, it looks like.swyx [00:20:28]: Yeah. Yeah. Oh, and then one more, one more thing before we move on to agents SDK. I know you have a hard stop. There's all these, you know, blah, blah, dash preview, right? Like search preview, computer use preview, right? And you see them all like fine tunes of 4o. I think the question is, are we, are they all going to be merged into the main branch or are we basically always going to have subsets? Of these models?Nikunj [00:20:49]: Yeah, I think in the early days, research teams at OpenAI like operate with like fine tune models. And then once the thing gets like more stable, we sort of merge it into the main line. So that's definitely the vision, like going out of preview as we get more comfortable with and learn about all the developer use cases and we're doing a good job at them. We'll sort of like make them part of like the core models so that you don't have to like deal with the bifurcation.Romain [00:21:12]: You should think of it this way as exactly what happened last year when we introduced vision capabilities, you know. Yes. Vision capabilities were in like a vision preview model based off of GPT-4 and then vision capabilities now are like obviously built into GPT-4o.
You can think about it the same way for like the other modalities like audio and those kind of like models, like optimized for search and computer use.swyx [00:21:34]: Agents SDK, we have a few minutes left. So let's just assume that everyone has looked at Swarm. Sure. I think that Swarm has really popularized the handoff technique, which I thought was like, you know, really, really interesting for sort of a multi-agent. What is new with the SDK?Nikunj [00:21:50]: Yeah. Do you want to start? Yeah, for sure. So we've basically added support for types. We've added support for guardrails, which is a very common pattern. So in the guardrail example, you basically have two things happen in parallel. The guardrail can sort of block the execution. It's a type of like optimistic generation that happens. And I think we've added support for tracing. So I think that's really cool. So you can basically look at the traces that the Agents SDK creates in the OpenAI dashboard. We also like made this pretty flexible. So you can pick any API from any provider that supports the ChatCompletions API format. So it supports responses by default, but you can like easily plug it into anyone that uses the ChatCompletions API. And similarly, on the tracing side, you can support like multiple tracing providers. By default, it sort of points to the OpenAI dashboard. But, you know, there's like so many tracing providers. There's so many tracing companies out there. And we'll announce some partnerships on that front, too. So just like, you know, adding lots of core features and making it more usable, but still centered around like handoffs is like the main, main concept.
Like what are those guardrails? What are those handoffs, et cetera? So that came out of like learning from customers. And it was initially shipped as a, like, low-key experiment, I'd say. But we were kind of like taken by surprise at how much momentum there was around this concept. And so we decided to learn from that and embrace it. To be like, okay, maybe we should just embrace that as a core primitive of the OpenAI platform. And that's kind of what led to the Agents SDK. And I think now, as Nikunj mentioned, it's like adding all of these new capabilities to it, like leveraging the handoffs that we had, but tracing also. And I think what's very compelling for developers is like instead of having one agent to rule them all and you stuff like a lot of tool calls in there that can be hard to monitor, now you have the tools you need to kind of like separate the logic, right? And you can have a triage agent that based on an intent goes to different kind of agents. And then on the OpenAI dashboard, we're releasing a lot of new user interface logs as well. So you can see all of the tracing UIs. Essentially, you'll be able to troubleshoot like what exactly happened. In that workflow, when the triage agent did a handoff to a secondary agent and the third and see the tool calls, et cetera. So we think that the Agents SDK combined with the tracing UIs will definitely help users and developers build better agentic workflows.Alessio [00:24:28]: And just before we wrap, are you thinking of connecting this with also the RFT API? Because I know you already have, you kind of store my text completions and then I can do fine tuning of that. Is that going to be similar for agents where you're storing kind of like my traces? And then help me improve the agents?Nikunj [00:24:43]: Yeah, absolutely. Like you got to tie the traces to the evals product so that you can generate good evals.
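The triage-agent-plus-handoffs pattern Romain describes, one router delegating to specialists instead of one agent stuffed with every tool, can be sketched without the SDK itself. The agent names, intents, and the guardrail rule below are invented for illustration; this shows the pattern, not the Agents SDK's API:

```python
def refund_agent(msg: str) -> str:
    return f"refund-agent handling: {msg}"

def billing_agent(msg: str) -> str:
    return f"billing-agent handling: {msg}"

# Intent keyword -> specialist agent the triage agent can hand off to.
HANDOFFS = {"refund": refund_agent, "billing": billing_agent}

def guardrail(msg: str) -> bool:
    """Toy guardrail: block messages that trip a disallowed-content
    check (runs alongside the agent and can stop execution)."""
    return "password" not in msg.lower()

def triage(msg: str) -> str:
    """Route to a specialist based on a (stubbed) intent check."""
    if not guardrail(msg):
        return "blocked by guardrail"
    for intent, agent in HANDOFFS.items():
        if intent in msg.lower():
            return agent(msg)  # handoff: the specialist takes over
    return "general-agent handling: " + msg

result = triage("I need a refund for order 123")
```

In the real SDK the router and specialists are themselves LLM-backed agents and each hop is recorded as a trace, which is what makes the multi-agent workflow debuggable in the dashboard.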
Once you have good evals and graders and tasks, you can use that to do reinforcement fine tuning. And, you know, lots of details to be figured out over here. But that's the vision. And I think we're going to go after it like pretty hard and hope we can like make this whole workflow a lot easier for developers.Alessio [00:25:05]: Awesome. Thank you so much for the time. I'm sure you'll be busy on Twitter tomorrow with all the developer feedback. Yeah.Romain [00:25:12]: Thank you so much for having us. And as always, we can't wait to see what developers will build with these tools and how we can like learn as quickly as we can from them to make them even better over time.Nikunj [00:25:21]: Yeah.Romain [00:25:22]: Thank you, guys.Nikunj [00:25:23]: Thank you.Romain [00:25:23]: Thank you both. Awesome. Get full access to Latent.Space at www.latent.space/subscribe
In this episode, Frede and Matthias talk about the biggest challenges and opportunities for UX professionals, because the industry is changing rapidly. While AI, layoffs, and the battle over roles shape the job market, uncertainty remains high. At the same time, the question arises whether ROI strengthens UX or stifles creativity. Toward the end, things get futuristic, as neurotechnology and brain-computer interfaces could redefine UX entirely. | ⏳ TIMESTAMPS | [00:00:00] Welcome & new episode format | [00:03:07] Satya Nadella's vision: UX without classic user interfaces? | [00:06:14] AI & automation: UX designers as orchestrators of agents | [00:12:32] UX job market 2025: layoffs, AI competition & new skills | [00:24:46] UX in product teams: will UX disappear as a standalone discipline? | [00:31:21] ROI of UX: why UX professionals must engage with business impact | [00:37:39] Has ROI killed UX quality? Creativity vs. profitability | [00:38:20] Neurotechnology & UX: brain-computer interfaces as a new challenge | [00:45:08] Conclusion: UX isn't disappearing - it's changing!
Steve Sykes, the 2024-2025 Our Stage / Our Voices Artist-in-Residence at UIS, is working to make participation and enjoyment of theater more accessible for the disabled community. The Our Stage/Our Voices program seeks to provide opportunities for performing artists who are members of historically underrepresented communities to address the issues of inequality in the regional performing arts with the goal of achieving diversity and inclusion. Steve spoke to Community Voices about his inclusive improv workshop, insights on theater accessibility, and his upcoming production of Almost, Maine, featuring abled and disabled performers. The show will be performed March 13-16 in the UIS Studio Theatre. Tickets and more information can be found here.
Industrial Talk is onsite at IoT Solutions World Congress and talking to Emmanuel Penzes, Sr. Solutions Engineer with The Qt Company about "The power behind improving the user interface". Scott MacKenzie hosts Emmanuel Penzes, a solution engineer at the Qt Group, on the Industrial Talk Podcast. Emmanuel discusses the importance of user interfaces (UI) in software development, emphasizing simplicity and functionality. He highlights the role of designers over developers in UI design and the use of analytics to improve user experience. Emmanuel also touches on the need for flexibility in UI design to accommodate updates and hardware capabilities. Looking ahead, he sees potential in autonomous driving and digital twins. Scott stresses the critical role of well-designed UIs in user engagement and encourages listeners to connect with Emmanuel and attend the IoT Solutions World Congress in Barcelona. Action Items [ ] Reach out to Emmanuel Penzes on LinkedIn to continue the discussion on UI design best practices. [ ] Explore the Qt Group's tools and solutions for creating optimized user interfaces, especially for IoT and embedded systems. [ ] Consider incorporating user feedback and analytics into the UI design process to ensure a smooth and intuitive user experience. Outline Introduction and Welcome to the Podcast Scott MacKenzie welcomes listeners to the Industrial Talk Podcast, emphasizing the focus on industry professionals and their innovations. Scott thanks the audience for their continued support and highlights the importance of celebrating industry professionals. The podcast is broadcasting from the IoT Solutions World Congress in Barcelona, Spain, and Scott encourages listeners to attend next year. Scott introduces Emmanuel from the Qt Group, who will discuss user interfaces and their importance. Emmanuel's Background and Role Emmanuel shares his background, mentioning his origins in France and his current residence in Paris. 
Emmanuel has a decade of experience as a software developer, including a four-year stint in Australia to improve his English. He has been with the Qt Group for the past five years, working as a solution engineer to help customers find the right software solutions. Scott expresses his passion for well-designed user interfaces and asks Emmanuel to explain their importance. Defining a Good User Interface Emmanuel explains that a good user interface is one that is easy and simple to use, minimizing frustration for the user. He emphasizes the need to balance functionality without overcomplicating the interface, as too many features can confuse users. Scott shares his personal frustration with complex user interfaces and pop-ups, highlighting the importance of simplicity. Emmanuel discusses the role of designers in creating user interfaces, separate from developers who focus on coding. Balancing Functionality and User Experience Emmanuel talks about the importance of using dedicated designers for user interfaces and user experience. He mentions the use of tools and analytics to gather customer feedback and improve user interfaces. Scott asks about the process of guiding clients through the design process, emphasizing the need for flexibility and responsiveness. Emmanuel explains the importance of considering hardware limitations and ensuring smooth performance to avoid frustration. Future Trends and Automotive Applications Scott and Emmanuel discuss the
Today's episode is with Paul Klein, founder of Browserbase. We talked about building browser infrastructure for AI agents, the future of agent authentication, and their open source framework Stagehand.* [00:00:00] Introductions* [00:04:46] AI-specific challenges in browser infrastructure* [00:07:05] Multimodality in AI-Powered Browsing* [00:12:26] Running headless browsers at scale* [00:18:46] Geolocation when proxying* [00:21:25] CAPTCHAs and Agent Auth* [00:28:21] Building “User take over” functionality* [00:33:43] Stagehand: AI web browsing framework* [00:38:58] OpenAI's Operator and computer use agents* [00:44:44] Surprising use cases of Browserbase* [00:47:18] Future of browser automation and market competition* [00:53:11] Being a solo founderTranscriptAlessio [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.swyx [00:00:12]: Hey, and today we are very blessed to have our friend, Paul Klein the fourth, CEO of Browserbase. Welcome.Paul [00:00:21]: Thanks guys. Yeah, I'm happy to be here. I've been lucky to know both of you for like a couple of years now, I think. So it's just like we're hanging out, you know, with three ginormous microphones in front of our face. It's totally normal hangout.swyx [00:00:34]: Yeah. We've actually mentioned you on the podcast, I think, more often than any other Solaris tenant. Just because like you're one of the, you know, best performing, I think, LLM tool companies that have started up in the last couple of years.Paul [00:00:50]: Yeah, I mean, it's been a whirlwind of a year, like Browserbase is actually pretty close to our first birthday. So we are one years old. And going from, you know, starting a company as a solo founder to...
To, you know, having a team of 20 people, you know, a series A, but also being able to support hundreds of AI companies that are building AI applications that go out and automate the web. It's just been like, really cool. It's been happening a little too fast. I think like collectively as an AI industry, let's just take a week off together. I took my first vacation actually two weeks ago, and Operator came out on the first day, and then a week later, DeepSeek came out. And I'm like on vacation trying to chill. I'm like, we got to build with this stuff, right? So it's been a breakneck year. But I'm super happy to be here and like talk more about all the stuff we're seeing. And I'd love to hear kind of what you guys are excited about too, and share with it, you know?swyx [00:01:39]: Where to start? So people, you've done a bunch of podcasts. I think I strongly recommend Jack Bridger's Scaling DevTools, as well as Turner Novak's The Peel. And, you know, I'm sure there's others. So you covered your Twilio story in the past, talked about StreamClub, you got acquired to Mux, and then you left to start Browserbase. So maybe we just start with what is Browserbase? Yeah.Paul [00:02:02]: Browserbase is the web browser for your AI. We're building headless browser infrastructure, which are browsers that run in a server environment that's accessible to developers via APIs and SDKs. It's really hard to run a web browser in the cloud. You guys are probably running Chrome on your computers, and that's using a lot of resources, right? So if you want to run a web browser or thousands of web browsers, you can't just spin up a bunch of lambdas. You actually need to use a secure containerized environment. You have to scale it up and down. It's a stateful system. And that infrastructure is, like, super painful. And I know that firsthand, because at my last company, StreamClub, I was CTO, and I was building our own internal headless browser infrastructure.
That's actually why we sold the company, is because Mux really wanted to buy our headless browser infrastructure that we'd built. And it's just a super hard problem. And I actually told my co-founders, I would never start another company unless it was a browser infrastructure company. And it turns out that's really necessary in the age of AI, when AI can actually go out and interact with websites, click on buttons, fill in forms. You need AI to do all of that work in an actual browser running somewhere on a server. And Browserbase powers that.swyx [00:03:08]: While you're talking about it, it occurred to me, not that you're going to be acquired or anything, but it occurred to me that it would be really funny if you became the Nikita Bier of headless browser companies. You just have one trick, and you make browser companies that get acquired.Paul [00:03:23]: I truly do only have one trick. I'm screwed if it's not for headless browsers. I'm not a Go programmer. You know, I'm in AI grant. You know, browsers is an AI grant. But we were the only company in that AI grant batch that used zero dollars on AI spend. You know, we're purely an infrastructure company. So as much as people want to ask me about reinforcement learning, I might not be the best guy to talk about that. But if you want to ask about headless browser infrastructure at scale, I can talk your ear off. So that's really my area of expertise. And it's a pretty niche thing. Like, nobody has done what we're doing at scale before. So we're happy to be the experts.swyx [00:03:59]: You do have an AI thing, Stagehand. We can talk about the sort of core of Browserbase first, and then maybe Stagehand. Yeah, Stagehand is kind of the web browsing framework. Yeah.What is Browserbase? Headless Browser Infrastructure ExplainedAlessio [00:04:10]: Yeah. Yeah. And maybe how you got to Browserbase and what problems you saw. So one of the first things I worked on as a software engineer was integration testing.
Sauce Labs was kind of like the main thing at the time. And then we had Selenium, we had Playwright, we had all these different browser things. But it's always been super hard to do. So obviously you've worked on this before. When you started Browserbase, what were the challenges? What were the AI-specific challenges that you saw versus, there's kind of like all the usual running browser at scale in the cloud, which has been a problem for years. What are like the AI unique things that you saw that like traditional approaches just didn't cover? Yeah.AI-specific challenges in browser infrastructurePaul [00:04:46]: First and foremost, I think back to like the first thing I did as a developer, like as a kid when I was writing code, I wanted to write code that did stuff for me. You know, I wanted to write code to automate my life. And I do that probably by using curl or Beautiful Soup to fetch data from a web browser. And I think I still do that now that I'm in the cloud. And the other thing that I think is a huge challenge for me is that you can't just create a web site and parse that data. And we all know that now like, you know, taking HTML and plugging that into an LLM, you can extract insights, you can summarize. So it was very clear that now like dynamic web scraping became very possible with the rise of large language models or a lot easier. And that was like a clear reason why there's been more usage of headless browsers, which are necessary because a lot of modern websites don't expose all of their page content via a simple HTTP request. You know, they actually do require you to run JavaScript on the page to hydrate this. Airbnb is a great example. You go to airbnb.com. A lot of that content on the page isn't there until after they run the initial hydration. So you can't just scrape it with a curl. You need to have some JavaScript run.
And a browser is that JavaScript engine that's going to actually run all those requests on the page. So web data retrieval was definitely one driver of starting Browserbase and the rise of being able to summarize that within an LLM. Also, I was familiar with if I wanted to automate a website, I could write one script and that would work for one website. It was very static and deterministic. But the web is non-deterministic. The web is always changing. And until we had LLMs, there was no way to write scripts that you could write once that would run on any website. That would change with the structure of the website. Click the login button. It could mean something different on many different websites. And LLMs allow us to generate code on the fly to actually control that. So I think that rise of writing the generic automation scripts that can work on many different websites, to me, made it clear that browsers are going to be a lot more useful because now you can automate a lot more things without writing. If you wanted to write a script to book a demo call on 100 websites, previously, you had to write 100 scripts. Now you write one script that uses LLMs to generate that script. That's why we built our web browsing framework, Stagehand, which does a lot of that work for you. But those two things, web data collection and then enhanced automation of many different websites, it just felt like big drivers for more browser infrastructure that would be required to power these kinds of features.Alessio [00:07:05]: And was multimodality also a big thing?Paul [00:07:08]: Now you can use the LLMs to look, even though the text in the DOM might not be as friendly. Maybe my hot take is I was always kind of like, I didn't think vision would be as big of a driver. For UI automation, I felt like, you know, HTML is structured text and large language models are good with structured text.
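Paul's point that "click the login button" means something different on every site is why one LLM-driven script can replace 100 hand-written ones: the selector is resolved at run time against each page's markup. A minimal sketch of that idea, with the LLM call stubbed out as simple pattern matching (the selectors, markup, and stub behavior are all invented for illustration; this is not Stagehand's API):

```python
def llm_resolve(instruction: str, page_html: str) -> str:
    """Stub for an LLM call that maps a natural-language instruction
    to a CSS selector for this particular page's markup. A real
    implementation would prompt a model with the instruction + HTML."""
    if 'id="login"' in page_html:
        return "#login"
    if 'class="signin-btn"' in page_html:
        return ".signin-btn"
    raise ValueError("no matching element found")

def generic_click(instruction: str, page_html: str) -> str:
    """One script, any site: derive the selector per page instead of
    hard-coding it, then (in a real system) click it in the browser."""
    return llm_resolve(instruction, page_html)

# Two sites, two different markups, same instruction.
site_a = '<form><button id="login">Log in</button></form>'
site_b = '<div><a class="signin-btn">Sign in</a></div>'
sel_a = generic_click("Click the login button", site_a)
sel_b = generic_click("Click the login button", site_b)
```

Swap the stub for a real model call and attach a browser automation framework to execute the click, and you have the shape of the generic automation scripts described above.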
But it's clear that these computer use models are often vision driven, and they've been really pushing things forward. So definitely being multimodal, like rendering the page is required to take a screenshot to give that to a computer use model to take actions on a website. And it's just another win for browser. But I'll be honest, that wasn't what I was thinking early on. I didn't even think that we'd get here so fast with multimodality. I think we're going to have to get back to multimodal and vision models.swyx [00:07:50]: This is one of those things where I forgot to mention in my intro that I'm an investor in Browserbase. And I remember that when you pitched to me, like a lot of the stuff that we have today, we like wasn't on the original conversation. But I did have my original thesis was something that we've talked about on the podcast before, which is take the GPT store, the custom GPT store, all the every single checkbox and plugin is effectively a startup. And this was the browser one. I think the main hesitation, I think I actually took a while to get back to you. The main hesitation was that there were others. Like you're not the first headless browser startup. It's not even your first headless browser startup. There's always a question of like, will you be the category winner in a place where there's a bunch of incumbents, to be honest, that are bigger than you? They're just not targeted at the AI space. They don't have the backing of Nat Friedman. And there's a bunch of like, you're here in Silicon Valley. They're not. I don't know.Paul [00:08:47]: I don't know if that's, that was it, but like, there was a, yeah, I mean, like, I think I tried all the other ones and I was like, really disappointed. Like my background is from working at great developer tools, companies, and nothing had like the Vercel like experience. Um, like our biggest competitor actually is partly owned by private equity and they just jacked up their prices quite a bit.
And the dashboard hasn't changed in five years. And I actually used them at my last company and tried them and I was like, oh man, like there really just needs to be something that's like the experience of these great infrastructure companies, like Stripe, like Clerk, like Vercel that I use and love, but oriented towards this kind of like more specific category, which is browser infrastructure, which is really technically complex. Like a lot of stuff can go wrong on the internet when you're running a browser. The internet is very vast. There's a lot of different configurations. Like there's still websites that only work with Internet Explorer out there. How do you handle that when you're running your own browser infrastructure? These are the problems that we have to think about and solve at Browserbase. And it's, it's certainly a labor of love, but I built this for me, first and foremost, I know it's super cheesy and everyone says that for like their startups, but it really, truly was for me. If you look at like the talks I've done even before Browserbase, and I'm just like really excited to try and build a category defining infrastructure company. And it's, it's rare to have a new category of infrastructure exist. We're here in the Chroma offices and like, you know, vector databases is a new category of infrastructure. Is it, is it, I mean, we can, we're in their office, so, you know, we can, we can debate that one later. That is one.Multimodality in AI-Powered Browsingswyx [00:10:16]: That's one of the industry debates.Paul [00:10:17]: I guess we go back to the LLMOS talk that Karpathy gave way long ago. And like the browser box was very clearly there and it seemed like the people who were building in this space also agreed that browsers are a core primitive of infrastructure for the LLMOS that's going to exist in the future. And nobody was building something there that I wanted to use. So I had to go build it myself.swyx [00:10:38]: Yeah.
I mean, exactly that talk that, that honestly, that diagram, every box is a startup and there's the code box and then there's the browser box. I think at some point they will start clashing there. There's always the question of the, are you a point solution or are you the sort of all in one? And I think the point solutions tend to win quickly, but then the all-in-ones have a very tight cohesive experience. Yeah. Let's talk about just the hard problems of Browserbase you have on your website, which is beautiful. Thank you. Was there an agency that you used for that? Yeah. Herve.paris.Paul [00:11:11]: They're amazing. Herve.paris. Yeah. It's H-E-R-V-E. I highly recommend for developer tools founders to work with consumer agencies because they end up building beautiful things and the Parisians know how to build beautiful interfaces. So I got to give props.swyx [00:11:24]: And chat apps, apparently are, they are very fast. Oh yeah. The Mistral chat. Yeah. Mistral. Yeah.Paul [00:11:31]: Le Chat.swyx [00:11:31]: Le Chat. And then your videos as well, it was professionally shot, right? The series A video. Yeah.Alessio [00:11:36]: Nico did the videos. He's amazing. Not the initial video that you shot at the new one. First one was Austin.Paul [00:11:41]: Another, another video pretty surprised. But yeah, I mean, like, I think when you think about how you talk about your company. You have to think about the way you present yourself. It's, you know, as a developer, you think you evaluate a company based on like the API reliability and the P95, but a lot of developers say, is the website good? Is the message clear? Do I like trust this founder? I'm building my whole feature on. So I've tried to nail that as well as like the reliability of the infrastructure. You're right. It's very hard. And there's a lot of kind of foot guns that you run into when running headless browsers at scale.
Right.Competing with Existing Headless Browser Solutionsswyx [00:12:10]: So let's pick one. You have eight features here. Seamless integration. Scalability. Fast or speed. Secure. Observable. Stealth. That's interesting. Extensible and developer first. What comes to your mind as like the top two, three hardest ones? Yeah.Running headless browsers at scalePaul [00:12:26]: I think just running headless browsers at scale is like the hardest one. And maybe can I nerd out for a second? Is that okay? I heard this is a technical audience, so I'll talk to the other nerds. Whoa. They were listening. Yeah. They're upset. They're ready. The AGI is angry. Okay. So. So how do you run a browser in the cloud? Let's start with that, right? So let's say you're using a popular browser automation framework like Puppeteer, Playwright, and Selenium. Maybe you've written some code locally on your computer that opens up Google. It finds the search bar and then types in, you know, search for Latent Space and hits the search button. That script works great locally. You can see the little browser open up. You want to take that to production. You want to run the script in a cloud environment. So that when your laptop is closed, the browser is still doing something. Well, we use Amazon. You know, the first thing I'd reach for is probably like some sort of serverless infrastructure. I would probably try and deploy on a Lambda. But Chrome itself is too big to run on a Lambda. It's over 250 megabytes. So you can't easily start it on a Lambda. So you maybe have to use something like Lambda layers to squeeze it in there. Maybe use a different Chromium build that's lighter. And you get it on the Lambda. Great. It works. But it runs super slowly. It's because Lambdas are very like resource limited. They only run like with one vCPU. You can run one process at a time. Remember, Chromium is super beefy.
It's barely running on my MacBook Air. I'm still downloading it from a previous run. Yeah, from the test earlier, right? I'm joking. But it's big, you know? So like Lambda, it just won't work really well. Maybe it'll work, but you need something faster. Your users want something faster. Okay. Well, let's put it on a beefier instance. Let's get an EC2 server running. Let's throw Chromium on there. Great. Okay. I can, that works well with one user. But what if I want to run like 10 Chromium instances, one for each of my users? Okay. Well, I might need two EC2 instances. Maybe 10. All of a sudden, you have multiple EC2 instances. This sounds like a problem for Kubernetes and Docker, right? Now, all of a sudden, you're using ECS or EKS, the Kubernetes or container solutions by Amazon. You're spinning up and down containers, and you're spending a whole engineer's time on kind of maintaining this stateful distributed system. Those are some of the worst systems to run because when it's a stateful distributed system, it means that you are bound by the connections to that thing. You have to keep the browser open while someone is working with it, right? That's just a painful architecture to run. And there's all this other little gotchas with Chromium, like Chromium, which is the open source version of Chrome, by the way. You have to install all these fonts. You want emojis working in your browsers because your vision model is looking for the emoji. You need to make sure you have the emoji fonts. You need to make sure you have all the right extensions configured, like, oh, do you want ad blocking? How do you configure that? How do you actually record all these browser sessions? Like it's a headless browser. You can't look at it. So you need to have some sort of observability. Maybe you're recording videos and storing those somewhere.
It all kind of adds up to be this just giant monster piece of your project when all you wanted to do was run a lot of browsers in production for this little script to go to google.com and search. And when I see a complex distributed system, I see an opportunity to build a great infrastructure company. And we really abstract that away with Browserbase where our customers can use these existing frameworks, Playwright, Puppeteer, Selenium, or our own Stagehand and connect to our browsers in a serverless-like way. And control them, and then just disconnect when they're done. And they don't have to think about the complex distributed system behind all of that. They just get a browser running anywhere, anytime. Really easy to connect to.swyx [00:15:55]: I'm sure you have questions. My standard question with anything, so essentially you're a serverless browser company, and there's been other serverless things that I'm familiar with in the past, serverless GPUs, serverless website hosting. That's where I come from with Netlify. One question is just like, you promised to spin up thousands of servers. You promised to spin up thousands of browsers in milliseconds. I feel like there's no real solution that does that yet. And I'm just kind of curious how. The only solution I know, which is to kind of keep a kind of warm pool of servers around, which is expensive, but maybe not so expensive because it's just CPUs. So I'm just like, you know. Yeah.Browsers as a Core Primitive in AI InfrastructurePaul [00:16:36]: You nailed it, right? I mean, how do you offer a serverless-like experience with something that is clearly not serverless, right? And the answer is, you need to be able to run... We run many browsers on single nodes. We use Kubernetes at Browserbase. So we have many pods that are being scheduled. We have to predictably schedule them up or down. Yes, thousands of browsers in milliseconds is the best case scenario.
If you hit us with 10,000 requests, you may hit a slower cold start, right? So we've done a lot of work on predictive scaling and being able to kind of route stuff to different regions where we have multiple regions of Browserbase where we have different pools available. You can also pick the region you want to go to based on like lower latency, round-trip time latency. It's very important with these types of things. There's a lot of requests going over the wire. So for us, like having a VM like Firecracker powering everything under the hood allows us to be super nimble and spin things up or down really quickly with strong multi-tenancy. But in the end, these are like the complex infrastructural challenges that we have to kind of deal with at Browserbase. And we have a lot more stuff on our roadmap to allow customers to have more levers to pull to exchange: do you want really fast browser startup times or do you want really low costs? And if you're willing to be more flexible on that, we may be able to kind of like work better for your use cases.swyx [00:17:44]: Since you used Firecracker, shouldn't Fargate do that for you or did you have to go lower level than that? We had to go lower level than that.Paul [00:17:51]: I find this a lot with Fargate customers, which is alarming for Fargate. We used to be a giant Fargate customer. Actually, the first version of Browserbase was ECS and Fargate. And unfortunately, it's a great product. I think we were actually the largest Fargate customer in our region for a little while. No, what? Yeah, seriously. And unfortunately, it's a great product, but I think if you're an infrastructure company, you actually have to have a deeper level of control over these primitives. I think the same thing is true with databases. We've used other database providers and I think-swyx [00:18:21]: Yeah, serverless Postgres.Paul [00:18:23]: Shocker. When you're an infrastructure company, you're on the hook if any provider has an outage.
And I can't tell my customers like, hey, we went down because so-and-so went down. That's not acceptable. So for us, we've really moved to bringing things internally. It's kind of opposite of what we preach. We tell our customers, don't build this in-house, but then we're like, we build a lot of stuff in-house. But I think it just really depends on what is in the critical path. We try and have deep ownership of that.Alessio [00:18:46]: On the distributed location side, how does that work for the web where you might get sort of different content in different locations, but the customer is expecting, you know, if you're in the US, I'm expecting the US version. But if you're spinning up my browser in France, I might get the French version. Yeah.Paul [00:19:02]: Yeah. That's a good question. Well, generally, like on the localization, there is a thing called locale in the browser. You can set like what your locale is. Like whether your locale is en-US or not. But some things do IP-based routing. And in that case, you may want to have a proxy. Like let's say you're running something in Europe, but you want to make sure you're showing up from the US. You may want to use one of our proxy features so you can turn on proxies to say like, make sure these connections always come from the United States, which is necessary too, because when you're browsing the web, you're coming from like a, you know, data center IP, and that can make things a lot harder to browse the web. So we do have kind of like this proxy super network. Yeah. We have a proxy for you based on where you're going, so you can reliably automate the web. But if you get scheduled in Europe, that doesn't happen as much. We try and schedule you as close to, you know, your origin that you're trying to go to. But generally you have control over the regions you can put your browsers in. So you can specify West one or East one or Europe. We only have one region of Europe right now, actually.
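The locale piece of that can be sketched with Playwright-style context options. `locale`, `timezoneId`, and `proxy` are real Playwright `browser.newContext` options; the helper function and the region-to-locale mapping are illustrative assumptions, not Browserbase's actual API:

```typescript
// Hypothetical helper: build Playwright context options so a browser scheduled
// in one region still presents a consistent locale, timezone, and egress IP.
function contextOptions(
  region: "us" | "eu",
  proxyUrl?: string,
): { locale: string; timezoneId: string; proxy?: { server: string } } {
  return {
    locale: region === "us" ? "en-US" : "fr-FR",
    timezoneId: region === "us" ? "America/New_York" : "Europe/Paris",
    // Route through a proxy so IP-based geo routing agrees with the locale.
    ...(proxyUrl ? { proxy: { server: proxyUrl } } : {}),
  };
}
```

With Playwright this would be passed as `browser.newContext(contextOptions("us", "http://my-proxy:8080"))`, where the proxy URL is a placeholder.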
Yeah.Alessio [00:19:55]: What's harder, the browser or the proxy? I feel like, to me, actually proxying reliably at scale is much harder than spinning up browsers at scale. I'm curious. It's all hard.Paul [00:20:06]: It's layers of hard, right? Yeah. I think it's different levels of hard. I think the thing with the proxy infrastructure is that we work with many different web proxy providers and some are better than others. Some have good days, some have bad days. And our customers who've built browser infrastructure on their own, they have to go and deal with sketchy actors. Like first they figure out their own browser infrastructure and then they got to go buy a proxy. And it's like you can pay in Bitcoin and it just kind of feels a little sus, right? It's like you're buying drugs when you're trying to get a proxy online. We have like deep relationships with these counterparties. We're able to audit them and say, is this proxy being sourced ethically? Like it's not running on someone's TV somewhere. Is it free range? Yeah. Free range organic proxies, right? Right. We do a level of diligence. We're SOC 2. So we have to understand what is going on here. But then we're able to make sure that like we route around proxy providers not working. There's proxy providers who will just, the proxy will stop working all of a sudden. And then if you don't have redundant proxying on your own browsers, that's hard down for you or you may get some serious impacts there. With us, like we intelligently know, hey, this proxy is not working. Let's go to this one. And you can kind of build a network of multiple providers to really guarantee the best uptime for our customers. Yeah. So you don't own any proxies? We don't own any proxies. You're right. The team has been saying who wants to like take home a little proxy server, but not yet. We're not there yet. You know?swyx [00:21:25]: It's a very mature market. I don't think you should build that yourself.
Like you should just be a super customer of them. Yeah. Scraping, I think, is the main use case for that. I guess. Well, that leads us into CAPTCHAs and also auth, but let's talk about CAPTCHAs. You had a little spiel that you wanted to talk about CAPTCHA stuff.Challenges of Scaling Browser InfrastructurePaul [00:21:43]: Oh, yeah. I was just, I think a lot of people ask, if you're thinking about proxies, you're thinking about CAPTCHAs too. I think it's the same thing. You can go buy CAPTCHA solvers online, but it's the same buying experience. It's some sketchy website, you have to integrate it. It's not fun to buy these things, the docs are bad, and you can't really trust them. What Browserbase does is we integrate a bunch of different CAPTCHAs. We do some stuff in-house, but generally we just integrate with a bunch of known vendors and continually monitor and maintain these things and say, is this working or not? Can we route around it or not? These are CAPTCHA solvers. CAPTCHA solvers, yeah. Not CAPTCHA providers, CAPTCHA solvers. Yeah, sorry. CAPTCHA solvers. We really try and make sure all of that works for you. I think as a dev, if I'm buying infrastructure, I want it all to work all the time and it's important for us to provide that experience by making sure everything does work and monitoring it on our own. Yeah. Right now, the world of CAPTCHAs is tricky. I think AI agents in particular are very much ahead of the internet infrastructure. CAPTCHAs are designed to block all types of bots, but there are now good bots and bad bots. I think in the future, CAPTCHAs will be able to identify who a good bot is, hopefully via some sort of KYC. For us, we've been very lucky. We have very little to no known abuse of Browserbase because we really look into who we work with. And for certain types of CAPTCHA solving, we only allow them on certain types of plans because we want to make sure that we can know what people are doing, what their use cases are.
And that's really allowed us to try and be an arbiter of good bots, which is our long term goal. I want to build great relationships with people like Cloudflare so we can agree, hey, here are these acceptable bots. We'll identify them for you and make sure we flag when they come to your website. This is a good bot, you know?Alessio [00:23:23]: I see. And Cloudflare said they want to do more of this. So by default, if they think you're an AI bot, they're going to reject it. I'm curious if you think this is something that is going to be at the browser level or, I mean, the DNS level with Cloudflare seems more like where it should belong. But I'm curious how you think about it.Paul [00:23:40]: I think the web's going to change. You know, I think that the Internet as we have it right now is going to change. And we all need to just accept that the cat is out of the bag. And instead of kind of like wishing the Internet was like it was in the 2000s, where we could have free content online that wouldn't be scraped. It's just not going to happen. And instead, we should think about like, one, how can we change? How can we change the models of, you know, information being published online so people can adequately commercialize it? But two, how do we rebuild applications that expect that AI agents are going to log in on their behalf? Those are the things that are going to allow us to kind of like identify good and bad bots. And I think the team at Clerk has been doing a really good job with this on the authentication side. I actually think that auth is the biggest thing that will prevent agents from accessing stuff, not CAPTCHAs. And I think there will be agent auth in the future.
I don't know if it's going to happen from an individual company, but actually authentication providers that have a, you know, hidden login as agent feature, where you'll put in your email, you'll get a push notification, say like, hey, your browser-based agent wants to log into your Airbnb. You can approve that and then the agent can proceed. That really circumvents the need for CAPTCHAs or logging in as you and sharing your password. I think agent auth is going to be one way we identify good bots going forward. And I think a lot of this CAPTCHA solving stuff is really short-term problems as the internet kind of reorients itself around how it's going to work with agents browsing the web, just like people do. Yeah.Managing Distributed Browser Locations and Proxiesswyx [00:24:59]: Stytch recently was on Hacker News for talking about agent experience, AX, which is a thing that Netlify is also trying to clone and coin and talk about. And we've talked about this on our previous episodes before in a sense that I actually think that's like maybe the only part of the tech stack that needs to be kind of reinvented for agents. Everything else can stay the same, CLIs, APIs, whatever. But auth, yeah, we need agent auth. And it's mostly like short-lived, like it should not, it should be a distinct identity from the human, but paired. I almost think like in the same way that every social network should have your main profile and then your alt accounts or your Finsta, it's almost like, you know, every human token should be paired with the agent token and the agent token can go and do stuff on behalf of the human token, but not be presumed to be the human. Yeah.Paul [00:25:48]: It's like, it's, it's actually very similar to OAuth is what I'm thinking. And, you know, Thread from Stytch is an investor, Colin from Clerk, Okta Ventures, all investors in Browserbase because, like, I hope they solve this because they'll make Browserbase's mission more possible.
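The short-lived, paired agent identity being described here could be modeled like this. Everything in the sketch (names, fields, scope strings) is hypothetical, not any real provider's API:

```typescript
// Sketch: a human identity issues a scope-limited, expiring grant to an agent.
// The agent token is paired with the human but is never presumed to be the human.
type AgentGrant = {
  human: string;     // the human identity the agent acts on behalf of
  agent: string;     // the paired agent identity
  scopes: string[];  // e.g. "book-apartment" but not "send-message"
  expiresAt: number; // short-lived by design (epoch millis)
};

function issueAgentGrant(
  human: string,
  agent: string,
  scopes: string[],
  ttlMs: number,
  now: number = Date.now(),
): AgentGrant {
  return { human, agent, scopes, expiresAt: now + ttlMs };
}

function canPerform(grant: AgentGrant, scope: string, now: number = Date.now()): boolean {
  // Both conditions must hold: the grant hasn't expired and the scope was approved.
  return now < grant.expiresAt && grant.scopes.includes(scope);
}
```

In the Airbnb example from the conversation, the human would approve something like `["book-apartment"]` via a push notification, and a later `"send-message"` check would fail.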
So we don't have to overcome all these hurdles, but I think it will be an OAuth-like flow where an agent will ask to log in as you, you'll approve the scopes. Like it can book an apartment on Airbnb, but it can't like message anybody. And then, you know, the agent will have some sort of like role-based access control within an application. Yeah. I'm excited for that.swyx [00:26:16]: The tricky part is just, there's one layer of delegation here, which is like, you're auth-ing my user's user or something like that. I don't know if that's tricky or not. Does that make sense? Yeah.Paul [00:26:25]: You know, actually at Twilio, I worked on the login, identity and access management teams, right? So like I built Twilio's login page.swyx [00:26:31]: You were an intern on that team and then you became the lead in two years? Yeah.Paul [00:26:34]: Yeah. I started as an intern in 2016 and then I was the tech lead of that team. How? That's not normal. I didn't have a life. He's not normal. Look at this guy. I didn't have a girlfriend. I just loved my job. I don't know. I applied to 500 internships for my first job and I got rejected from every single one of them except for Twilio and then eventually Amazon. And they took a shot on me and like, I was getting paid money to write code, which was my dream. Yeah. Yeah. I'm very lucky that like this coding thing worked out because I was going to be doing it regardless. And yeah, I was able to kind of spend a lot of time on a team that was growing at a company that was growing. So it informed a lot of this stuff here. I think these are problems that have been solved with like the SAML protocol with SSO. I think there's really interesting stuff with like WebAuthn, like these different types of authentication, like schemes that you can use to authenticate people. The tooling is all there. It just needs to be tweaked a little bit to work for agents. And I think the fact that there are companies that are already.
Providing authentication as a service really sets it up. Well, the thing that's hard is like reinventing the internet for agents. We don't want to rebuild the internet. That's an impossible task. And I think people often say like, well, we'll have this second layer of APIs built for agents. I'm like, we will for the top use cases, but for the rest we can just tweak the internet as is, starting with the authentication side. I think we're going to be the dumb ones going forward. Unfortunately, I think AI is going to be able to do a lot of the tasks that we do online, which means that it will be able to go to websites, click buttons on our behalf and log in on our behalf too. So with this kind of like web agent future happening, I think with some small structural changes, like you said, it feels like it could all slot in really nicely with the existing internet.Handling CAPTCHAs and Agent Authenticationswyx [00:28:08]: There's one more thing, which is your live view iframe, which lets you take control. Yeah. Obviously very key for Operator now, but like, is there anything interesting technically there, or did people... well, people always want this.Paul [00:28:21]: It was really hard to build, you know, like, so, okay. Headless browsers, you don't see them, right. They're running. They're running in a cloud somewhere. You can't like look at them. And I just want to really say, it's a weird name. I wish we came up with a better name for this thing, but you can't see them. Right. But customers don't trust AI agents, right. At least the first pass. So what we do with our live view is that, you know, when you use Browserbase, you can actually embed a live view of the browser running in the cloud for your customer to see it working. And the first reason is to build trust, like, okay, so I have this script. That's going to go automate a website. I can embed it into my web application via an iframe and my customer can watch. I think.
And then we added two way communication. So now not only can you watch the browser kind of being operated by AI, if you want to pause and actually click around and type within this iframe that's controlling a browser, that's also possible. And this is all thanks to some of the lower level protocol, which is called the Chrome DevTools protocol. It has an API called startScreencast, and you can also send mouse clicks and button clicks to a remote browser. And this is all embeddable within iframes. You have a browser within a browser, yo. And then you simulate the screen, the click on the other side. Exactly. And this is really nice often for, like, let's say, a CAPTCHA that can't be solved. You saw this with Operator, you know, Operator actually uses a different approach. They use VNC. So, you know, you're able to see, like, you're seeing the whole window here. What we're doing is something a little lower level with the Chrome DevTools protocol. It's just PNGs being streamed over the wire. But the same thing is true, right? Like, hey, I'm running a window. Pause. Can you do something in this window? Human. Okay, great. Resume. Like sometimes 2FA tokens. Like if you get that text message, you might need a person to type that in. Web agents need human-in-the-loop type workflows still. You still need a person to interact with the browser. And building a UI to proxy that is kind of hard. You may as well just show them the whole browser and say, hey, can you finish this up for me? And then let the AI proceed on afterwards. Is there a future where I stream my current desktop to Browserbase? I don't think so. I think we're very much cloud infrastructure. Yeah. You know, but I think a lot of the stuff we're doing, we do want to, like, build tools. Like, you know, we'll talk about StageHand, you know, the web agent framework, in a second. But, like, there's a case where a lot of people are going desktop first for, you know, consumer use.
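The protocol mechanics behind that live view can be sketched as raw Chrome DevTools Protocol messages. `Page.startScreencast` and `Input.dispatchMouseEvent` are real CDP methods; the `id` counter and the transport (normally a WebSocket to the browser) are simplified assumptions:

```typescript
// Sketch of the CDP commands behind a live view: startScreencast streams frames
// (e.g. PNGs) back over the wire, and dispatchMouseEvent forwards clicks from
// the embedding iframe to the remote browser.
type CdpCommand = { id: number; method: string; params: Record<string, unknown> };

let nextId = 0;

function startScreencast(): CdpCommand {
  return {
    id: ++nextId,
    method: "Page.startScreencast",
    params: { format: "png", everyNthFrame: 1 },
  };
}

function clickAt(x: number, y: number): CdpCommand[] {
  // A single click is a pressed/released pair in CDP.
  return (["mousePressed", "mouseReleased"] as const).map((type) => ({
    id: ++nextId,
    method: "Input.dispatchMouseEvent",
    params: { type, x, y, button: "left", clickCount: 1 },
  }));
}
```

Each command would be JSON-serialized and sent over the browser's DevTools WebSocket; `Page.screencastFrame` events carry the frames back to the viewer.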
And I think Claude is doing a lot of this, where I expect to see, you know, MCPs really oriented around the Claude desktop app for a reason, right? Like, I think a lot of these tools are going to run on your computer because it makes... I think it's breaking out. People are putting it on a server. Oh, really? Okay. Well, sweet. We'll see. We'll see that. I was surprised, though, wasn't I? I think that the Browser Company, too, with Dia Browser, it runs on your machine. You know, it's going to be...swyx [00:30:50]: What is it?Paul [00:30:51]: So, Dia Browser, as far as I understand... I used to use Arc. Yeah. I haven't used Arc. But I'm a big fan of the Browser Company. I think they're doing a lot of cool stuff in consumer. As far as I understand, it's a browser where you have a sidebar where you can, like, chat with it and it can control the local browser on your machine. So, if you imagine, like, what a consumer web agent is, which it lives alongside your browser, I think Google Chrome has Project Mariner, I think. I almost call it Project Marinara for some reason. I don't know why. It's...swyx [00:31:17]: No, I think it's someone really likes Waterworld. Oh, I see. The classic Kevin Costner. Yeah.Paul [00:31:22]: Okay. Project Marinara is a similar thing to the Dia Browser, in my mind, as far as I understand it. You have a browser that has an AI interface that will take over your mouse and keyboard and control the browser for you. Great for consumer use cases. But if you're building applications that rely on a browser and it's more part of a greater, like, AI app experience, you probably need something that's more like infrastructure, not a consumer app.swyx [00:31:44]: Just because I have explored a little bit in this area, do people want branching? So, I have the state of whatever my browser's in. And then I want, like, 100 clones of this state. Do people do that? Or...Paul [00:31:56]: People don't do it currently. Yeah.
But it's definitely something we're thinking about. I think the idea of forking a browser is really cool. Technically, kind of hard. We're starting to see this in code execution, where people are, like, forking some, like, code execution, like, processes or forking some tool calls or branching tool calls. Haven't seen it at the browser level yet. But it makes sense. Like, if an AI agent is, like, using a website and it's not sure what path it wants to take to crawl this website. To find the information it's looking for. It would make sense for it to explore both paths in parallel. And that'd be a very, like... A road not taken. Yeah. And hopefully find the right answer. And then say, okay, this was actually the right one. And memorize that. And go there in the future. On the roadmap. For sure. Don't make my roadmap, please. You know?Alessio [00:32:37]: How do you actually do that? Yeah. How do you fork? I feel like the browser is so stateful for so many things.swyx [00:32:42]: Serialize the state. Restore the state. I don't know.Paul [00:32:44]: So, it's one of the reasons why we haven't done it yet. It's hard. You know? Like, to truly fork, it's actually quite difficult. The naive way is to open the same page in a new tab and then, like, hope that it's at the same thing. But if you have a form halfway filled, you may have to, like, take the whole, you know, container. Pause it. All the memory. Duplicate it. Restart it from there. It could be very slow. So, we haven't done it. Like, the easy thing to fork is just, like, copy the page object. You know? But I think there needs to be something a little bit more robust there. Yeah.swyx [00:33:12]: So, MorphLabs has this infinite branch thing. Like, they wrote a custom fork of Linux or something that let them save the system state and clone it. MorphLabs, hit me up. I'll be a customer. Yeah. That's the only. I think that's the only way to do it. Yeah. Like, unless Chrome has some special API for you.
Yeah.Paul [00:33:29]: There's probably something we'll reverse engineer one day. I don't know. Yeah.Alessio [00:33:32]: Let's talk about StageHand, the AI web browsing framework. You have three core components, Observe, Extract, and Act. Pretty clean landing page. What was the idea behind making a framework? Yeah.Stagehand: AI web browsing frameworkPaul [00:33:43]: So, there's three frameworks that are very popular or already exist, right? Puppeteer, Playwright, Selenium. Those are for building hard-coded scripts to control websites. And as soon as I started to play with LLMs plus browsing, I caught myself, you know, code-genning Playwright code to control a website. I would, like, take the DOM. I'd pass it to an LLM. I'd say, can you generate the Playwright code to click the appropriate button here? And it would do that. And I was like, this really should be part of the frameworks themselves. And I became really obsessed with SDKs that take natural language as part of, like, the API input. And that's what StageHand is. StageHand exposes three APIs, and it's a superset of Playwright. So, if you go to a page, you may want to take an action, click on the button, fill in the form, etc. That's what the act command is for. You may want to extract some data. This one takes natural language, like, extract the winner of the Super Bowl from this page. You can give it a Zod schema, so it returns a structured output. And then maybe you're building an agent. You can do an agent loop, and you want to kind of see what actions are possible on this page before taking one. You can do observe. So, you can observe the actions on the page, and it will generate a list of actions. You can guide it, like, give me actions on this page related to buying an item. And you can, like, buy it now, add to cart, view shipping options, and pass that to an LLM, an agent loop, to say, what's the appropriate action given this high-level goal? So, StageHand isn't a web agent.
It's a framework for building web agents. And we think that agent loops are actually pretty close to the application layer because every application probably has different goals or different ways it wants to take steps. I don't think I've seen a generic. Maybe you guys are the experts here. I haven't seen, like, a really good AI agent framework here. Everyone kind of has their own special sauce, right? I see a lot of developers building their own agent loops, and they're using tools. And I view StageHand as the browser tool. So, we expose act, extract, observe. Your agent can call these tools. And from that, you don't have to worry about it. You don't have to worry about generating Playwright code performantly. You don't have to worry about running it. You can kind of just integrate these three tool calls into your agent loop and reliably automate the web.swyx [00:35:48]: A special shout-out to Anirudh, who I met at your dinner, who I think listens to the pod. Yeah. Hey, Anirudh.Paul [00:35:54]: Anirudh's the man. He's a StageHand guy.swyx [00:35:56]: I mean, the interesting thing about each of these APIs is they're kind of each startup. Like, specifically extract, you know, Firecrawl is extract. There's, like, Expand AI. There's a whole bunch of, like, extract companies. They just focus on extract. I'm curious. Like, I feel like you guys are going to collide at some point. Like, right now, it's friendly. Everyone's in a blue ocean. At some point, it's going to be valuable enough that there's some turf battle here. I don't think you have a dog in this fight. I think you can mock extract to use an external service if they're better at it than you. But it's just an observation that, like, in the same way that I see each option, each checkbox in the side of custom GPTs becoming a startup or each box in the Karpathy chart being a startup. Like, this is also becoming a thing.
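The act / extract / observe split described above can be wired into a toy agent loop like this. `BrowserTool` is a stand-in interface, not StageHand's actual API surface, and the action-picking logic is a naive placeholder for where an LLM call would go:

```typescript
// Sketch: a browser tool exposing the three primitives, plus one toy agent step
// that observes candidate actions, picks one matching the goal, and acts on it.
interface BrowserTool {
  act(instruction: string): Promise<void>;        // e.g. "click the buy button"
  extract(instruction: string): Promise<unknown>; // natural-language data extraction
  observe(goal?: string): Promise<string[]>;      // candidate actions on the page
}

async function agentStep(tool: BrowserTool, goal: string): Promise<string | null> {
  const actions = await tool.observe(goal);
  // Placeholder for an LLM choosing the appropriate action for the goal.
  const choice = actions.find((a) => a.toLowerCase().includes(goal.toLowerCase())) ?? null;
  if (choice !== null) await tool.act(choice);
  return choice;
}
```

An agent loop would call `agentStep` repeatedly until the high-level goal is met, using `extract` to pull structured results out at the end.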
Yeah.Paul [00:36:41]: I mean, like, so the way StageHand works is that it's MIT-licensed, completely open source. You bring your own API key to your LLM of choice. You could choose your LLM. We don't make any money off of extract, really. We only really make money if you choose to run it with our browser. You don't have to. You can actually use your own browser, a local browser. You know, StageHand is completely open source for that reason. And, yeah, like, I think if you're building really complex web scraping workflows, I don't know if StageHand is the tool for you. I think it's really more if you're building an AI agent that needs a few general tools or if it's doing a lot of, like, web automation-intensive work. But if you're building a scraping company, StageHand is not your thing. You probably want something that's going to, like, get HTML content, you know, convert that to Markdown, query it. That's not what StageHand does. StageHand is more about reliability. I think we focus a lot on reliability and less so on cost optimization and speed at this point.swyx [00:37:33]: I actually feel like StageHand, so the way that StageHand works, it's like, you know, page.act, click on the quick start. Yeah. It's kind of the integration test for the code that you would have to write anyway, like the Puppeteer code that you have to write anyway. And when the page structure changes, because it always does, then this is still the test. This is still the test that I would have to write. Yeah. So it's kind of like a testing framework that doesn't need implementation detail.Paul [00:37:56]: Well, yeah. I mean, Puppeteer, Playwright, and Selenium were all designed as testing frameworks, right? Yeah. And now people are, like, hacking them together to automate the web. I would say, and, like, maybe this is, like, me being too specific. But, like, when I write tests, if the page structure changes without me knowing, I want that test to fail.
So I don't know if, like, AI, like, regenerating that. Like, people are using StageHand for testing. But it's more for, like, usability testing, not, like, testing of, like, does the front end, like, has it changed or not. Okay. But generally where we've seen people, like, really, like, take off is, like, if they're using, you know, something. If they want to build a feature in their application that's kind of like Operator or Deep Research, they're using StageHand to kind of power that tool calling in their own agent loop. Okay. Cool.swyx [00:38:37]: So let's go into Operator, the first big agent launch of the year from OpenAI. Seems like they have a whole bunch scheduled. You were on break and your phone blew up. What's your just general view of computer use agents, which is what they're calling it. The overall category before we go into Open Operator, just the overall promise of Operator. I will observe that I tried it once. It was okay. And I never tried it again.OpenAI's Operator and computer use agentsPaul [00:38:58]: That tracks with my experience, too. Like, I'm a huge fan of the OpenAI team. Like, I do not view Operator as a company killer for Browserbase at all. I think it actually shows people what's possible. I think, like, computer use models make a lot of sense. And what I'm actually most excited about with computer use models is, like, their ability to really take screenshots and reason and output steps. I think that using mouse clicks or mouse coordinates, I've seen that prove to be less reliable than I would like. And I just wonder if that's the right form factor. What we've done with our framework is anchor it to the DOM itself, anchor it to the actual item. So, like, if it's clicking on something, it's clicking on that thing, you know? Like, it's more accurate. No matter where it is. Yeah, exactly. Because it really ties in nicely.
And it can handle, like, the whole viewport in one go, whereas, like, Operator can only handle what it sees. Can you hover? Is hovering a thing that you can do? I don't know if we expose it as a tool directly, but I'm sure there's, like, an API for hovering. Like, move mouse to this position. Yeah, yeah, yeah. I think you can trigger hover, like, via, like, the JavaScript on the DOM itself. But, no, I think, like, when we saw computer use, everyone's eyes lit up because they realized, like, wow, like, AI is going to actually automate work for people. And I think seeing that kind of happen from both of the labs, and I'm sure we're going to see more labs launch computer use models, I'm excited to see all the stuff that people build with it. I think that I'd love to see computer use models, like, controlling a browser on Browserbase. And I think, like, Open Operator, which was, like, our open source version of OpenAI's Operator, was our first take on, like, how can we integrate these models into Browserbase? And we handle the infrastructure and let the labs do the models. I don't have a sense that Operator will be released as an API. I don't know. Maybe it will. I'm curious to see how well that works because I think it's going to be really hard for a company like OpenAI to do things like support CAPTCHA solving or, like, have proxies. Like, I think it's hard for them structurally. Imagine this New York Times headline: OpenAI CAPTCHA Solving. Like, that would be a pretty bad headline. This New York Times headline: Browserbase Solves CAPTCHAs. No one cares. No one cares. And, like, our investors are bored. Like, we're all okay with this, you know? We're building this company knowing that the CAPTCHA solving is short-lived until we figure out how to authenticate good bots. I think it's really hard for a company like OpenAI, who has this brand that's so, so good, to balance with, like, the icky parts of web automation, which it can be kind of complex to solve.
I'm sure OpenAI knows who to call whenever they need you. Yeah, right. I'm sure they'll have a great partnership.Alessio [00:41:23]: And is Open Operator just, like, a marketing thing for you? Like, how do you think about resource allocation? So, you can spin this up very quickly. And now there's all this, like, open deep research, just open all these things that people are building. We started it, you know. You're the original Open. We're the original Open Operator, you know? Is it just, hey, look, this is a demo, but, like, we'll help you build out an actual product for yourself? Like, are you interested in going more of a product route? That's kind of the OpenAI way, right? They started as a model provider and then…Paul [00:41:53]: Yeah, we're not interested in going the product route yet. I view Open Operator as a reference project, you know? Let's show people how to build these things using the infrastructure and models that are out there. And that's what it is. It's, like, Open Operator is very simple. It's an agent loop. It says, like, take a high-level goal, break it down into steps, use tool calling to accomplish those steps. It takes screenshots and feeds those screenshots into an LLM with the step to generate the right action. It uses StageHand under the hood to actually execute this action. It doesn't use a computer use model. And it, like, has a nice interface using the live view that we talked about, the iframe, to embed that into an application. So I felt like people on launch day wanted to figure out how to build their own version of this. And we turned that around really quickly to show them. And I hope we do that with other things like deep research. We don't have a deep research launch yet. I think David from AOMNI actually has an amazing open deep research that he launched. It has, like, 10K GitHub stars now. So he's crushing that.
But I think if people want to build these features natively into their application, they need good reference projects. And I think Open Operator is a good example of that. swyx [00:42:52]: I don't know. Actually, I'm actually pretty bullish on API-driven operator. Because that's the only way that you can sort of, like, once it's reliable enough, obviously. And now we're nowhere near. But, like, give it five years. It'll happen, you know. And then you can sort of spin this up and browsers are working in the background and you don't necessarily have to know. And it just is booking restaurants for you, whatever. I can definitely see that future happening. I had this on the landing page here. This might be slightly out of order. But, you know, you have, like, sort of three use cases for Browserbase. Open Operator. Or this is the operator sort of use case. It's kind of like the workflow automation use case. And it competes with UiPath in the sort of RPA category. Would you agree with that? Yeah, I would agree with that. And then there's Agents we talked about already. And web scraping, which I imagine would be the bulk of your workload right now, right? Paul [00:43:40]: No, not at all. I'd say actually, like, the majority is browser automation. We're kind of expensive for web scraping. Like, I think that if you're building a web scraping product, if you need to do occasional web scraping or you have to do web scraping that works every single time, you want to use browser automation. Yeah. You want to use Browserbase. But if you're building web scraping workflows, what you should do is have a waterfall. You should have the first request is a curl to the website. See if you can get it without even using a browser. And then the second request may be, like, a scraping-specific API. There's, like, a thousand scraping APIs out there that you can use to try and get data. ScrapingBee. ScrapingBee is a great example, right? Yeah.
And then, like, if those two don't work, bring out the heavy hitter. Like, Browserbase will 100% work, right? It will load the page in a real browser, hydrate it. I see. swyx [00:44:21]: Because a lot of pages don't render without JS. swyx [00:44:25]: Yeah, exactly. Paul [00:44:26]: So, I mean, the three big use cases, right? Like, you know, automation, web data collection, and then, you know, if you're building anything agentic that needs, like, a browser tool, you want to use Browserbase. Alessio [00:44:35]: Is there any use case that, like, you were super surprised by that people might not even think about? Oh, yeah. Or is it, yeah, anything that you can share? The long tail is crazy. Yeah.
Surprising use cases of Browserbase
Paul [00:44:44]: One of the case studies on our website that I think is the most interesting is this company called Benny. So, the way that it works is if you're on food stamps in the United States, you can actually get rebates if you buy certain things. Yeah. You buy some vegetables. You submit your receipt to the government. They'll give you a little rebate back. Say, hey, thanks for buying vegetables. It's good for you. That process of submitting that receipt is very painful. And the way Benny works is you use their app to take a photo of your receipt, and then Benny will go submit that receipt for you and then deposit the money into your account. That's actually using no AI at all. It's all, like, hard-coded scripts. They maintain the scripts. They've been doing a great job. And they build this amazing consumer app. But it's an example of, like, all these, like, tedious workflows that people have to do to kind of go about their business. And they're doing it for the sake of their day-to-day lives. And I had never known about, like, food stamp rebates or the complex forms you have to fill out to claim them. But the world is powered by millions and millions of tedious forms, visas. You know, Emirate Lighthouse is a customer, right?
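The waterfall Paul describes — a plain HTTP request first, then a scraping-specific API, then a real browser as the heavy hitter — can be sketched as below. The two fallback functions are hypothetical stand-ins; a real version would call a scraping API such as ScrapingBee and then a Browserbase browser session:

```python
# A sketch of a scraping waterfall: try the cheapest tier first and
# fall through to progressively heavier (and pricier) tiers.
import urllib.request

def fetch_via_curl(url):
    # Tier 1: a plain HTTP request -- works for static, server-rendered pages.
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode()

def fetch_via_scraping_api(url):
    # Tier 2: stand-in for a scraping-specific API (e.g. ScrapingBee).
    raise NotImplementedError("wire up your scraping API of choice here")

def fetch_via_browser(url):
    # Tier 3: stand-in for loading the page in a real, JS-hydrated browser.
    raise NotImplementedError("wire up a browser session here")

def fetch(url):
    """Walk the waterfall, returning the first tier's result that works."""
    for tier in (fetch_via_curl, fetch_via_scraping_api, fetch_via_browser):
        try:
            return tier(url)
        except Exception:
            continue  # fall through to the next, heavier tier
    raise RuntimeError(f"all tiers failed for {url}")
```

The design point is cost ordering: most URLs never need a browser, so the expensive tier only runs when the cheap ones fail.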
You know, they do the O1 visa. Millions and millions of forms are taking away humans' time. And I hope that Browserbase can help power software that automates away the web forms that we don't need anymore. Yeah. swyx [00:45:49]: I mean, I'm very supportive of that. I mean, forms. I do think, like, government itself is a big part of it. I think the government itself should embrace AI more to do more sort of human-friendly form filling. Mm-hmm. But I'm not optimistic. I'm not holding my breath. Yeah. We'll see. Okay. I think I'm about to zoom out. I have a little brief thing on computer use, and then we can talk about founder stuff, which is, I tend to think of developer tooling markets in impossible triangles, where everyone starts in a niche, and then they start to branch out. So I already hinted at a little bit of this, right? We mentioned Morph. We mentioned E2B. We mentioned Firecrawl. And then there's Browserbase. So there's, like, all this stuff of, like, have a serverless virtual computer that you give to an agent and let them do stuff with it. And there's various ways of connecting it to the internet. You can just connect to a search API, like SERP API, whatever other, like, EXA is another one. That's for searching. You can also have a JSON markdown extractor, which is Firecrawl. Or you can have a virtual browser like Browserbase, or you can have a virtual machine like Morph. And then there's also maybe, like, a virtual sort of code environment, like Code Interpreter. So, like, there's just, like, a bunch of different ways to tackle the problem of give a computer to an agent. And I'm just kind of wondering if you see, like, everyone's just, like, happily coexisting in their respective niches. And as a developer, I just go and pick, like, a shopping basket of one of each. Or do you think that eventually, people will collide?
Future of browser automation and market competition
Paul [00:47:18]: I think that currently it's not a zero-sum market.
Like, I think we're talking about... I think we're talking about all of knowledge work that people do that can be automated online. All of these, like, trillions of hours that happen online where people are working. And I think that there's so much software to be built that, like, I tend not to think about how these companies will collide. I just try to solve the problem as best as I can and make this specific piece of infrastructure, which I think is an important primitive, the best I possibly can. And yeah. I think there's players that are actually going to like it. I think there's players that are going to launch, like, over-the-top, you know, platforms, like agent platforms that have all these tools built in, right? Like, who's building the Rippling for agent tools that has the search tool, the browser tool, the operating system tool, right? There are some. There are some. There are some, right? And I think in the end, what I have seen in my time as a developer, and I look at all the favorite tools that I have, is that, like, for tools and primitives with sufficient levels of complexity, you need to have a solution that's really bespoke to that primitive, you know? And I am sufficiently convinced that the browser is complex enough to deserve a primitive. Obviously, I have to. I'm the founder of Browserbase, right? I'm talking my book. But, like, I think maybe I can give you one spicy take against, like, maybe just whole OS running. I think that when I look at computer use when it first came out, I saw that the majority of use cases for computer use were controlling a browser. And do we really need to run an entire operating system just to control a browser? I don't think so. I don't think that's necessary. You know, Browserbase can run browsers for way cheaper than you can if you're running a full-fledged OS with a GUI, you know, operating system. And I think that's just an advantage of the browser.
It is, like, browsers are little OSs, and you can run them very efficiently if you orchestrate it well. And I think that allows us to offer 90% of the, you know, functionality in the platform needed at 10% of the cost of running a full OS. Yeah.
Open Operator: Browserbase's Open-Source Alternative
swyx [00:49:16]: I definitely see the logic in that. There's a Marc Andreessen quote. I don't know if you know this one. Where he basically observed that the browser is turning the operating system into a poorly debugged set of device drivers, because most of the apps have moved from the OS to the browser. So you can just run browsers. Paul [00:49:31]: There's a place for OSs, too. Like, I think that there are some applications that only run on Windows operating systems. And Eric from pig.dev in this upcoming YC batch, or last YC batch, like, he's building a way to run tons of Windows operating systems for you to control with your agent. And like, there's some legacy EHR systems that only run on Internet Explorer. Yeah. Paul [00:49:54]: I think that's it. I think, like, there are use cases for specific operating systems for specific legacy software. And like, I'm excited to see what he does with that. I just wanted to give a shout out to the pig.dev website. swyx [00:50:06]: The pigs jump when you click on them. Yeah. That's great. Paul [00:50:08]: Eric, he's the former co-founder of banana.dev, too. swyx [00:50:11]: Oh, that Eric. Yeah. That Eric. Okay. Well, he abandoned bananas for pigs. I hope he doesn't start going around with pigs now. Alessio [00:50:18]: Like he was going around with bananas. A little toy pig. Yeah. Yeah. I love that. What else are we missing? I think we covered a lot of, like, the Browserbase product history, but. What do you wish people asked you? Yeah. Paul [00:50:29]: I wish people asked me more about, like, what will the future of software look like? Because I think that's really where I've spent a lot of time thinking about why to do Browserbase.
Like, for me, starting a company is like a means of last resort. Like, you shouldn't start a company unless you absolutely have to. And I remain convinced that the future of software is software that you're going to click a button and it's going to do stuff on your behalf. Right now with software, you click a button and it maybe, like, calls a backend API and, like, computes some numbers. It, like, modifies some text, whatever. But the future of software is software using software. So, I may log into my accounting website for my business, click a button, and it's going to go load up my Gmail, search my emails, find the thing, upload the receipt, and then comment it for me. Right? And it may do that using APIs, maybe a browser. I don't know. I think it's a little bit of both. But that's completely different from how we've built software so far. And I think that future of software has different infrastructure requirements. It's going to require different UIs. It's going to require different pieces of infrastructure. I think the browser infrastructure is one piece that fits into that, along with all the other categories you mentioned. So, I think that it's going to require developers to think differently about how they've built software for, you know
The parliamentary standing committee on natural resources has released a report on illegal lithium mining at Uis in the Erongo Region, following a motion by former member of parliament Henny Seibeb in April 2024. The report highlights widespread unauthorised mining driven by weak regulation and foreign involvement. Many miners operate without licences or environmental clearances, leading to lost government revenue, unfair competition for legitimate businesses, and the exploitation of local workers without safety protections. There is serious environmental damage, including land degradation, deforestation and water pollution. The report calls for stricter enforcement of the law, community involvement and sustainable mining policies to address the growing problems. Kosmos 94.1 News spoke to former LPM member of parliament Henny Seibeb, who says the report is incomplete.
In this episode, I was joined by Jérôme Laban, the CTO of the Uno Platform. We chatted about the Uno Platform itself, which is a cross-platform framework for building single-codebase applications that run on Windows, iOS, Android, macOS, Linux, and the web via WebAssembly. We also discussed a new Hot Design feature, which is a designer/builder that, during development, becomes part of your application at runtime, so you can live build/edit your UIs with an easy drag-and-drop interface whilst your application is running! Given you're building your UI at runtime, the designer UI also has access to your real data and its properties for databinding, and renders them as you edit!
Jérôme Laban is the Chief Technology Officer at Uno Platform. With over two decades of experience in software development, Jérôme is a recognized expert in cross-platform development and .NET technologies, and has been awarded the Microsoft MVP award for many years. At Uno Platform, Jérôme leads the technical vision and development of the framework, empowering developers worldwide to create rich, performant applications across all platforms. A passionate advocate for open-source and community-driven development, Jérôme frequently shares his insights at global conferences, webinars, and on Twitch. For a full list of show notes, or to add comments - please see the website here
This episode is sponsored by IAMONES. Visit https://iamones.ai/idac to learn more.
In this sponsored episode of the Identity at the Center podcast, Jim welcomes Andrea Rossi from IAMONES, an innovative conversational identity governance platform. They discuss the revolutionary approach IAMONES takes in simplifying identity and access management (IAM) using large language models (LLM). Andrea explains how IAMONES aims to eliminate the need for complex roles and middle layers by providing business users with direct and comprehensible access to system functions. The discussion dives into the practical applications of LLM in enhancing existing IAM systems, particularly focusing on making permissions and entitlements more understandable and manageable for business users, auditors, and administrators. The episode also highlights the ease of integrating IAMONES with existing identity infrastructures and offers insights into reducing the burden of maintaining multilingual UIs. Tune in to learn more about the potential of AI in transforming IAM.
Chapters
00:00 Introduction to Simplifying Access Management
02:11 Welcome to the Identity at the Center Podcast
02:23 Sponsor Spotlight: Andrea Rossi from IAMONES
05:04 The Story Behind the Name 'IAMONES'
08:16 Conversational Identity and Large Language Models
12:35 Revolutionizing IGA with AI
17:22 The Future of AI in Identity Management
23:08 Enhancing IGA Configuration with Natural Language
31:37 Understanding Outcomes in Identity Governance
32:09 The Shift from RBAC to PBAC
33:35 Challenges with Role Explosion
34:02 Introducing Temporal Identity Graph
35:27 Simplifying Access for Business Users
39:36 Ensuring Proper Data Visibility
46:06 Implementing the Identity Gateway
48:45 Customer Feedback and Success Metrics
52:07 Future of AI in Identity Management
56:21 Travel Tips for Visiting Italy
Connect with Andrea: https://www.linkedin.com/in/arossi67
Learn more about IAMONES: https://iamones.ai/idac
Ramones - Blitzkrieg Bop (Official Music Video):
https://www.youtube.com/watch?v=268C3N2dDYk
Microsoft's Satya Nadella on the evolution of SaaS: https://www.youtube.com/watch?v=a_RjOhCkhvQ&t=22s
Connect with us on LinkedIn:
Jim McDonald: https://www.linkedin.com/in/jimmcdonaldpmp/
Jeff Steadman: https://www.linkedin.com/in/jeffsteadman/
Visit the show on the web at idacpodcast.com and watch at https://www.youtube.com/@idacpodcast
In this episode of PodRocket, Dev Agrawal, dev advocate and developer, talks about building efficient asynchronous UIs, the challenges and solutions for handling complex state management, utilizing React and Solid frameworks, and the potential of suspense boundaries and transitions in modern web development. Links https://devagr.me https://github.com/devagrawal09 https://www.linkedin.com/in/dev-agrawal-88449b157 https://medium.com/@devagrawal09 https://www.youtube.com/channel/UCDXzM8ijdxkVA6NbQiQCKag https://x.com/devagrawal09 https://events.codemash.org/2025CodeMashConference#/agendaday=4&lang=en&sessionId=76186000004278631&viewMode=2 We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com (mailto:emily.kochanekketner@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today (https://logrocket.com/signup/?pdr). Special Guest: Dev Agrawal.
Mike Hermann accepted the role of Athletic Director at the University of Illinois Springfield in summer 2024. He spoke to Community Voices about his career, how his desire to get back to Division II sports led him to UIS, and his plans for athletics at UIS and in the community. He also shared the successes UIS athletes have in sports and in the classroom.
Topics covered in this episode: Terminals & Shells Winloop: An Alternative library for uvloop compatibility with Windows Ruff & uv uv-secure Extras Joke Watch on YouTube About the show Sponsored by us! Support our work through: Our courses at Talk Python Training The Complete pytest Course Patreon Supporters Connect with the hosts Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky) Brian: @brianokken@fosstodon.org / @brianokken.bsky.social Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky) Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too. Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? Add your name and email to our friends of the show list, we'll never share it. Brian #1: Terminals & Shells Ghostty is out Started by Mitchell Hashimoto, one of the co-founders of HashiCorp “Ghostty is a terminal emulator that differentiates itself by being fast, feature-rich, and native. While there are many excellent terminal emulators available, they all force you to choose between speed, features, or native UIs. Ghostty provides all three.” Currently for macOS & Linux (Windows planned) Version 1.0.1 released Dec 31, announced in Oct Features: cross-platform, windows, tabs, and splits, Themes, Ligatures, … Shell Integration: Some Ghostty features require integrating with your shell. Ghostty can automatically inject shell integration for bash, zsh, fish, and elvish. Fish is moving to Rust “fish is a smart and user-friendly command line shell with clever features that just work, without needing an advanced degree in bash scriptology.” “fish 4.0 is a big upgrade. It's got lots of new features to make using the command line easier and more enjoyable, such as more natural key binding and expanded history search.
And under the hood, we've rebuilt the foundation in Rust.” Michael #2: Winloop: An Alternative library for uvloop compatibility with Windows via Owen Lamont An alternative library for uvloop compatibility with Windows. It always felt disappointing that libuv is available for Windows, yet uvloop was never compatible with Windows. Brian #3: Ruff & uv Ruff 0.9.0 has a new 2025 style guide f-string formatting improvements Now formats expressions interpolated inside f-string curly braces Quotes normalized according to project config Unnecessary escapes removed Examines interpolated expressions to see if splitting the string over multiple lines is ok Other changes too, but it's the f-string improvements I'm excited about. Python 3.14.0a3 is out, and available with uv uv python install 3.14 --preview Michael #4: uv-secure by Owen Lamont (yes again :) ) This tool will scan PyPI dependencies listed in your uv.lock files (or uv generated requirements.txt files) and check for known vulnerabilities listed against those packages and versions in the PyPI JSON API. I don't intend uv-secure to ever create virtual environments or do dependency resolution - the plan is to leave that all to uv since it does that so well and just target lock files and fully pinned and dependency-resolved requirements.txt files. Works “out of the box” with a requirements.txt from uv pip compile. Extras Brian: Test & Code Season 2: pytest plugins Season 1 was something like 223 episodes over 9.5 years Started the summer of 2015 Send in pytest plugin suggestions to Brian on BlueSky or Mastodon or the contact form at pythontest.com Michael: Episode Deep Dive feature at Talk Python Feedback on social media: Those deep dives look really handy. Yes, those ARE really handy! Thanks for doing that. wow, yes please! This is awesome. Wow, this is amazing. … It helps when going back to check something (without having to re-listen).
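Since winloop positions itself as a drop-in uvloop replacement for Windows, the usual uvloop setup pattern can be made cross-platform with a small guard. A minimal sketch, assuming winloop mirrors uvloop's install() entry point (check its README), falling back to asyncio's default loop if neither package is installed:

```python
import asyncio
import sys

# Pick the fastest available event loop for the current platform:
# winloop on Windows, uvloop elsewhere, stock asyncio as the fallback.
try:
    if sys.platform == "win32":
        import winloop
        winloop.install()  # assumed to mirror uvloop.install()
    else:
        import uvloop
        uvloop.install()
except ImportError:
    pass  # neither installed: asyncio's default event loop is used

async def main() -> str:
    # The application code is identical regardless of which loop won.
    await asyncio.sleep(0)
    return "ok"

print(asyncio.run(main()))
```

The try/except keeps the accelerated loops as an optional optimization rather than a hard dependency, which suits libraries and tools that must also run where neither package is available.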
PyCon Austria at.pycon.org Heavy metal status codes Beautiful Soup feedback CFA via Sumana Harihareswara Joke: That's a stupid cup
Send us a text. In this episode, Sophie Fell, Head of Paid Media at Liberty Marketing Group, shares her journey in digital marketing after motherhood, discussing her transition from managing Facebook posts for a startup to overseeing multi-million-dollar ad campaigns. She emphasizes the value of mentorship, data-driven decision-making, and the challenges faced with top-of-funnel PPC strategies. She highlights the advantages for beginners of being a “t-shaped marketer,” advocating for a blend of broad knowledge of all platforms with deeper specializations. The episode also explores the impact of AI on digital marketing and the evolving role of paid media professionals. Episode Highlights: Sophie built a successful career in PPC and paid media without a formal university degree. She started by managing Facebook posts for a startup and developed her skills through hands-on experience, emphasizing that practical knowledge often outweighs formal education in the marketing industry. Sophie views mentorship as a collaborative relationship built on trust and mutual respect. She credits her mentor, Kevin Ashley, for shaping her career and now mentors others within and outside of her agency, focusing on helping mentees define and achieve their goals. Sophie and Danny share the belief in the advantages of being a “t-shaped marketer”: possessing a broad understanding across all platforms while building deeper expertise in one or two specific areas as they begin to align with career aspirations. Sophie highlights the critical role of data analysis and optimization in PPC, especially for beginners in the industry. While learning platform UIs is straightforward, understanding metrics, making data-driven decisions, and translating those insights into actions are what set top professionals apart. Many brands fail with top-of-funnel Google Ads campaigns because they focus on the wrong metrics, like direct conversions.
Sophie advocates for redefining KPIs to measure engagement and awareness, which can better justify top-of-funnel strategies.
Episode Links: Sophie's LinkedIn | Sophie's X | Sophie's Book:
Follow The Digital Marketing Mentor: Website and Blog: thedmmentor.com Instagram: @thedmmentor Linkedin: @thedmmentor YouTube: @thedmmentor
Interested in Digital Marketing Services, Careers, or Courses? Check out more from the TDMM Family: Optidge.com - Full Service Digital Marketing Agency specializing in SEO, PPC, Paid Social, and Lead Generation efforts for established B2C and B2B businesses and organizations. ODEOacademy.com - Digital Marketing online education and course platform. ODEO gives you solid digital marketing knowledge to launch/boost your career or understand your business's digital marketing strategy.