All the bot hype has resulted in some hilariously bad security vulnerabilities. Who could have seen that coming? We'll cover some of the worst.

CALL 1-774-462-5667

Boost This Episode: Grab Sats with River! ⚡ Strike Makes it Quick to Grab Sats in 100s of Countries. Boost with Fountain.
Web Zap This Episode: Web Zap the Show However You Want!
Become a Member: Monthly Jupiter.Party Discount - Only for The Launch! Annual Jupiter.Party Discount - Launch Exclusive!

Show Notes:
Seahawks fans from around the world descend on Bay Area for Super Bowl 2026 - YouTube
Pro Football Champion 2026 Predictions & Odds | Polymarket
Moltbook: Where AI agents go to socialize with peers - YouTube
Reuters: Moltbook social media site for AI agents had major security hole
"You all do realize @moltbook is just a REST API and you can literally post anything you want there"
RentAHuman.ai - Hire Humans for AI Agents | MCP Integration
Git & GitHub Crash Course for Beginners [2026] - YouTube
Git and GitHub - Full Course - YouTube
Cups and Saucers For All
Kinda Creepy Kings Palace in Utah
Don't Judge A Book by its Cover in New Mex

Music: Row Your Boat: Justin Lantrip
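The quoted complaint, that Moltbook is "just a REST API and you can literally post anything you want there," describes a missing authentication check. As a rough sketch (the endpoint, fields, and payload below are invented for illustration; Moltbook's real API is not documented here), the issue is that a write request carries no credentials at all, so nothing ties a post to a real agent:

```python
import json
from urllib.request import Request

# Hypothetical endpoint and payload -- not Moltbook's actual API. The point:
# the request carries no Authorization header or API key, so a service that
# accepts it will let anyone post as any agent.
payload = {"agent": "totally-real-bot", "body": "posted by whoever sent this"}
req = Request(
    "https://moltbook.example/api/v1/posts",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.get_method(), req.has_header("Authorization"))  # POST False
```

Any HTTP client can build and send this; without a server-side identity check, the `agent` field is whatever the caller claims it is.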
It's an annual tradition, the AVNation "Best of" Awards, and Chi Hang Lo is up for AV Professional of the Year, along with UCLA's Classroom Modernization Pilot, and HETMA for Best Technical Support! Take a listen as Joe Way sits down with Chi to discuss this honor and why you should #VoteForChi.

Joe Way drops a special Friday episode to spotlight the AV Nation Awards (Readers' Choice "Best of 2025") and rally the higher ed community around three finalists: Chi Hang Lo for AV Professional of the Year, UCLA's Classroom Modernization Pilot for Project of the Year, and HETMA for Best Technical Support. Chi joins to share what the nomination represents, why the UCLA pilot is different, and how the higher ed community lifts each other up through collaboration, shared evaluation, and real-world support. The episode closes with a clear call: go vote, support the people and projects pushing the industry forward, and keep building a better future together.

Vote now: https://www.avnation.tv/avnation-best-of-2025-awards/

Featured Guest
Chi Hang Lo - Manager, AV/IT Solutions (UCLA). Leads a team designing and delivering scalable AV + IT solutions that support UCLA's learning environments and broader smart campus vision.

What You'll Hear in This Episode

1) Why this episode, and why now
A bonus Friday release to interrupt the usual schedule and highlight the AV Nation Awards as a uniquely people-driven recognition. Joe frames Readers' Choice as a rare moment for the industry to advocate for the people, projects, and platforms that matter most to the community.

2) The three higher ed finalists
Chi Hang Lo - AV Professional of the Year finalist. UCLA - Project of the Year finalist for the Classroom Modernization Pilot. HETMA - Technical Support finalist. Joe emphasizes how significant it is to see higher ed represented across multiple major categories in the finals.

3) The UCLA Classroom Modernization Pilot: what makes it special
Chi explains why the pilot stands out as more than a refresh; it's a different way of thinking: moving from traditional room-by-room AV to a cloud-first, scalable control approach designed for enterprise scale (think: up to 1,000 spaces); leveraging web technologies, REST APIs, and integrations to enable flexibility, interoperability, and future growth; building for adaptability so the system isn't locked to one manufacturer ecosystem, prioritizing integration-first design and long-term scalability; and aiming toward a platform approach: "AV as a platform" that can support more than AV control.

4) The "why" behind going cloud-first
Joe asks the question everyone asks: why not just keep doing "simple" AV? Chi's answer points to preparing the team, and the campus, for the future skill sets needed in modern learning environments; meeting expanding demands for conferencing, capture, collaboration, active learning, and rapid shifts in pedagogy; and treating AV as part of a broader AV/IT solutions ecosystem, not a standalone technical island.

5) Smart campus, not just AV
The conversation expands into the broader vision. AV systems already contain meaningful data (occupancy, environmental signals, usage patterns); the opportunity is connecting it to the rest of campus. Collaboration across departments (facilities, security, events, transportation, IT, and more) becomes possible when you build a platform that can integrate. Chi shares work toward data aggregation and dashboards, including collaboration with a Data Lake approach to create better operational insight and decision-making.

6) The team behind the pilot
Chi introduces the core members of his team and their contributions: project coordination and process leadership (including agile/scrum-style development support); technical design and 2D/3D modeling workflows, with a standards-based design language for facilities alignment; software/automation engineering, signal distribution/recording, and architecture to connect devices to the cloud; and partnerships with manufacturers to improve firmware/APIs and enable deeper integration at scale. Joe underscores how innovation required close collaboration between UCLA, solution providers, and manufacturers: engineering alongside engineering.

7) Career growth: from technical expert to leader
Joe shifts the conversation to professional development: what changes when you move from "doing" to "leading." Chi shares leadership themes that have guided him: staying humble, collaborative, and relationship-driven; balancing strong technical conviction with empathy and communication; creating opportunities for the next generation by helping people navigate common roadblocks (communication, attitude, relationship dynamics); and treating the industry like a community, because you'll keep working with the same people for years.

8) The HETMA community impact
Chi shares how community support, especially collaborative technology evaluation and shared learning, helps smaller institutions gain access, influence, and manufacturer attention they might not get alone. Joe reinforces the higher ed ethos: we're collaborators, not competitors.

Memorable Moments / Quotes (paraphrased)
The awards matter because the people choose; it's advocacy, not just adjudication. The pilot isn't just "AV"; it's building infrastructure for a smart campus platform. The real work is turning AV data into insight and integration that improves the campus experience.

Calls to Action
Vote for Chi Hang Lo - AV Professional of the Year. Vote for UCLA's Classroom Modernization Pilot - Project of the Year. Vote for HETMA - Technical Support. And vote for the products, people, and projects you believe represent the best of 2025.

Vote now: https://www.avnation.tv/avnation-best-of-2025-awards/
Connect with Chi Hang Lo: https://www.linkedin.com/in/chihanglo/
An airhacks.fm conversation with Alvaro Hernandez (@ahachete) about: discussion about LLMs generating Java code with BCE patterns and architectural rules, Java being 20-30% better for LLM code generation than Python and TypeScript, embedding business knowledge in Java source code for LLM context, StackGres as a curated opinionated stack for running Postgres on Kubernetes, Postgres requiring external tools for connection pooling and high availability and backup and monitoring, StackGres as a Helm package and Kubernetes operator, comparison with Oxide hardware for on-premise cloud environments, experimenting with Incus for system containers and VMs, limitations of Ansible for infrastructure automation and code reuse, Kubernetes as an API-driven architecture abstracting compute and storage, Custom Resource Definitions (CRDs) for declarative Postgres cluster management, StackGres supporting sharding with automated multi-cluster deployment, 13 lines of YAML to create 60-node sharded clusters, three interfaces for StackGres including CRDs and web console and REST API, operator written in Java with Quarkus unlike typical Go-based operators, Google study showing Java faster than Go, GraalVM native compilation for 80MB container images versus 400-500MB JVM images, fabric8 Kubernetes client for API communication, reconciliation cycle running every 10 seconds to maintain desired state, pod local controller as Quarkus sidecar for local Postgres operations, dynamic extension installation without rebuilding container images, gRPC bi-directional communication between control plane and control nodes, inverse connection pattern where nodes initiate connections to control plane, comparison with Jini and JavaSpaces leasing concepts from Sun Microsystems, a quarter million lines of Java code in the operator, mostly POJOs predating records, PostgreSQL configuration validation with 300+ parameters, automated tuning applied by default in StackGres, potential for LLM-driven optimization with clone clusters for testing, Framework Computer laptop automation with Ubuntu auto-install and Ansible and Nix, five-to-ten-minute full system reinstall including BIOS updates

Alvaro Hernandez on Twitter: @ahachete
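The "13 lines of YAML to create 60-node sharded clusters" claim refers to StackGres's declarative CRD interface: you describe the cluster shape and the operator reconciles the rest. As a rough illustration only (the field names below are approximate, not copied from the StackGres CRD reference), a manifest in that spirit looks like:

```yaml
# Illustrative sketch -- consult the StackGres CRD reference for exact
# apiVersion, kind, and field names before using anything like this.
apiVersion: stackgres.io/v1alpha1
kind: SGShardedCluster
metadata:
  name: demo
spec:
  postgres:
    version: "17"
  coordinator:
    instances: 3
  shards:
    clusters: 19          # 19 shards x 3 instances + 3 coordinators ~= 60 pods
    instancesPerCluster: 3
```

The point is the ratio: a dozen-odd declarative lines stand in for the pooling, replication, failover, and backup wiring the operator's reconciliation loop (the 10-second cycle mentioned above) maintains continuously.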
An Application Programming Interface (API) enables communication between programs. It has strictly defined rules that tell us how we can request information from an application and what we can expect in response. Is this knowledge useful for tech writers? In our view, absolutely, especially if they write documentation for developers. This episode summarizes the material we covered during the free "Using REST APIs for Technical Writers" workshop, which we ran on 28 November 2025 in collaboration with Content Bytes. We talk about what an API is, how to formulate a request that gets us a response, what types of requests and responses exist, which tools we can use, and other basics that make it easier for technical scribes to enter the world of interacting with applications through APIs. The sounds used in the episode come from the collection "107 Free Retro Game Sounds," available at https://dominik-braun.net and distributed under the Creative Commons CC BY 4.0 license (https://creativecommons.org/licenses/by/4.0/).

Links:
CAKE conf: https://cakeconf.contentbytes.pl/
"API", Wikipedia: https://en.wikipedia.org/wiki/API
"What Is a REST API? Examples, Uses, and Challenges", Postman Blog: https://blog.postman.com/rest-api-examples/
"SOAP", Wikipedia: https://en.wikipedia.org/wiki/SOAP
Swagger UI: https://swagger.io/tools/swagger-ui/
OpenAPI: https://www.openapis.org/
cURL: https://curl.se/
Visual Studio (VS) Code: https://code.visualstudio.com
REST Client: https://marketplace.visualstudio.com/items?itemName=humao.rest-client
Postman API client: https://www.postman.com/product/api-client/
Requestly: https://requestly.com/
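The request/response basics the workshop covers can be shown end to end in one self-contained sketch: a tiny local HTTP server plays the role of the API, and a client sends a GET request and parses the JSON response. The `/articles/42` resource and its fields are invented for illustration; any JSON-over-HTTP API follows the same shape.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# A tiny local API so the whole exchange runs offline.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"id": 42, "title": "Getting started"}).encode()
        self.send_response(200)                       # status line
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                        # response body

    def log_message(self, *args):                     # keep output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)        # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/articles/42"
with urlopen(url) as resp:                            # the client's GET request
    status = resp.status                              # 200 means success
    article = json.loads(resp.read())                 # JSON body becomes a dict

server.shutdown()
print(status, article["title"])
```

Tools like cURL, Postman, or the VS Code REST Client extension do exactly what the `urlopen` call does here, just with a friendlier interface for composing the request and inspecting the response.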
Data Science - 8 December 2025, Jochen
✏️ Subscribe https://youtu.be/rMiaGSWPZ1k WordPress 6.9 brings the Abilities API: the "language" AI needs to act on your site. In episode 240 of Negocios y WordPress we cover a very down-to-earth mix: WordPress, artificial intelligence, tools, and "freelancer life" (chaos included). But the core of the episode was clear: WordPress 6.9 and the Abilities API, a piece that, if you work with the web and automation, changes your mental map. And yes, we also get into Stripe + Make, price increases, Kit Digital, productivity with Todoist, AI in Gmail with Gemini, and that classic "I have 20 subscriptions and should cancel half of them."

WordPress 6.9 and the Abilities API: the first serious step toward "native" AI. WordPress has been moving toward AI for a while, but 6.9 marks a shift: it is no longer "a plugin that does things" but infrastructure. The Abilities API introduces a central registry where WordPress core, themes, and plugins can declare "things they know how to do" in a standard way: what they do, what inputs they accept, what they return, and what permissions they require. That turns WordPress into a discoverable system for agents and automations. If you take one sentence away from the episode, it is this: Abilities = a self-documenting API so an AI doesn't have to "guess" your WordPress. Instead of asking a model to improvise (and invent macros, endpoints, or parameters), you hand it a catalog of real actions.

What the Abilities API is in plain terms, and why it isn't only for AI. Abilities as "actions with a contract": each ability is a concrete action, for example "count posts," "create a product," "generate a report," "create a CPT," and so on. It is registered in WordPress with wp_register_ability(), and registration must happen in the wp_abilities_api_init hook. The interesting part is the contract. Input: what data it needs (schema). Output: what it returns (schema). Permissions: who may execute it. Callback: the actual function that runs.
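The ability "contract" just described (name, input schema, output schema, permission check, callback) can be sketched in a language-neutral way. To be clear, this is not WordPress's actual PHP API (that is `wp_register_ability()`), just the shape of what a registry like this holds, with an invented "count-posts" ability:

```python
# A toy ability registry illustrating the contract idea -- not WordPress code.
REGISTRY = {}

def register_ability(name, input_schema, output_schema, permission, callback):
    REGISTRY[name] = {
        "input": input_schema, "output": output_schema,
        "permission": permission, "callback": callback,
    }

def run_ability(name, user, payload):
    ability = REGISTRY[name]
    if not ability["permission"](user):       # permission gate runs first
        raise PermissionError(name)
    return ability["callback"](payload)       # then the real action

register_ability(
    "count-posts",
    input_schema={"post_type": "string"},     # what the caller must supply
    output_schema={"count": "integer"},       # what the caller gets back
    permission=lambda user: user == "editor", # who may run it
    callback=lambda payload: {"count": 12},   # stand-in for a real DB query
)

print(run_ability("count-posts", "editor", {"post_type": "post"}))
```

Because every entry declares its schemas and permissions up front, an agent can enumerate `REGISTRY` and know what exists, what to send, and what comes back, instead of guessing; that enumeration step is exactly what the MCP pairing discussed below relies on.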
This serves not only ChatGPT or Claude; it serves any integration that wants to operate reliably and repeatably. Abilities + MCP: the combo that's coming. The episode ties this to MCP (Model Context Protocol): if MCP is the "protocol" for connecting an agent to tools, Abilities is the "map" of the tools available inside WordPress. That's why it reads as a prerequisite: without abilities, the agent can "connect," but it doesn't know what it can do or how to do it.

Real use cases: WooCommerce MCP and JetEngine "on steroids." WooCommerce: talking to your store (without touching the UI). We mentioned that WooCommerce is already exploring MCP as a feature, enabled from advanced settings (beta), to expose store operations to AI clients with authentication and permissions. That opens the door to things like: "What did I sell this week?" "Give me a report of products with falling sales." "Create a discount for this category." "Review pending orders and group them by status." WooCommerce frames it as a standard MCP integration so assistants can interact with the store.

JetEngine: when a plugin becomes "operable." The episode's JetEngine (Crocoblock) example is exactly what makes this approach powerful: if JetEngine exposes abilities to create CPTs, metaboxes, taxonomies, queries, and so on, the agent no longer "hallucinates" how things are configured: it executes the official operations the plugin offers. That reduces friction, reduces errors, and, above all, reduces the classic "yes, yes... you do that with a macro that doesn't exist." (For reference, Crocoblock has been publishing JetEngine 3.8 progress around MCP/AI on its channels.)

WordPress isn't running late: the AI Team and "AI Building Blocks." Another important idea from the episode: WordPress is not "detached" from the AI world. There is organized work inside the ecosystem, with initiatives like AI Building Blocks and the AI team itself publishing progress and guides.
On top of that, Automattic is pushing from its side too (WordPress.com, experiments, etc.). The underlying message: AI will arrive "through core" and through the big plugins.

Real business: when you raise prices and discover Stripe teaches you more than a course. In the "real life" segment, we told a very typical but well-grounded story: a price increase driven by a change in costs (in the episode, the poultry-market situation) and how it forces you to master: Stripe (products, prices, subscriptions, subscription items); Airtable as the source of control ("current price ID" vs. "previous price ID" fields); Make as the orchestrator; and SendGrid for notifying customers. The key technical point we stress: in Stripe you don't "edit" a price used by old subscriptions; you attach a new price to the subscription (without wrecking the history). That is what lets you roll out an orderly increase. (Good reminder: useful knowledge usually comes from fixing messes in production.)

Productivity and tools: fewer apps, better systems. Todoist: priorities and filters so old tasks don't "boss you around." We discussed something very common: it's not that the system doesn't work, it's that you were using it in a way that pushed you toward tasks you didn't want to do today. The fix was redefining how priorities are used, to separate "maintenance I want out of the way" from "deep work." Also, the idea of moving a big strategy into a project (so it exists and gets executed) is gold: if it isn't in the system, it stays in the "mental backlog."

WordPress methodology: your default "toolbox." Here a very specialist topic came up: professionalization isn't just writing better emails to clients, it's having repeatable decisions: Which caching plugin do you use for which server? How do you do staging? Which maintenance mode do you enable? Which stack do you always apply?
Concrete names come up, like InstaWP for environments/staging (with plans and a site-management focus) and the SWIS Performance plugin (from the EWWW ecosystem) as a lightweight alternative for performance/caching. There is also the note that WP Rocket has been moving prices/plans around in 2025 (which, added to everything else, reinforces the need to simplify subscriptions).

Practical AI (no hot air): Gemini in Gmail, Meet, and the real cost of time. There was a real "now this works" moment: Gemini inside Gmail generating drafts with the thread's actual context ("Help me write"), which is not the same as an external writing tool that gets confused about who said what. Google ships this as functionality built into Gmail/Workspace, with draft generation, suggestions, summaries, and so on. The value of "notes" in Meet also came up, along with the idea of replacing external tools whenever the integration reduces friction (and RAM). Key point: it's not the monthly price, it's how much time it gives back to you.

Manus and the browser as an "agent": interesting promise, mixed results. Manus was mentioned as a tool that, for "find alternatives" tasks (pretty screenshots, Pretty Snap style), delivered no miracles compared with ChatGPT. But something relevant also came up: Manus launched Browser Operator, an extension that turns your Chromium browser into an environment where the agent acts "inside" your sessions and logins. If that works well, it's a practical leap for automating real web tasks.

Kit Digital: when a project drags on so long you no longer know which site is the real one. The episode closes with something many have lived through: subsidized projects with endless iterations, changing points of contact, silences, duplicated sites, and the feeling of "yet another ultimatum." Beyond the venting, there is a business lesson here: management (communication, deadlines, accountability) is part of the value, not an extra.
Conclusion: the Abilities API is the "inflection point" (the rest are symptoms). WordPress 6.9 is not just a release with visual improvements. With the Abilities API, WordPress is preparing the ground for AI to stop being "a cute plugin" and become operational, connectable, and safe, with clear permissions and contracts. Around that, everything falls into place: automations in Make, price control in Stripe, tools that either integrate or get in the way, AI that genuinely saves time (Gmail), and the need to professionalize your methods so you're not constantly putting out fires. If you work with WordPress and care about the near future, this is one of those changes worth following closely... and debating with people who are also in the trenches.

Links mentioned (internal and external): Internal: Negocios y WordPress; Telegram; EliasGomez.pro. External: Abilities API in WP 6.9 (Field Guide); Abilities API introduction (Developer News); WooCommerce MCP docs; Manus Browser Operator; Gemini in Gmail: Help me write.

FAQ for rich snippets. What is the WordPress 6.9 Abilities API? It's an API for registering "abilities" (actions) of WordPress, plugins, and themes in a standard format: what they do, what they need, what they return, and what permissions they require. It serves AI, automation, and reliable integrations. How does it differ from the traditional REST API? The REST API exposes endpoints; the Abilities API adds a discoverable, standardized (self-documenting) layer so an agent or tool can learn what exists and how to call it without reading external documentation. What does this have to do with MCP (Model Context Protocol)? MCP standardizes how an agent connects to tools; Abilities provides the catalog of actions in WordPress so the agent can choose and execute operations with permissions. Does WooCommerce already support this? WooCommerce has published an MCP integration in beta, with official documentation for exposing store tools to AI clients with authentication and permissions.
What do plugins like JetEngine gain from this? They can expose actions from their own system (CPTs, metaboxes, queries, etc.) so an agent works "through the plugin" instead of inventing how it works.
In this insightful episode, explore why everyone is becoming an AI operator and why mastering prompt engineering is the key differentiator in the modern workforce. Learn how to effectively communicate with AI to get the results you want—a skill that takes just 10 minutes of proper setup versus 50 minutes of back-and-forth fixes. The conversation also delves into global marketing challenges across different cultures and geographical regions, examining real examples from Sweden to the UK to show how cultural nuances still matter despite the supposed cultural flattening from media. A major part of this discussion focuses on demystifying "revolutionary" technology buzzwords. Discover how cloud computing, APIs, and AI integration aren't actually new—they're just rebranded existing concepts with fancy names. From AWS RDS being just hosted MySQL to MCP being REST API calls, learn to see through the marketing terminology and recognize the simplicity beneath. The episode concludes with thought-provoking discussions about AI's potential for real-world task automation, existential risks, and why treating AI like a good team member with proper context and instructions leads to superior results. Perfect for: Content creators, social media managers, and anyone looking to understand modern technology without the jargon.

Try Vista Social for FREE today
Book a Demo
Follow us on Instagram
Follow us on LinkedIn
Follow us on YouTube
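The episode's "MCP is just REST API calls" quip can be made concrete: structurally, an MCP tool invocation is a JSON-RPC 2.0 message carried over an ordinary transport, the same kind of JSON payload any REST client POSTs. The `tools/call` method name comes from the MCP specification; the tool name and arguments below are invented for illustration:

```python
import json

# An MCP tool call, written out as plain data. Strip away the branding and
# it is a JSON body a client sends and a server answers -- familiar
# request/response plumbing. "get_weather" and its arguments are made up.
mcp_call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Oslo"}},
}
body = json.dumps(mcp_call)   # what actually goes over the wire
print(body)
```

The value MCP adds is standardization (one envelope and discovery convention every agent understands) rather than a new transport mechanism, which is the episode's point about rebranded fundamentals.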
In this Technology Reseller News podcast, Publisher Doug Green interviews David Klebanov, VP of Partner Solutions at Alkira, about the company's two major announcements — the launch of MCP Server and NIA Copilot — both designed to bring AI integration and natural language interaction to network automation and infrastructure management. Alkira, known for pioneering Network Infrastructure as a Service (NIaaS), delivers global connectivity and security entirely through cloud software. The company's latest innovations mark a significant step forward in bridging AI applications and network intelligence. The first announcement, MCP Server (Model Context Protocol Server), standardizes communication between AI applications and infrastructure systems. “MCP acts as a highway between AI and the network,” said Klebanov. “It allows AI models to access telemetry and configuration data in a consistent way, enabling smarter automation, faster response, and more intelligent decision-making.” This new capability complements Alkira's existing automation tools, such as REST APIs and Terraform. MCP provides a third option specifically for AI and agentic environments, allowing developers and DevOps teams to choose the right tool for their workflow — whether they're building in traditional, cloud-native, or AI-powered contexts. The second announcement, NIA (Network Infrastructure Assistant) Copilot, applies the same AI-driven intelligence directly to the network engineer's workflow. Integrated into the Alkira portal, NIA allows administrators to interact with their network using natural human language instead of complex code or command-line interfaces. Engineers can simply ask questions, issue commands, or retrieve data in conversational form — “like talking to ChatGPT, but for your network,” Klebanov explained. Both MCP and NIA are designed to work together: MCP connects AI applications to infrastructure externally, while NIA empowers network teams internally. 
Together, they deliver real-world value by making complex infrastructure management simpler, faster, and more intelligent. Klebanov emphasized that these tools also open new opportunities for the MSP and channel partner community, noting that Alkira is a 100% channel-focused organization. “AI is front and center for enterprises, and our partners now have a way to deliver real, AI-friendly network solutions — not just theory, but technology that provides immediate value.” Learn more about Alkira's AI-powered network solutions at alkira.com.
Topics covered in this episode:
* djrest2 - A small and simple REST library for Django based on class-based views.
* GitHub CLI
* caniscrape - Know before you scrape. Analyze any website's anti-bot protections in seconds.
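The "class-based views" idea that REST-on-Django libraries like djrest2 build on can be sketched without Django at all: a base class maps HTTP methods to same-named handler methods, and subclasses implement only the verbs they support. This is a toy dispatcher illustrating the pattern, not djrest2's actual API:

```python
import json

# Toy class-based view: dispatch() routes an HTTP method name to a handler
# method on the class (get/post/...), returning 405 when none exists.
class View:
    def dispatch(self, method, **kwargs):
        handler = getattr(self, method.lower(), None)
        if handler is None:
            return 405, json.dumps({"error": "method not allowed"})
        return handler(**kwargs)

class ArticleView(View):
    def get(self, pk):
        # A real view would query the database; this returns canned data.
        return 200, json.dumps({"id": pk, "title": "Hello"})

status, body = ArticleView().dispatch("GET", pk=7)
print(status, body)
```

Django's own `View.dispatch()` works the same way under the hood, which is why a small REST layer on top of it can stay small.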
On this episode of Reliability Radio, hosts Jonathan Guiney and Brendan Russ (JLL) welcome Robert Skeirik from i-care, the largest independent vibration service company globally, to discuss the limitations of traditional monitoring and the future of Maximo Predict. Robert reveals the critical flaw in using a single "overall" vibration number, explaining through a compelling "radio dial" analogy why minor but critical signals, like a looming bearing defect, are drowned out by louder faults (unbalance, misalignment).

This session details:
The Vibration Lie: Why rolling element bearing defects, the #1 cause of rotating equipment failure, are nearly impossible to catch with just overall vibration.
Beyond Vibration: How i-care's proprietary sensor uses three parameters (vibration, impacting, and lubrication effectiveness) to provide a true severity trend.
CMMS Integration: How their process leverages AI, human analysts, and a REST API connection to your CMMS (Maximo) to generate precise, validated work orders for maintenance planning.
Fit for Purpose: Why you must choose monitoring devices that are engineered to catch specific failure modes.
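The "radio dial" problem with a single overall vibration number can be demonstrated numerically: a large low-frequency unbalance tone dominates the RMS level, so adding a small high-frequency bearing-defect tone barely moves it. The amplitudes and frequencies below are illustrative, not field data:

```python
import math

# Overall vibration is essentially an RMS level across the whole spectrum.
def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

N, fs = 4096, 8192.0                     # half a second at 8.192 kHz
t = [n / fs for n in range(N)]
# Big 1x running-speed unbalance tone vs. a small bearing-defect tone.
unbalance = [5.0 * math.sin(2 * math.pi * 29.7 * x) for x in t]
defect    = [0.3 * math.sin(2 * math.pi * 1873.0 * x) for x in t]

healthy = unbalance                       # machine with unbalance only
faulty  = [u + d for u, d in zip(unbalance, defect)]  # bearing defect appears

# The overall level changes by well under 1% -- invisible on a trend chart,
# even though a distinct defect tone has appeared in the spectrum.
print(rms(healthy), rms(faulty))
```

RMS sums energy, and the defect tone carries a fraction of a percent of the unbalance tone's energy; that is why band-specific parameters (impacting, lubrication indicators) or spectral analysis are needed to catch bearing faults early.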
ICC 2025 was a clear level-up for the Ignition community. In this conversation Vlad and Dave share on-the-ground insights from a week of packed sessions, vendor showcases, and ProveIt demonstrations that brought working integrations to life. They unpack why the move to a larger venue created more chances for deep technical conversations, how the community benefited from hands-on demos that connected to a shared data backbone, and what record attendance means for the growth of modern SCADA and manufacturing data platforms. The episode then shifts into a focused discussion with Travis Cox from Inductive Automation on the launch of Ignition 8.3 and what it unlocks for builders who care about reliability, scale, and speed.

We discuss how 8.3's configuration in the file system and the expanded REST API enable real version control and DevOps workflows in day-to-day projects. We explore practical AI opportunities through MCP servers that can safely expose context and operational data to large language models, with an emphasis on operator augmentation, faster troubleshooting, and responsible guardrails. We connect the dots between OT networking fundamentals and secure architectures by highlighting the growing need for segmentation, deterministic traffic, and resilient data movement. Throughout the episode we keep the focus on what matters in plants today: clear outcomes for uptime, quality, and delivery rather than hype.

Whether you are an engineer, integrator, or an operations leader, this episode gives you an actionable snapshot of where Ignition and the broader ecosystem are heading.
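The configuration-as-files point is worth making concrete: once configuration lives on disk (or can be pulled over a REST API), ordinary git workflows take over. The sketch below mocks the export step with a canned dict (the endpoint and resource schema are invented, not Ignition's actual API); the detail that matters is deterministic serialization, so diffs show real changes rather than key-ordering noise:

```python
import json
import pathlib
import tempfile

def fetch_resource(name):
    # Stand-in for an HTTP GET against a gateway's config REST endpoint
    # (hypothetical); returns the resource definition as a dict.
    return {"name": name, "enabled": True, "pollRate": 1000}

def write_for_git(resource, repo_dir):
    # sort_keys + fixed indent gives byte-stable output, so `git diff`
    # only lights up when the configuration itself changed.
    path = pathlib.Path(repo_dir) / f"{resource['name']}.json"
    path.write_text(json.dumps(resource, indent=2, sort_keys=True) + "\n")
    return path

with tempfile.TemporaryDirectory() as repo:   # stands in for a git working tree
    p = write_for_git(fetch_resource("plc-line-1"), repo)
    print(p.read_text())
```

From there, commit, review, branch, and roll back like any other codebase; that is the version-control and DevOps workflow the file-based configuration in 8.3 is meant to enable.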
You will hear what the community is building, which 8.3 features are worth testing first, how ProveIt-style showcases help end users evaluate technologies, and why investing in networking skills remains one of the highest-ROI moves for manufacturers.

Timestamps
00:00 Welcome and ICC traditions with stickers and community shoutouts
01:25 What to expect today and why this episode includes a sit-down with Travis
02:30 First impressions of ICC 2025 tracks, vendor hall, and ProveIt showcases
05:55 New Sacramento venue experience and why more space improved conversations
07:25 Walk-up tickets, record attendance, and what that signals about growth
08:45 Why hands-on ProveIt demos mattered for real integrations and learning
12:05 Ignition 8.3 launch and what we will cover in more depth later this month
13:25 AI themes across sessions and realistic use cases for builders and operators
16:20 Why OT networking education is now a must-have skill set
18:05 DataOps and DevOps directions in Ignition 8.3 and what to trial first
23:10 Travis Cox joins with ICC takeaways and how community scale changes the game
28:35 Ignition 8.3 highlights: configuration in files, REST API, and version control workflows

About the hosts
Vlad Romanov, manufacturing modernization and data strategy consultant, co-host of Manufacturing Hub and founder of Joltek
LinkedIn https://www.linkedin.com/in/vladromanov
Joltek https://www.joltek.com
Dave Griffith, operations and digital transformation consultant, co-host of Manufacturing Hub
LinkedIn https://www.linkedin.com/in/davegriffith23
Website https://dave-griffith.com

Guest
Travis Cox, Chief Evangelist at Inductive Automation
LinkedIn https://www.linkedin.com/in/traviscox-automation
Inductive Automation https://inductiveautomation.com

Episode references and resources
Ignition 8.3 What's New https://inductiveautomation.com/ignition/whatsnew
Ignition User Manual 8.3 docs and upgrade guidance https://www.docs.inductiveautomation.com
Download Ignition free trial https://inductiveautomation.com/downloads
Inductive University free Ignition training https://inductiveuniversity.com
ICC 2025 recap https://inductiveautomation.com/blog/icc-2025-recap-we-really-did-level-up-this-year
Control Global highlights from ICC 2025 https://www.controlglobal.com/industry-news/news/55321625/highlights-from-inductive-automations-2025-icc-build-a-thon-and-award-winners
ProveIt Conference official site https://www.proveitconference.com
ProveIt at ICC background https://inductiveautomation.com/blog/proveit-showcases-are-coming-to-icc-2025

Books and learning mentioned or implied in the discussion
Networking and cybersecurity training via Inductive University https://inductiveuniversity.com
Ignition 8.3 videos and feature overviews https://www.youtube.com/watch?v=Qds7RI9-hxg

Connect with Manufacturing Hub
Apple Podcasts https://podcasts.apple.com/us/podcast/manufacturing-hub/id1546805573
Spotify https://open.spotify.com/show/1gE6glbxdYIfG6KUeOCz22

Call to action
If you attended ICC this year or tested Ignition 8.3 in your environment, share your lessons in the comments. Tell us which features you want us to deep-dive next and which ProveIt demonstrations helped you make decisions in your own stack. Subscribe for weekly conversations with practitioners who build real systems in real factories.
Shawn Tierney meets up with Connor Mason of Software Toolbox to learn about their company and products, as well as see a demo of their products in action, in this episode of The Automation Podcast. For any links related to this episode, check out the "Show Notes" located below the video.

Watch The Automation Podcast from The Automation Blog:
Listen to The Automation Podcast from The Automation Blog:

The Automation Podcast, Episode 248 Show Notes: Special thanks to Software Toolbox for sponsoring this episode so we could release it "ad free!" To learn about Software Toolbox please check out the links below: TOP Server, Cogent DataHub, Industries, Case studies, Technical blogs.

Read the transcript on The Automation Blog: (automatically generated)

Shawn Tierney (Host): Welcome back to The Automation Podcast. My name is Shawn Tierney with Insights In Automation, and I want to thank you for tuning back in this week. Now this week on the show, I meet up with Connor Mason from Software Toolbox, who gives us an overview of their product suite, and then he gives us a demo at the end. And even if you're listening, I think you're going to find the demo interesting, because Connor does a great job of talking through what he's doing on the screen. With that said, let's go ahead and jump into this week's episode with Connor Mason from Software Toolbox. I want to welcome Connor from Software Toolbox to the show. Connor, it's really exciting to have you. It's been a lot of fun talking to your team as we prepared for this, and I'm really looking forward to it, because I know your company has had so many great solutions over the years. I really just want to thank you for coming on the show. Before you jump into talking about products and technologies, could you first tell us just a little bit about yourself?

Connor Mason (Guest): Absolutely. Thanks, Shawn, for having us on. Definitely a pleasure to be a part of this. So my name is Connor Mason.
Again, I’m with Software Toolbox. We’ve been around for quite a while, so we’ll get into some of that history as well before we get into all the fun technical things. But, you know, I’ve worked a lot with the variety of OT and IT projects that are ongoing at this point. I’ve come up through our support side. It’s definitely where we grow a lot of our technical skills. It’s a big portion of our company. We’ll get into that a little more. Currently I’m a technical application consultant lead. So like I said, I help run our support team, and help with these large solutions-based projects and consultations, to find what’s best for you guys out there. There are a lot of different things in our industry that are new and exciting. It’s fast paced. Definitely keeps me busy. My background was actually in data analytics. I did not come through engineering, did not come through the automation trainings at all. So this was a whole new world for me about five years ago, and I’ve learned a lot, and I’ve really enjoyed it. So I really appreciate your time having us on here, Shawn. Shawn Tierney (Host): Well, I appreciate you coming on. I’m looking forward to what you’re gonna show us today. The audience should know I had a little preview of what they were gonna show, so I’m looking forward to it. Connor Mason (Guest): Awesome. Well, let’s jump right into it then. So like I said, we’re here at Software Toolbox. We kinda have this ongoing logo and word map of “connect everything,” and that’s really where we live. Some people have called us data plumbers in the past. It’s all these different connections where you have something, maybe legacy or something new, and you need to get it into another system. Well, how do you connect all those different points to it? And, you know, throughout all these projects we’ve worked on, there’s always something unique in those different projects.
And we try to work in between those unique areas, in between all these different integrations, and be something that people can come to as an expert, have those high-level discussions, and find something that works for them as a cost-effective solution. So outside of just, you know, products that we offer, we also have a lot of knowledge in the industry, and we wanna share that. You’ll kinda see along here there are some product names that you might recognize. Our TOP Server and OmniServer, which we’ll be talking about as well, have been around in the industry for, you know, decades at this point. And also our Symbol Factory might be something you may have heard of in other products, which actually utilize it themselves for HMI and SCADA graphics. That is our product. So you may have interacted with us without even knowing it, and I hope we get to talk more about the things that we do. So before we jump into all the fun technical things, I kind of want to talk about just the overall Software Toolbox experience, as we call it. We’re more than just someone that wants to sell you a product. We really do work with the idea of solutions. How do we provide you value and solve the problems that you are facing as the person that’s actually working out there in the field, on those operation lines, making things? And that’s really our big priority: providing a high level of knowledge, the variety of things we can work with, and then also the support. It’s very dear to me, coming up through the support team and still working in it, you know, day to day at Software Toolbox, and it’s something that has been ingrained into our heritage. Next year, 2026, will be thirty years of Software Toolbox; we were established in 1996. Through those thirty years, we have committed to supporting the people that we work with. And I can just tell you that that entire motto lives throughout everyone that’s here.
So from that, over 97% of the customers that we interact with through support say they had an awesome or great experience. Having someone that you can call that understands the products you’re working with, understands the environment you’re working in, understands the priority of certain things. If you ever have a plant shutdown, we know how stressful that is. Those are things that we work through and help people with. So these really are the core pillars of Software Toolbox and who we are, beyond just the products, and I really think this is something unique that we have continued to grow and stand upon for those thirty years. So, jumping right into some of the industry challenges we’ve been seeing over the past few years. This is also a fun one for me, talking about data analytics and tying these things together. In my prior life and education, I worked with just tons of data, and I never fully knew where it might have come from, why it was such a mess, or who structured it that way, but it was my job to get some insights out of it. And knowing what the data actually was and why it matters is a big part of actually getting value. So if you have dirty data, if you have data that’s just cluttered, that’s in silos, very often you’re not gonna get much value out of it. This was a study that we found from 2024, from Gartner Research. Businesses were asked, what are the top strategic priorities for your data analytics functions in 2024? And almost 50%, it’s right at 49, said that improving data quality was a strategic priority. That’s about half the industry just talking about data quality, and it’s exactly because of those reasons that gave me a headache in my prior life, looking at all these different things where I didn’t even know where they came from or why they were so different.
And the person that made that may be gone, may not have the context, and carrying that from the person that implemented things to the people that are making decisions is a very big task sometimes. So if we can create a better pipeline of data quality at the beginning, it makes those people’s lives a lot easier up front and allows them to get value out of that data a lot quicker. And that’s what businesses need. Shawn Tierney (Host): You know, I wanna touch on data quality. Right? Mhmm. I think a lot of us, when we think of that, we think of, you know, error detection. We think of lost connections. We think of, you know, just garbage data coming through. But I think from an analytical side, there’s a different view on that, you know, in line with what you were just saying. So when you’re talking to somebody about data quality, how do you get them to shift gears and focus in on what you’re talking about, and not, like, a quality connection to the device itself? Connor Mason (Guest): Absolutely. Yeah. I kinda live in both those worlds now. You know, I get to see that connection state. And when you’re operating in real time, that quality is also very important to you. Mhmm. And I kind of use that in the same realm. When you’re thinking in real time, if you know what’s going on in the operation and where things are running, that’s important to you. That’s the quality that you’re looking for. But you have to think beyond just real time. We’re talking about historical data. We’re talking about data that’s been stored for months and years. Think about the quality of that data once it’s made it up to that level. Are they gonna understand what was happening around those periods? Are they gonna understand what those tags even are? Are they gonna understand the conventions that you’ve implemented to give them insights into this operation? Is that a clear picture? So, yeah, you’re absolutely right.
There are two levels to this, and that is a big part of it: the real-time data and the historical, and we’re gonna get some of that into our demo as well. It’s a big area for the business and the people working in the operations. Shawn Tierney (Host): Yeah. I think quality too. I think, you know, you may have data. It’s good data. It was collected correctly. You had a good connection to the device. You got it as often as you want. But that data could really be useless. It could tell you nothing. Connor Mason (Guest): Right. Exactly. Shawn Tierney (Host): Right? It could be a flow rate on part of the process that’s irrelevant to monitoring the actual production of the product, or whatever you’re making. And, you know, I’ve known a lot of people who filled up their databases, their historians, because they just logged everything. And a lot of that data was what I would call low quality, because it’s low information value. Right? Absolutely. I’m sure you run into that too. Connor Mason (Guest): Yeah. We run into a lot of people that say, you know, I’ve got x amount of data points in my historian, and then we start digging into, well, I wanna do something with it, or I wanna migrate. Okay, well, what do you wanna achieve at the end of this? Right? And asking those questions, you know, it’s great that you have all these things historized. Are you using them? Do you have the right things historized? Are they even set up to be, you know, worked upon once they are historized, by someone outside of this landscape? And I think OT plays such a big role in this, and that’s why we start to see the convergence of the IT and OT teams, just because that communication needs to occur sooner, so we’re not just passing along, you know, low-quality or bad-quality data. And we’ll get into some of that later on.
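The two notions of quality in this exchange, connection quality versus information value, can be sketched in a few lines of Python. The `TagSample` class, the quality strings, and the threshold below are illustrative assumptions for the sketch, not a Software Toolbox API:

```python
# Illustrative only: separating "connection quality" from "information
# value" when reviewing historized tag data. Names and quality strings
# are hypothetical (loosely OPC-style), not a real product's API.
from dataclasses import dataclass
from statistics import pstdev

@dataclass
class TagSample:
    tag: str
    value: float
    quality: str  # e.g. "GOOD", "BAD", "UNCERTAIN"

def usable_values(samples, tag):
    """Keep only samples for `tag` whose connection quality is GOOD."""
    return [s.value for s in samples if s.tag == tag and s.quality == "GOOD"]

def is_informative(values, min_spread=1e-6):
    """A signal that never changes carries little analytical value."""
    return len(values) > 1 and pstdev(values) > min_spread

history = [
    TagSample("Line1.Flow", 10.0, "GOOD"),
    TagSample("Line1.Flow", 10.4, "GOOD"),
    TagSample("Line1.Flow", 0.0, "BAD"),    # lost connection: drop it
    TagSample("Line1.Spare", 5.0, "GOOD"),
    TagSample("Line1.Spare", 5.0, "GOOD"),  # never changes: low value
]

flow = usable_values(history, "Line1.Flow")
spare = usable_values(history, "Line1.Spare")
print(flow, is_informative(flow))   # [10.0, 10.4] True
print(is_informative(spare))        # False
```

Both checks matter: the `BAD` sample fails at the connection level, while the flat `Spare` tag passes every connection check yet still tells the analyst nothing.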
So to jump into some of our products and solutions, I kinda wanna give this overview of the automation pyramid. This is where we work, from things like the field-device communications, where you have certain sensors, meters, and actuators along the actual lines, wherever you’re working. We work across all the industries, so this can vary between them. From there, you work up to kind of your control area, where a lot of control engineers are working. This is where I think a lot of the audience is very familiar with PLCs: your typical names, Siemens, Rockwell, your Schneiders, that are creating these hardware products. They’re interacting with things on the operation level, and they’re generating data. That was kind of our bread and butter for a very long time, and still is: that communication level of getting data from there, but now getting it up the stack further into the pyramid, to your supervisory and MES connections, and it now also opens up to ERP. We have a lot of large corporations that have data across a variety of different solutions and also want to integrate directly down into their operation levels. There’s a lot of value in doing that, but there are also a lot of watch-outs and a lot of security concerns. So that’ll be a topic that we’ll be getting into. We also all know that the cloud is here. It’s been here, and it’s gonna continue to push its way, these cloud providers, into OT as well. There’s a lot of benefit to it, but there are also some watch-outs as this realm changes from the landscape that we’ve been used to. So there are a lot of times that we wanna get data out there. There’s value in AI agents. It’s a hot commodity right now. Analytics as well. How do we get those things directly from the shop floor up into the cloud, and how do we do that securely? These are things that we’ve been working on. We’ve had successful projects, it continues to be an interest area, and I don’t see it slowing down at all.
Now, when we begin at this bottom level of connectivity, people mostly know us for our TOP Server. This is our platform for industrial device connectivity. It’s the thing that’s talking to all those different PLCs in your plant, whether that’s brownfield or greenfield. We pretty much know that there’s never gonna be a plant with a single PLC manufacturer. There’s always gonna be something that’s slightly different. Definitely in brownfield: different engineers made different choices, things have been inherited, and you gotta keep running them. TOP Server provides this single platform to connect to a long laundry list of different PLCs. And if this sounds very familiar to KEPServerEX, well, you’re not wrong. KEPServerEX is the same exact technology that TOP Server is. What’s the difference, then, is probably the biggest question we usually get. The difference, technology-wise, is nothing. It’s all the same product, same product releases, same price, but in the back end we have been the biggest single source of KEPServerEX, or TOP Server, implementations into the market for almost two-plus decades at this point. We own this own-labeled version of KEPServerEX to provide to our customers. They interact with our support team and our solutions teams as well, and we sell it alongside the stack of other things because it fits so well. And we’ve been doing this since the early two thousands, when Kepware was a much smaller company than it is now, and we’ve had a really great relationship with them. So if you’ve enjoyed the technology of KEPServerEX, maybe there are some users out there, and if you’ve ever heard of TOP Server and that has been unclear, I hope this clarifies it. It is a great technology stack that we build upon, and we’ll get into some of that in our demo.
Now the other question is, what if you don’t have a standard communication protocol, like Modbus or an Allen-Bradley PLC? We see this a lot with, you know, testing areas, pharmaceuticals, maybe also in packaging: barcode scanners, weigh scales, printers on the line as well. They may have some form of basic communications that talks over just TCP or serial. And how do you get that information, which is really valuable still, but it’s not going through a PLC, it’s not going into your typical HMI and SCADA? It might be a very manual process for a lot of these test systems as well, how they’re collecting and analyzing the data. Well, you may have heard of our OmniServer as well. It’s been around, like I said, for a couple decades, and it’s just a proven solution where, without coding, you can go in and build a custom protocol that expects a format from that device, translates it, puts it into standard tags, and now those tags can be accessible through the open standards of OPC, or through AVEVA SuiteLink as well. And that really provides a nice combination of your standard communications and also these more custom communications that may have been done through scripting in the past. Well, you know, put this onto an actual server that can communicate through those protocols natively, and just get that data into those SCADA systems and HMIs where you need it. Shawn Tierney (Host): You know, I used that. Many years ago, I had an integrator who came to me. This is back in the RSView days. He’s like, Shawn, I’ve got, like, 20 Eurotherm devices on a 485, and they speak ASCII, and I gotta get them into RSView32. And, you know, OmniServer, I love that you could basically develop your own protocol, and we did Omega and some other devices too. It’s beautiful. And the fact that when you’re testing it, it color codes everything. So you know, hey, that part worked. The header worked.
The data worked. Oh, the trailer didn’t work, or the terminator didn’t work, or the data’s not in the right format. It was a joy to work with back then, and I can imagine it’s only gotten better since. Connor Mason (Guest): Yeah. I think it’s like a little engineer playground where you get in there and start really decoding and seeing how these devices communicate. And then once you’ve got it running, it’s one of those things that just performs, and it has saved many people from developing custom code, and having to manage that custom code and integrations, you know, for many years. So it’s one of those things that’s kinda tried, tested, and still a staple of our base-level communications. Alright. So, moving along our automation pyramid. Another part of our large offering is the Cogent DataHub. Some people may have heard of this as well. It’s been around for a good while, and it’s been part of our portfolio for a while as well. This starts building upon where we had the communication, now up to those higher echelons of the pyramid. This is gonna bring in a lot of different connectivity. If you’re listening, it’s kind of a hub-and-spoke type of concept for real-time data. We also have historical implementations. You can connect through a variety of different things: OPC, both the profiles for alarms and events, and even OPC UA’s alarms and conditions, which is still getting adoption across the industry, but it is growing as part of the OPC UA standard. We have integrations with MQTT. It can be its own MQTT broker, and it can also be an MQTT client. That has grown a lot. It’s one of those things that lives beside OPC UA, not exactly a replacement. If you ever have any questions about that, it’s definitely a topic I love to talk about.
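The kind of fixed-format ASCII parsing OmniServer handles without code, framing, field splitting, and mapping into named tags, can be illustrated with a hypothetical weigh-scale message. The STX/ETX framing and comma-separated layout below are made-up device conventions for the sketch, not an actual OmniServer configuration:

```python
# Illustrative only: parsing a fixed-format ASCII device message into
# named tag values, the way a protocol server does declaratively.
# The frame layout "<STX>W,<weight>,<unit><ETX>" is hypothetical.
STX, ETX = "\x02", "\x03"

def parse_weigh_scale(frame: str) -> dict:
    """Turn '<STX>W,12.50,kg<ETX>' into named tag values."""
    if not (frame.startswith(STX) and frame.endswith(ETX)):
        raise ValueError("bad framing (missing STX/ETX)")
    body = frame[1:-1]
    kind, value, unit = body.split(",")
    if kind != "W":
        raise ValueError(f"unexpected record type {kind!r}")
    return {"Scale.Weight": float(value), "Scale.Unit": unit}

tags = parse_weigh_scale("\x02W,12.50,kg\x03")
print(tags)   # {'Scale.Weight': 12.5, 'Scale.Unit': 'kg'}
```

The color-coded testing Shawn describes corresponds to the separate failure points here: framing, field layout, and value format each fail distinctly.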
There’s space for this to combine the benefits of both of these, and it’s so versatile and flexible for these different types of implementations. On top of that, it’s a really strong tool for conversion and aggregation. Like its name says, it’s a data hub. You send all the different information to it. It stores it in a hierarchy, with a variety of different modeling that you can do within it. That’s gonna store these values in a standard data format. Once I have data in it, on any of those different connections, I can then send data back out. So if I have anything that I know is coming in through a certain plug-in, like OPC, I can bring that in and send it out on these other ones: OPC DA over to MQTT. It could even do DDE if I’m still using that, which I probably wouldn’t suggest. But overall, there’s a lot of good benefit in having something that can also be a standardization point between all your different connections. Say I have a lot of different things, maybe a variety of OPC servers, legacy or newer. Bring those into a DataHub, and then all your other connections, your historians, your MES, your SCADAs, can connect to that single point. So they’re all getting the same data model and values from a single source, rather than going out and making many-to-many connections. A large thing that it was originally used for was getting around DCOM. That word, you know, might send some shivers down people’s spines still to this day. It’s not a fun thing to deal with DCOM, and also, with security hardening, it’s just not something that you really want to do. I’m sure there are a lot of security professionals who would advise against ever doing it. This tunneling will allow you to have a DataHub that locally talks to any DA server or client, and communicate between two DataHubs over a tunnel that pushes the data just over TCP. It takes away all the COM wrappers, and now you just have values that get streamed in between.
Now you don’t have to configure any DCOM at all, and it’s all local. A lot of people use it when transitioning between products, where maybe the server only supports OPC DA and the client is now supporting OPC UA, and they can’t change it yet. This has allowed them to implement a solution quickly and at a cost-effective price, without ripping everything out. Shawn Tierney (Host): You know, I wanna ask you too. I can see, because this thing is a data hub. So if you’re listening and not watching, you’re not gonna see, you know, server, client, UA, DA, broker, server, client, just all these different things up here on the slide. How does somebody find out if it does what they need? I mean, do you guys have a line they can call to say, I wanna do this to this, is that something DataHub can do? Or is there a demo? What would you recommend to somebody? Connor Mason (Guest): Absolutely. Reach out to us. We have a lot of content online, and it’s not behind any paywall or sign-in links even. You can always go to our website. It’s just softwaretoolbox.com. Mhmm. And that’s gonna get you to our product pages. You can download any product directly from there. They have demo timers. So typically, with Cogent DataHub, after an hour it will stop, and you can just rerun it. And then call our team. Yeah. We have a solutions team that can work with you on, hey, what do I need? Then our support team, if you run into any issues, can help you troubleshoot as well. So I’ll have some contact information at the end that’ll get people to, you know, where they need to go. But you’re absolutely right, Shawn. Because this is so versatile, everyone’s use case is usually something a little bit different. And the best people to come talk to about that is us, because we’ve seen all those differences.
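The tunneling idea, streaming tag values between two endpoints over plain TCP instead of DCOM, can be reduced to a small loopback sketch. The newline-delimited JSON wire format here is an assumption for illustration, not Cogent DataHub's actual tunnel protocol:

```python
# Illustrative only: one endpoint streams tag updates to another over
# plain TCP, with no COM/DCOM involved. Wire format (one JSON object
# per line) is a made-up stand-in for a real tunnel protocol.
import json, socket, threading

def tunnel_server(sock, updates):
    conn, _ = sock.accept()
    with conn:
        for tag, value in updates:  # stream each update as one line
            line = json.dumps({"tag": tag, "value": value}) + "\n"
            conn.sendall(line.encode())

server = socket.socket()
server.bind(("127.0.0.1", 0))   # ephemeral port, loopback only
server.listen(1)
port = server.getsockname()[1]

updates = [("Line1.Flow", 10.2), ("Line1.Pressure", 3.4)]
threading.Thread(target=tunnel_server, args=(server, updates),
                 daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
received = [json.loads(line) for line in client.makefile()]
client.close()
print(received)
```

Each side only ever talks to its local DataHub; the socket in the middle is the whole "tunnel," which is why no DCOM configuration is needed.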
Shawn Tierney (Host): I think a lot of people run into the fact that they have a problem. Maybe it’s the one you said, where they have OPC UA and it needs to connect to an OPC DA client. And a lot of times, they’re a little gun-shy to buy a license, because they wanna make sure it’s gonna do exactly what they need first. And I think that’s where it helps having your people who can, you know, answer their questions, saying, yes, we can do that, or no, we can’t do that, or a demo that they can download and run for an hour at a time to actually do a proof of concept for the boss who’s gonna sign off on purchasing this. And then the other thing too: a lot of products like this have options, and you wanna make sure you’re ticking the right boxes when you buy your license, because you don’t wanna buy something you’re not gonna use. You wanna buy the exact pieces you need. So I highly recommend, I mean, this product just does so much. Like, I have in my mind, like, five things I wanna ask right now, but I’m not gonna. But, yeah, definitely, when it comes to a product like this, it’s great to touch base with these folks. They’re super friendly and helpful, and they’ll put you in the right direction. Connor Mason (Guest): Yeah. I can tell you, working in support, selling someone a solution that doesn’t work is not something I wanna be doing. That’s a bad day. Right. Exactly. Yeah. And we work very closely with anyone that’s looking at products. You know, me being a technical product manager as well, I’m engaged in those conversations. And Mhmm. Yeah. If you need a demo license, reach out to us to extend that. We wanna make sure that you are buying something that provides you value. Now, kind of moving on into a similar realm. This is one of our still somewhat newer offerings, I’d say, but it’s been around five-plus years, and it’s really grown.
And as I said here, it’s called OPC Router, and it’s not a networking tool. A lot of people may kinda get that idea. It’s more of a term about, again, all these different types of connections: how do you route them in different ways? It separates itself from the Cogent DataHub by acting at this base level as a visual workflow that you can assign various tasks to. So if I have certain events that occur, I may wanna do some processing on them before I just send data along, where the DataHub is really working in between, converting and streaming data over real-time connections. This gives you kind of a playground to work in: if I have certain tasks that are occurring, maybe through a database, that I wanna trigger off of a certain value based on my SCADA system, well, you can build that in these different workflows to execute exactly what you need. Very, very flexible. Again, it has all these different types of connections. A very unique one that has also grown into that OT/IT convergence is that it can be a REST API server and client as well. So I can be sending out requests to RESTful servers, which we’re seeing hosted in a lot of new applications, because I wanna get data out of them. Or, once I have consumed a variety of data, I can become the REST server in OPC Router and offer that to other applications to request data from it. So, again, it can kind of be that centralized area of information. The other thing, as we talked about in the automation pyramid, is it has connections directly into SAP and ERP systems. So if you have work orders, if you have materials that you wanna continue to track, and maybe trigger things based off information from your operation floor, via PLCs tracking how they’re using things along the line, and that needs to match up with what the SAP system has for the amount of materials you have, this can be that bridge.
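The REST-server role described here, exposing already-consumed tag data to other applications over HTTP, looks roughly like this in stdlib Python. The `/tags/<name>` endpoint path and JSON shape are assumptions for the sketch, not OPC Router's actual API:

```python
# Illustrative only: a tiny HTTP server offering cached tag values as
# JSON, the pattern of "consume data, then become the REST server."
import json, threading, urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

tag_cache = {"Line1.Flow": 10.2, "Line1.Pressure": 3.4}  # data already consumed

class TagHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # GET /tags/<name> returns one cached value as JSON
        name = self.path.removeprefix("/tags/")
        if name in tag_cache:
            body = json.dumps({"tag": name, "value": tag_cache[name]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

httpd = HTTPServer(("127.0.0.1", 0), TagHandler)
threading.Thread(target=httpd.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{httpd.server_port}/tags/Line1.Flow"
reply = json.loads(urllib.request.urlopen(url).read())
httpd.shutdown()
print(reply)   # {'tag': 'Line1.Flow', 'value': 10.2}
```

Any application that can issue an HTTP GET can now pull shop-floor values without speaking OPC at all, which is the OT/IT convergence point being made.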
It really is built with the mindset of the OT world as well. So we kinda say this helps empower the OT level, because we’re now giving them the tools around what they understand: what’s occurring in their operations. And what could you do with a tool like this that allows you to create automated workflows based off certain values and certain events, and automate some of the things that you may be doing manually, or doing in a very convoluted way through a variety of solutions? So this is one of those products that’s very advanced in the things it supports. Linux and Docker containers are definitely a hot topic, rightfully so, and this can run deployed on a Docker container as well. We’ve seen that with the IT folks, who really enjoy being able to control their deployments. It allows you to update easily, and allows you to control and spin up new containers as well. This gives you a lot of flexibility to deploy and manage these systems. Shawn Tierney (Host): You know, I may wanna have you back on to talk about this. There’s an old product called Rascal that I used to use. It was a transaction manager, and based on data changing, or on a timer as a trigger, it could take data either from the PLC to the database or from the database to the PLC, and it would work with stored procedures. And this seems like it hits all those points. And it sounds like it’s a visual, like you said, right there on the slide, visual workflow builder. Connor Mason (Guest): Yep. Shawn Tierney (Host): So you really piqued my interest with this one, and it may be something we wanna come back to and revisit in the future, because I know that that older product was very useful, and, you know, it really solved a lot of old applications back in the day. Connor Mason (Guest): Yeah. Absolutely. And this just takes that on and builds even more.
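The transaction-manager pattern Shawn recalls, where a change in a PLC value triggers a database write, can be sketched with an in-memory SQLite table. The polling loop, tag names, and table layout are illustrative assumptions, not the behavior of any product named here:

```python
# Illustrative only: a change-of-value trigger that logs a row to a
# database only when a monitored value actually changes between polls.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE batch_log (tag TEXT, value REAL)")

def on_change_trigger(readings, last_seen, db):
    """Insert a row for each tag whose value differs from last poll."""
    for tag, value in readings:
        if last_seen.get(tag) != value:
            db.execute("INSERT INTO batch_log VALUES (?, ?)", (tag, value))
            last_seen[tag] = value

last_seen = {}
# Three poll cycles: the repeated 7.0 reading should not log twice.
cycles = ([("Mixer.Speed", 7.0)],
          [("Mixer.Speed", 7.0)],
          [("Mixer.Speed", 7.5)])
for readings in cycles:
    on_change_trigger(readings, last_seen, db)

rows = db.execute("SELECT * FROM batch_log").fetchall()
print(rows)   # [('Mixer.Speed', 7.0), ('Mixer.Speed', 7.5)]
```

The same trigger could just as easily call a stored procedure or write back toward the PLC; the core of the pattern is the change detection, not the destination.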
If anyone was listening at the beginning of this year, or the last year or two, there was a conference called Prove It that was very big in the industry. We were there too, and we presented on stage a solution that we had. I highly recommend going and searching for that. It’s on our web pages, and it’s also on their YouTube links. It’s called Prove It, and OPC Router was a big part of that in the back end. I would love to dive in and show you the really unique things. As a quick overview, we were able to use Google AI Vision to take camera data and detect if someone was wearing a hard hat. All the logic behind getting that information to Google AI Vision was through REST with OPC Router. Then we were parsing that information back through that connection and providing it back to the PLCs. So we go all the way from a camera, to a PLC controlling a light stack, up to Google AI Vision through OPC Router, all on hotel Wi-Fi. It’s a very, very fun presentation, and I think our team did a really great job. So, a pretty new offering I wanna highlight is our DataCaster. This is an actual piece of hardware. You know, Software Toolbox does have some hardware as well. It’s just part of the nature of this environment, of how we mesh in between things. But the idea is that there are a lot of different use cases for HMI and SCADA. They have grown so much from what they used to be, and they’re a very core part of the automation stack. Now, a lot of times these are doing so many things beyond that as well. What we found is that in different areas of operations, you may not need all that different control. You may not even have the space to make up a whole workstation for it. What the DataCaster does is it just simply plugs into any network and into an HDMI-compatible display, and it gives you a very easy-to-configure workspace to put a few key metrics onto a screen.
So, as for the different things: you can connect directly to PLCs like Allen-Bradley, you can connect to SQL databases, and you can also connect to REST APIs to gather the data from these different sources, and build a kind of easy-to-view KPI dashboard, in a way. So if you’re on an operation line and you wanna look at your current run rate, maybe you have certain things in the PLC tags, you know, flow and pressure, that are very important for those operators to see. They may not even have the capacity to be interacting with anything. They just need visualizations of what’s going on. This product can just be installed in, you know, industrial areas, with any type of display that you can easily access, and give them something that they can easily look at. It’s configured all through a web browser to display what you want. You can put on different colors based on levels of values as well. It’s, I feel, a very simple thing. Sometimes it seems so simple, but those might be the things that provide value on the actual operation floor. This is, for anyone that’s watching, kind of a quick view of a very simple screen. What we’re showing here is what it would look like from all the different data sources: talking directly to a ControlLogix PLC, talking to SQL databases, Micro 800s, a REST client, and, what’s coming very soon, definitely by the end of this year, is OPC UA support. So any OPC UA server that’s out there that already has your PLC data, etcetera, this could also connect to that and get values from there. Shawn Tierney (Host): Can I, can you make it, here I go. Can you make it so it, like, changes, like, pages every few seconds? Connor Mason (Guest): Right now, it is a single page, but this is, like I said, a very new product, so we’re taking any feedback. If, yeah, if there’s this type of slideshow cycle that would be, you know, valuable to anyone out there, let us know.
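The "different colors based on levels of values" idea can be pictured as a plain threshold function. The band edges, margin, and color names below are made up for illustration; the DataCaster's actual configuration options may differ:

```python
# Illustrative only: mapping a KPI value to a display color based on
# where it sits relative to a configured operating band.
def kpi_color(value, low, high):
    """Green inside the band, yellow near the edges, red outside."""
    if value < low or value > high:
        return "red"
    margin = 0.1 * (high - low)   # warn within 10% of either edge
    if value < low + margin or value > high - margin:
        return "yellow"
    return "green"

print(kpi_color(50, 0, 100))    # green
print(kpi_color(97, 0, 100))    # yellow
print(kpi_color(120, 0, 100))   # red
```

An operator glancing at a wall display gets the state from the color alone, which is the whole point of a view-only KPI screen.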
We’re definitely always interested to see what’s valuable to the people that are actually working out at these operation sites. Yeah. Shawn Tierney (Host): A lot of kiosks you see when you’re traveling, they’ll say, like, Line One, I’ll just throw that out there, and that’ll be on there for five seconds, and then it’ll go to Line Two, and that’ll be on there for five seconds, and then Line Three, you know. I just mentioned that because I can see that being a question that I would get from somebody who is asking me about it. Connor Mason (Guest): Oh, great question. Appreciate it. Alright. So now we’re gonna set time for a little hands-on demo. For anyone that’s just listening, I’m gonna talk about this at a high level and walk through everything. But the idea is that we have a few different PLCs, a very common Allen-Bradley and just a Siemens S7-1500, that are in our office, pretty close to me, on the other side of the wall, actually. We’re gonna first start by connecting them to our TOP Server, like we talked about. This is our industrial communication server that offers OPC DA, OPC UA, and SuiteLink connectivity as well. And then we’re gonna bring this into our Cogent DataHub. This, we talked about, is getting those values up to these higher levels. What we’ll be doing is also tunneling the data. We talked about being able to share data through the DataHubs themselves. I’ll kinda explain why we’re doing that here and the value you can add. And then we’re also gonna showcase adding MQTT onto this level. Taking data now just from these two PLCs that are sitting on a rack, I can automatically make all that information available in the MQTT broker. So any MQTT client that’s out there that wants to subscribe to that data now has it accessible. And I’ve created this all through a really simple workflow. We also have some databases connected.
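The broker role in the demo plan, where PLC data is published once and any interested client subscribes, can be reduced to an in-process sketch. Real MQTT adds a wire protocol, QoS levels, and topic wildcards; none of that is modeled here:

```python
# Illustrative only: a toy publish/subscribe broker showing the
# decoupling MQTT provides, with "retained" last-known values so a
# late subscriber still gets the current state.
from collections import defaultdict

class MiniBroker:
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> callbacks
        self.retained = {}                    # topic -> last value

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)
        if topic in self.retained:            # deliver last known value
            callback(topic, self.retained[topic])

    def publish(self, topic, value):
        self.retained[topic] = value
        for cb in self.subscribers[topic]:
            cb(topic, value)

broker = MiniBroker()
seen = []
broker.publish("plant1/Line1/Flow", 10.2)     # PLC data arrives first
broker.subscribe("plant1/Line1/Flow",
                 lambda t, v: seen.append((t, v)))
broker.publish("plant1/Line1/Flow", 10.4)
print(seen)
```

The publisher never knows who is listening, which is why adding a new consumer to the demo setup requires no change on the PLC or DataHub side.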
InfluxDB, which we install with Cogent DataHub, has a free visualization tool that kinda just helps you see what’s going on in your processes. I wanna showcase a little bit of that as well. Alright. So jumping into our demo, where we first start off here is our TOP Server. Like I mentioned before, if anyone has worked with KEPServerEX in the past, this is gonna look very similar — because it is the same technology underneath. The first thing that I wanted to establish in our demo was our connection to our PLCs. I have a few here; we’re only gonna use the Allen-Bradley and the Siemens for the time that we have in our demo. How this builds out as a platform is you create these different channels and the device connections beneath them. This is gonna be your physical connection to them — either a TCP/IP connection or maybe a serial connection as well. We have support for all of them; it really is a long list. Anyone watching can see all the different drivers that we offer. By bringing this into a single platform, you can have all your connectivity based here. All those different connections that you have up the stack — your SCADA, your historians, even MES — they can all go to a single source. That makes management, troubleshooting, all of those, a bit easier as well. So one of the first things I did here — I have this built out, but I’ll kinda walk through what you would typically do — you have your Allen-Bradley ControlLogix Ethernet driver here first. I have some IPs in here I won’t show, but regardless, we have our driver here, and then we have a set of tags. These are all the global tags in the programming of the PLC. How I got these to map automatically is that in our driver we’re able to create tags automatically. So you’re able to send a command to that device and ask for its entire tag database.
It can come back, provide all that, map it out for you, and create those tags as well. This saves a lot of time versus an engineer having to go in and address all the individual items themselves. So once it’s defined in the PLC project, you’re able to bring this all in automatically. I’ll show now how easy that makes connecting to something like the Cogent DataHub. In a very similar fashion, we have a connection over here to the Siemens PLC that I also have. You can see beneath it all these different tag structures, and this was created the exact same way. Where those PLCs support it, you can do automatic tag generation, bring in all the structure that you’ve already built out in your PLC programming, and make it available on this OPC server as well. So that’s really the basis: we first need to establish communications to these PLCs, get that tag data, and then decide what we wanna do with it. So in this demo, what I wanted to bring up next was the Cogent DataHub. Here I see a very similar kind of layout. We have a different set of plugins on the left side. For anyone listening, the Cogent DataHub again is kind of our aggregation and conversion tool — all these different types of protocols like OPC UA, OPC DA, and OPC A&E for alarms and events. We also support OPC Alarms and Conditions, which is the newer profile for alarms in OPC UA. We have a variety of different ways that you can get data out of things and data into the DataHub. We can also do bridging. This concept is how you share data between different points. So let’s say I had a connection to one OPC server, and it was communicating to a certain PLC, and there were certain registers I was getting data from. Well, now I also wanna connect to a different OPC server that has an entirely different brand of PLC behind it. And maybe I wanna share data between them directly. Well, with this software, I can just bridge those points between them.
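[Editor’s note] The bridging idea Connor describes — mirroring a point from one server’s namespace into another’s — can be pictured with a few lines of Python. This is only a sketch: the dict-backed “servers” and the tag names (`Line1.FlowRate`, `Mixer.FlowSetpoint`) are hypothetical stand-ins, not DataHub’s actual API, which bridges live OPC connections.

```python
# Minimal sketch of point bridging: copy a value from one tag
# namespace into another, the way a bridge links two OPC servers.
# Dict-backed "servers" stand in for live OPC connections.

def bridge(source: dict, dest: dict, src_tag: str, dst_tag: str) -> None:
    """Copy the current value of src_tag in source to dst_tag in dest."""
    dest[dst_tag] = source[src_tag]

# Two hypothetical servers with different PLC brands behind them
server_a = {"Line1.FlowRate": 42.5}
server_b = {"Mixer.FlowSetpoint": 0.0}

bridge(server_a, server_b, "Line1.FlowRate", "Mixer.FlowSetpoint")
print(server_b["Mixer.FlowSetpoint"])  # 42.5
```

In the real product this mapping is configured in the UI, and the write goes all the way down to the destination PLC’s register — no hardwiring between the two machines.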
Once they’re in the DataHub, I can do kind of whatever I want with them. I can then allow them to write between those PLCs and share data that way, so you’re not having to do any type of hardwiring directly between them, and they’re now able to communicate with each other. Through the standards of OPC and these various communication levels, I can integrate them together. Shawn Tierney (Host): You know, you bring up a good point. When you do something like that, is there any heartbeat? Like, under one of these topics, are there tags we can use that come from DataHub itself that can be sent to the destination — like a heartbeat to verify the transactions? Connor Mason (Guest): Yeah, absolutely. There’s a pretty strong scripting engine in there, and I have done that in the past, where you can make internal tags. That could be a timer, it could be a counter — it just allows you to create your own tags that you could share the same way, through a bridge connection, to a PLC. So yeah, there are definitely people that have those use cases where they wanna get something to just track on the software side and get it out to those hardware PLCs. Absolutely. Shawn Tierney (Host): I mean, when you send data out of the PLC, the PLC doesn’t care who takes its data. But when you’re getting data into the PLC, you wanna make sure it’s updating and it’s fresh. So, you know, you throw a counter in there with the scripting and have that — as long as you see it incrementing, you know you’ve got good data coming in. That’s a good feature. Connor Mason (Guest): Absolutely. You know, another big one is the redundancy. What this does, beyond just OPC, is we can make redundancy for basically anything that has two instances of it running — any of these different connections.
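[Editor’s note] Shawn’s heartbeat-counter pattern is worth sketching. Assuming nothing about DataHub’s scripting API, the logic on both sides looks roughly like this — an internal tag that increments each scan, and a receiver that treats a stalled counter as stale data. The class and threshold here are illustrative, not product features.

```python
import time

class HeartbeatTag:
    """Hypothetical internal tag: increments each scan so a downstream
    PLC (or anything watching it) can confirm the stream is alive."""
    def __init__(self) -> None:
        self.value = 0
        self.last_update = time.monotonic()

    def tick(self) -> None:
        self.value += 1
        self.last_update = time.monotonic()

def is_fresh(tag: HeartbeatTag, max_age_s: float = 5.0) -> bool:
    # Receiving side: a counter that hasn't moved recently means
    # the data feeding it can no longer be trusted.
    return (time.monotonic() - tag.last_update) <= max_age_s

hb = HeartbeatTag()
hb.tick()
print(hb.value, is_fresh(hb))  # 1 True
```

The same check is usually implemented in PLC ladder logic as a watchdog timer that the incoming counter resets.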
How it’s unique is that it just looks at the buckets of data that you create. For example, if I do have two different OPC servers and I put them into two areas — let’s say OPC server one and OPC server two — I can now create an OPC redundancy data bucket. And now any client that connects externally and wants that data is gonna go talk to that bucket of data. That bucket is going to automatically change between sources as things go down and come back up, and the client would never know that it happened unless you wanted it to. There are internal tags to show what the current source is and so on, but the idea is to make this transparent — regardless of what’s going on in the operations, if I have this set up, I can have my external applications just reading from a single source without knowing that there are two things behind it actually controlling that. Very important for, you know, historian connections where you wanna have a full, complete picture of the data that’s coming in. You’re able to make a redundant connection to two different servers and then let that historian talk to a single point, so it doesn’t have to control the switching back and forth — it will just see the data flow seamlessly from whichever one is up at the time. Beyond that, there are quite a few other things in here; I don’t think we have time to cover all of them. But for our demo, what I wanna focus on first is our OPC UA connection. This allows us both to act as an OPC UA client to get data from any servers out there, like our TOP Server, and also to act as an OPC UA server itself. So if anything’s coming in — maybe you have multiple connections to different servers, multiple connections to other things that aren’t OPC as well — I can now provide all this data automatically in my own namespace to allow things to connect to me as well.
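[Editor’s note] The “redundancy bucket” behavior described above can be sketched as follows. The source callables and tag names are made up for illustration; the point is that the client reads one bucket while the bucket handles the failover and exposes the active source as an internal tag.

```python
class RedundantBucket:
    """Sketch of the data-bucket failover: clients read the bucket;
    the bucket switches sources when the active one fails. Sources
    here are plain callables, not real OPC servers."""
    def __init__(self, sources) -> None:
        self.sources = sources
        self.active = 0  # like the internal 'current source' tag

    def read(self, tag):
        # Try the active source; on failure, roll to the next one.
        for _ in range(len(self.sources)):
            try:
                return self.sources[self.active](tag)
            except ConnectionError:
                self.active = (self.active + 1) % len(self.sources)
        raise ConnectionError("all sources down")

def server1(tag):
    raise ConnectionError("server1 offline")   # simulate an outage

def server2(tag):
    return {"Pressure": 101.3}[tag]

bucket = RedundantBucket([server1, server2])
print(bucket.read("Pressure"), bucket.active)  # 101.3 1
```

The client asked one bucket for one tag and never saw the outage — only the internal `active` index records that a switch happened.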
And that’s part of that aggregation feature and the topic I was mentioning before. So with that, I have a connection here. It’s pulling all its data from my TOP Server. I have a few different tags from my Allen-Bradley and my Siemens PLC selected. The next part of this, like I was mentioning, is the tunneling. Like I said, this is very popular to get around DCOM issues, but there are a lot of reasons why you still may use it beyond just the headache that DCOM was. What this runs on is a TCP stream that takes each data point as a value, a quality, and a timestamp, and it can mirror those to another DataHub instance. So where previously, if I wanted to get things across a network from my OT side, I would have to come in and allow an open port into my network for any OPC UA clients across the network to access, I can now actually change the direction of this and tunnel data out of my network without opening up any ports. This is really big for security. Anyone out there who’s a security professional, or working as an engineer who has to deal with IT and security a lot — you don’t wanna have an open port, especially into your operations and OT side. So this allows you to change that direction of flow and push data out into another area, like a DMZ computer or up to a business-level computer as well. The other thing I have configured in this demo: the benefit of having that tunneling streaming data across this connection is that I can also store this data locally in an InfluxDB database. The purpose of that is that I can actually historize this and then, if this connection ever goes down, backfill any information that was lost while the tunnel connection was down. Without this added layer, in real-time data scenarios like OPC UA, unless you have historical access, you would lose a lot of data if that connection ever went down.
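[Editor’s note] Connor describes the tunnel as a TCP stream carrying each point as value, quality, and timestamp (VQT). A rough picture of that payload, with a `BytesIO` standing in for the socket — the JSON-lines framing here is purely illustrative, not DataHub’s actual wire protocol:

```python
import io
import json
import time

def send_point(stream, name, value, quality="Good"):
    # Each point travels as value + quality + timestamp (VQT).
    record = {"name": name, "value": value,
              "quality": quality, "ts": time.time()}
    stream.write((json.dumps(record) + "\n").encode())

def recv_points(stream):
    # The far-side DataHub instance rebuilds its point list
    # from the stream, one record per line.
    stream.seek(0)
    return [json.loads(line) for line in stream.read().decode().splitlines()]

tunnel = io.BytesIO()  # stand-in for the TCP connection
send_point(tunnel, "Siemens.Counter1", 7)
mirrored = recv_points(tunnel)
print(mirrored[0]["name"], mirrored[0]["value"], mirrored[0]["quality"])
# Siemens.Counter1 7 Good
```

Because the OT side initiates the outbound connection, the mirror works without any inbound port being opened on the OT network — which is the security win being described.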
But with this, I can actually use the back end of this InfluxDB to buffer any values, and when my connection comes back up, pass them along that stream again. And if I have anything that’s historically connected — another InfluxDB, maybe a PI historian, an AVEVA historian, any historian offering out there that can allow that connection — I can then provide all those records that were originally missed and backfill them into those systems. So I’ve switched over to a second machine. It’s gonna look very similar here as well. This also has an instance of the Cogent DataHub running. For anyone not watching, what we actually have on this side is the portion of the tunneler that’s sitting here and listening for any data coming in. So on my first machine, I was able to connect my PLCs and gather that information into Cogent DataHub, and now I’m pushing that information across the network into a separate machine that’s sitting here listening to gather it. So what I can quickly do is just make sure I have all my data here. I have these different points from my Allen-Bradley PLC — a few different simulation demo points like temperature, pressure, tank level, a few statuses — and all of this is updating directly through that stream as the PLC updates it. I also have my Siemens controller, with some current values and a few different counter tags as well. All of this, again, is being directly streamed through that tunnel. I’m not connecting to an OPC server at all on this side — I can show you that here, there are no connections configured — and I’m not talking to the PLCs directly on this machine either. But I’m able to pass all the information through without opening up any ports on my OT demo machine, per se. So what’s the benefit of that? Well, again, security. Also, the ability to do the store-and-forward mechanism. On the other side, I was logging directly to an InfluxDB.
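[Editor’s note] The store-and-forward behavior — buffer while the tunnel is down, replay with original timestamps on reconnect — reduces to a small amount of logic. In the demo the buffer is InfluxDB; here a deque stands in, and all names are hypothetical:

```python
import collections

class StoreAndForward:
    """Sketch of backfill: while the tunnel is down, points queue in
    a local buffer (InfluxDB in the demo, a deque here); on reconnect
    they are replayed in order with their original timestamps."""
    def __init__(self, deliver) -> None:
        self.deliver = deliver          # callable sending to the far side
        self.connected = True
        self.buffer = collections.deque()

    def push(self, ts, value):
        if self.connected:
            self.deliver(ts, value)
        else:
            self.buffer.append((ts, value))

    def reconnect(self):
        self.connected = True
        while self.buffer:              # backfill the gap, oldest first
            self.deliver(*self.buffer.popleft())

received = []
saf = StoreAndForward(lambda ts, v: received.append((ts, v)))
saf.push(1, 10.0)
saf.connected = False                   # tunnel drops
saf.push(2, 11.0)
saf.push(3, 12.0)
saf.reconnect()                         # historian gets the full picture
print(received)  # [(1, 10.0), (2, 11.0), (3, 12.0)]
```

The key detail is that delivery order and timestamps survive the outage, so a downstream historian sees a gapless series rather than a hole.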
That could be my buffer, and I was able to configure it so that if any values were lost, it would store and forward them across the network. So now on this side, if I pull up Chronograf, which is a free visualization tool that installs with the DataHub as well, I can see some very nice visual diagrams of what is going on with this data. I have a pressure that is just a simulator in this Allen-Bradley PLC — it ramps up and comes back down. It’s not actually connected to anything reading a real pressure, but you can see it over time, and I can change through these different windows of time. I might go back a little far, but I have a lot of data stored in here. For a while during my test, I turned this off and made it fail, but then I came back in and it was able to recreate all the data and backfill it as well. So through these views, I can see that as data disconnects and comes back on, I still have a very cyclical view of the data, because it was able to recover and store and forward from that source. Like I said, Shawn, data quality is a big thing in this industry. It’s a big thing both for people on the operations side and for people making decisions in the business layer. So being able to have a full picture, without gaps, is definitely something you should be prioritizing when you can. Shawn Tierney (Host): Now, what we’re seeing here is you’re using InfluxDB on this destination PC, or IT-side PC, and Chronograf, which was that utility or package that gets installed — it’s free. But you don’t actually have to use that. You could have sent this into an OSIsoft PI or — Exactly. — somebody else’s historian, right? Can you name some of the historians you work with? I know OSIsoft PI. Connor Mason (Guest): Yeah. Yeah. Absolutely. So there are quite a few different ones.
As far as what we support in the DataHub natively: Amazon Kinesis, the cloud-hosted streaming service that we can push to from here as well; AVEVA Historian; AVEVA Insight; Apache Kafka — this is kind of a newer one that used to be a very IT-oriented solution, now getting into OT. It’s a similar structure where things are stored in different topics that we can stream to. On top of that, just regular old ODBC connections — that opens up a lot of different ways you can do it — or even the old classic OPC HDA. So if you have any historian that can act as an OPC HDA connection, we can also stream to it through there. Shawn Tierney (Host): Excellent. That’s a great list. Connor Mason (Guest): The other thing I wanna show while we still have some time here is that MQTT component. This is really growing, and it’s gonna continue to be a part of the industrial automation technology stack and conversations moving forward — for streaming data from devices and edge devices up into different layers, first into the OT, then maybe out to IT and our business levels as well, and definitely into the cloud, where we’re seeing a lot of growth. Like I mentioned with DataHub, the big benefit is I have all these different connections and I can consume all this data. Well, I can also act as an MQTT broker. What a broker typically does in MQTT is just route and share data. It’s kind of that central point where things come to it to either say, “Hey, I’m giving you some new values, share them with someone else,” or, “Hey, I need these values, can you give me them?” It really fits in super well with what this product is at its core. So all I have to do here is just enable it. What that now allows — I have an example in MQTT Explorer. If anyone has worked with MQTT, you’re probably familiar with this. There’s nothing else I configured beyond just enabling the broker.
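[Editor’s note] The “regular old ODBC connection” route can be pictured with Python’s built-in sqlite3 standing in for an ODBC data source: each point becomes a timestamped row the historian can query later. Table and column names are hypothetical; a real deployment would use the product’s own logging configuration against an actual ODBC DSN.

```python
import sqlite3
import time

# In-memory database stands in for an ODBC-connected historian.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE history (tag TEXT, ts REAL, value REAL)")

def log_point(tag, value):
    # One timestamped row per sampled value.
    conn.execute("INSERT INTO history VALUES (?, ?, ?)",
                 (tag, time.time(), value))

log_point("Line1.Pressure", 101.3)
log_point("Line1.Pressure", 101.6)
rows = conn.execute(
    "SELECT COUNT(*), MAX(value) FROM history WHERE tag = ?",
    ("Line1.Pressure",)).fetchone()
print(rows)  # (2, 101.6)
```

The appeal of the ODBC path is exactly this genericity: any database that exposes an ODBC driver can receive the same insert stream.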
And you can see within this structure, I have all the same data that was in my DataHub already — the same things I was collecting from my PLCs and TOP Server. Now I’ve exposed these as MQTT topics, and I have them in JSON format with the value and timestamp. You can even see, like, a little trend here kind of matching what we saw in Influx. And now this enables all those different cloud connectors that wanna speak this language to do it seamlessly. Shawn Tierney (Host): So you didn’t have to set up the PLCs a second time to do this? Nope. Connor Mason (Guest): Not at all. Shawn Tierney (Host): You just enabled this, and now the data’s going this way as well. Exactly. Connor Mason (Guest): Yeah. That’s a really strong point of the Cogent DataHub: once you have everything in its structure and model, you just enable it to use any of these different connections. You can get really, really creative with these different things — like we talked about with the bridging aspect, getting into different systems, even writing down to the PLCs. You can make custom notifications and email alerts based on any of these values. You could even take something like this MQTT connection, tunnel it across to another DataHub, and maybe then convert it to OPC DA. Now you’ve made a new connection over to something very legacy as well. Shawn Tierney (Host): Yeah. I mean, the options here are just pretty amazing — all the different things that can be done. Connor Mason (Guest): Absolutely. Well, you know, I wanna jump back into some of our presentation here while we’ve still got the time. Now that we’re kinda done with our demo — there are so many different ways that you can use these different tools.
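[Editor’s note] What an MQTT client like MQTT Explorer sees in the demo — a topic per tag, with a JSON payload carrying the value and timestamp — can be sketched like so. The topic layout and field names are assumptions for illustration; check your broker’s actual output for the real schema.

```python
import json
import time

def to_mqtt_payload(value):
    # JSON body published under each tag's topic: value + timestamp.
    return json.dumps({"value": value, "timestamp": time.time()})

# Hypothetical topic path mirroring the point's place in the namespace
topic = "DataHub/TopServer/AllenBradley/TankLevel"
payload = to_mqtt_payload(73.2)

decoded = json.loads(payload)
print(topic, decoded["value"])  # DataHub/TopServer/AllenBradley/TankLevel 73.2
```

A real client would hand `topic` and `payload` to something like paho-mqtt’s `publish()`; any subscriber to that topic then receives the same JSON the broker shows in the explorer view.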
This is just a really simple view of something that used to be very simple — just connecting OPC servers to a variety of different connections — expanding onto that with the store and forward, the local Influx usage, and getting out to things like MQTT as well. But there’s a lot more you can do with these solutions. So, like Shawn said, reach out to us. We’re happy to engage and see what we can help you with. I have a few other things before we wrap up. Overall, we’ve worked across nearly every industry. We have installations across the globe on all continents. And like I said, we’ve been around for pushing thirty years next year, so we’ve seen a lot of different things. We really wanna talk to anyone out there that maybe has some struggles going on with connectivity, or has any ongoing projects. If you work in these different industries — or if there’s nothing marked here and you have anything going on that you need help with — we’re very happy to sit down and let you know if there’s something we can do there. Shawn Tierney (Host): Yeah. For those who are listening, we see most of the big energy and consumer product companies on that slide, so I’m not gonna read them off, but it’s a lot of car manufacturers too. These are the household-name brands that everybody knows and loves. Connor Mason (Guest): So, to kind of wrap some things up here: we talked about all the different ways that we’ve helped solve things in the past, but I wanna highlight some of the unique ones that we’ve also gone and done case studies and success stories on. This one I actually got to work on within the last few years: a plastic packaging manufacturer was looking to track uptime and downtime across multiple different lines, and they had a new cloud solution that they were already evaluating and were really excited to get into play.
They had a lot of upside to getting things connected to it and starting to use it. Well, what they had was a lot of different PLCs — a lot of different brands, in different areas of operation — that they needed to connect to. So what they did was first get that into our TOP Server, similar to what we showed in our demo. We just need to get all the data into a centralized platform first and make that data accessible. From there, once they had all that information in a centralized area, they used the Cogent DataHub to help aggregate that information and transform it to be sent to the cloud through MQTT. So, very similar to the demo here — this is actually a real use case of that: getting information from PLCs, structuring it the way that cloud system needed it for MQTT, and streamlining that data connection to where it’s now just running in operation. They constantly have updates about where their lines are in operation, tracking their downtime and uptime, and they’re able to do some predictive analytics in that cloud solution based on their history. So this really enabled them to build from what they had. They went from doing a lot of manual tracking to an entirely automated system, with management able to see real views of what’s going on at the operation level. Another one I wanna talk about: we were able to do this success story with Ace Automation. They worked with a pharmaceutical company. Ace Automation is an SI; they were brought in and doing a lot of work with some old DDE connections and custom Excel macros, and were just having a hard time maintaining legacy systems that were a pain to deal with. They were working with these older history files from some old InTouch HMIs, and what they needed was something that was not just based on Excel and custom macros.
So, one product we didn’t get to talk about yet, but we also carry, is our LGH File Inspector. It’s able to take these files and put them out into a standardized format like CSV, and also do a lot of the automation around when these files should be queried: should they be queried for different lengths? Should they be output to different areas? Can I set these up in a scheduled task so it can be done automatically rather than someone having to sit down and do it manually in Excel? So they were able to recover over fifty hours of engineering time with the solution — no more late-night calls to troubleshoot an Excel macro that stopped working, or machines crashing because they were running legacy systems just to keep supporting the old DDE servers — saving them, you know, almost two hundred plus hours of productivity. Another example: we were able to work with a renewable energy customer that’s doing a lot of innovative things across North America. They had a very ambitious plan to double their footprint in the next two years. With that, they had to really look back at their assets and see where they currently stood, and how to make new standards to support growing into what they wanted to be. They had a lot of different data sources, all kind of siloed at their specific areas. Nothing was really connected commonly to a corporate-level area for historization, or control and security. So again, they were able to use our TOP Server as a standard connectivity platform and bring in the DataHub as an aggregation tool. Each of these sites would have a TOP Server individually collecting data from different devices, and then that was able to send into a single DataHub. So now their corporate level had an entire view of all the information from these different plants in one single application.
That then enabled them to connect their historian applications to that DataHub and have a complete view, and make visualizations of their entire operations. What this allowed them to do was grow without replacing everything. And that’s a big thing that we strive for — ripping out and replacing all your existing technology is not something you can do overnight. But how do we provide value and gain efficiency with what’s in place, providing newer technologies on top of that without disrupting the actual operation? So this was really, really successful. And at the end, I just wanna provide some other contacts and information so people can learn more. We have a blog that goes out every week on Thursdays — a lot of good technical content out there, a lot of recaps of the awesome things we get to do here, the success stories as well — and you can always find that at blog.softwaretoolbox.com. And again, our main website is softwaretoolbox.com. You can get product information, downloads, and reach out to anyone on our team to discuss what issues you have going on or any new projects — we’ll be happy to listen. Shawn Tierney (Host): Well, Connor, I wanna thank you very much for coming on the show and bringing us up to speed on not only Software Toolbox but also TOP Server, and for doing that demo with TOP Server and DataHub. Really appreciate that. And, like you just said, if anybody has any projects that you think these solutions may be able to solve, please give them a call. And if you’ve already done something with them, leave a comment — no matter where you’re watching or listening to this, let us know what you did. What did you use? Like me — I used OmniServer all those many years ago, and, of course, TOP Server as an OPC server.
But if you guys have already used Software Toolbox — and, of course, Symbol Factory, I use that all the time — let us know in the comments. It’s always great to hear from people out there. I know there are thousands of you listening every week, and I’d love to hear: are you using these products? Or if you have questions, put them in the comments and I’ll funnel them over to Connor. So with that, Connor, did you have anything else you wanted to cover before we close out today’s show? Connor Mason (Guest): I think that was it, Shawn. Thanks again for having us on. It was really fun. Shawn Tierney (Host): I hope you enjoyed that episode, and I wanna thank Connor for taking time out of his busy schedule to come on the show and bring us up to speed on Software Toolbox and their suite of products. I really appreciated that demo at the end too — if you’re watching, we actually got a look at their products and how they work — and I really appreciate them taking all of my questions. I also appreciate the fact that Software Toolbox sponsored this episode, meaning we were able to release it to you without any ads. So I really appreciate them. If you’re doing any business with Software Toolbox, please thank them for sponsoring this episode. And with that, I just wanna wish you all good health and happiness. Until next time, my friends, peace ✌️ If you enjoyed this content, please give it a Like, and consider sharing a link to it, as that is the best way for us to grow our audience, which in turn allows us to produce more content.
Let us go through some of the recent updates from the @atlassian ecosystem. #JiraRESTAPIChanges #BitbucketAppPassword #JiraAlignRoadmap #StateofProduct2026 https://www.ravisagar.in/videos/atlassian-updates-jira-rest-api-bitbucket-app-password-jira-align-roadmap-state-product
When converting an existing REST API to the Model Context Protocol, what should you consider? What anti-patterns should you avoid to keep an AI agent's context clean? This week on the show, Kyle Stratis returns to discuss his upcoming book, "AI Agents with MCP".
SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
Linux Namespaces Linux namespaces can be used to control networking features on a process-by-process basis. This is useful when trying to present a different network environment to a process being analysed. https://isc.sans.edu/diary/Sinkholing%20Suspicious%20Scripts%20or%20Executables%20on%20Linux/32144 Coyote in the Wild: First-Ever Malware That Abuses UI Automation Akamai identified malware that takes advantage of Microsoft's UI Automation framework to programmatically interact with the user's system and steal credentials. https://www.akamai.com/blog/security-research/active-exploitation-coyote-malware-first-ui-automation-abuse-in-the-wild Testing REST APIs with Autoswagger The tool Autoswagger can be used to automate the testing of REST APIs following the OpenAPI/Swagger standard. https://github.com/intruder-io/autoswagger/
“You can now connect everything—cloud workloads, offices, data centers, users—on one software-defined network, without the hardware.” — Elizabeth Aris, CEO, Astrotel In this Technology Reseller News podcast, Publisher Doug Green interviews David Klebanov, a leading architect at Alkira, and Elizabeth Aris, CEO of Astrotel, to explore how their partnership is delivering Network Infrastructure as a Service (NIaaS) to global enterprises—without the capital investment or complexity of traditional networking. Born in the Cloud, Built for the AI Era Alkira, founded in 2018, was born with a vision to deliver networking the same way cloud services are consumed: on demand, scalable, and usage-based. The Alkira platform runs entirely in the cloud—across AWS, Azure, and Google Cloud—allowing enterprises to deploy secure, high-performance global networks in hours, not months. “Our platform is the networking and security foundation for today's distributed, cloud-first enterprise,” said Klebanov. “And it's built to support AI workloads and multi-cloud architectures from day one.” Channel-Driven and Globally Deployed Astrotel, based in Sydney, is a service provider and Alkira partner helping customers across the Asia-Pacific region deploy network services in minutes using Alkira's cloud-native platform. Aris emphasized the advantage: no hardware, no CapEx, and up to 60% cost savings. “We're setting up entire global networks using Alkira's software-based cloud exchange points. It's a massive differentiator,” said Aris. “From remote Australia to Hong Kong to London, you just connect your sites to the nearest cloud region—there's no need to negotiate with local telcos or install new gear.” Enterprise-Grade Security and Simplicity The Alkira platform integrates firewall, DNS, DHCP, and other critical services directly into its virtualized architecture. 
IT teams can manage and secure the network through Alkira's centralized portal—or via REST APIs and Terraform for programmatic deployments. A single control plane manages the entire hybrid network—connecting public cloud workloads, private data centers, branch offices, remote users, and partner networks. This includes secure extranet support for M&A environments, allowing partial segmentation during network convergence. A Win for Carriers, Too Interestingly, carriers are also using Alkira to extend their reach. Aris explained that tier-one telcos are deploying Alkira to serve customers in markets where they lack infrastructure. “Instead of buying wholesale access from foreign providers, they spin up Alkira and deliver services in hours. It's flexible, fast, and operationally simple.” Where to Learn More Visit alkira.com to explore Alkira's cloud-first networking platform. Visit astrotel.io to learn how Astrotel deploys Alkira services across Asia-Pacific. Partner Opportunities: Alkira's 100% channel-focused model is open to partners in the U.S., Canada, and globally who are ready to offer NIaaS and cloud-native connectivity solutions.
API First — it's simple! Just sit down and build a great API: universal, elegant, wildly complex to implement, and absolutely useless to consume! Now, how about doing it without those last two parts? Thanks to everyone who listens to us. We look forward to your comments. Music from the episode: - https://artists.landr.com/056870627229- https://t.me/angry_programmer_screams Full playlist of the course "Kubernetes for DotNet developers": https://www.youtube.com/playlist?list=PLbxr_aGL4q3SrrmOzzdBBsdeQ0YVR3Fc7 Free open course "Rust for DotNet developers": https://www.youtube.com/playlist?list=PLbxr_aGL4q3S2iE00WFPNTzKAARURZW1Z Shownotes: 00:00:00 Intro 00:05:00 Who should develop the API spec? 00:18:30 Spec First vs Code First vs ... 00:22:30 When the specs don't match 00:24:30 Spec review 00:30:50 A beautiful API that nobody needs 00:33:30 Backend 4 Frontend 00:45:00 Balancing alignment 00:49:25 API as a product 00:57:50 Regression 01:03:20 GraphQL, OData 01:19:30 Testing with mocks 01:21:05 API versioning 01:31:30 How to promote specs; an API showcase 01:34:00 Open-Closed Principle for APIs 01:36:00 Security 01:39:15 API economics Links: - https://swagger.io/resources/articles/adopting-an-api-first-approach/ : A decent article - https://github.com/OpenAPITools/openapi-generator : The thing that will make everyone use the spec! Video: https://youtube.com/live/p_KIy8rTWRs Listen to all episodes: https://dotnetmore.mave.digital YouTube: https://www.youtube.com/playlist?list=PLbxr_aGL4q3R6kfpa7Q8biS11T56cNMf5 Twitch: https://www.twitch.tv/dotnetmore Discuss: - Telegram: https://t.me/dotnetmore_chat Follow the news: – Twitter: https://twitter.com/dotnetmore – Telegram channel: https://t.me/dotnetmore Copyright: https://creativecommons.org/licenses/by-sa/4.0/
Here comes SQL Server 2025! While at Build, Richard chatted with Bob Ward about releasing a preview version of SQL Server 2025. Bob discusses SQL Server 2025 as an AI-ready enterprise database with numerous capabilities specifically tailored to your organization's AI needs, including a new vector data type. This includes making REST API calls to Azure OpenAI, Ollama, or OpenAI. This is also the version of SQL Server designed to integrate with Microsoft Fabric through mirroring. There are many more features, even a new icon!

Links: SQL Server 2025 Announcement, JSON Data Type, Ollama

Recorded May 20, 2025
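The external REST calls Bob mentions boil down to the database issuing a plain JSON POST to an embedding endpoint and storing the result in the new vector type. A sketch of that call shape against a local Ollama instance; the endpoint and field names follow Ollama's documented /api/embeddings route, and the model name is an assumption:

```python
import json
import urllib.request

# Default local Ollama endpoint; the /api/embeddings route takes a model
# name and a prompt and returns {"embedding": [...]}.
OLLAMA_URL = "http://localhost:11434/api/embeddings"

def build_embedding_request(model: str, text: str) -> tuple:
    """Build the JSON POST an AI-ready database issues to an external
    embedding endpoint (the call shape only, not SQL Server's internals)."""
    payload = json.dumps({"model": model, "prompt": text}).encode("utf-8")
    return OLLAMA_URL, payload

def fetch_embedding(model: str, text: str) -> list:
    """Send the request; requires a running Ollama instance."""
    url, payload = build_embedding_request(model, text)
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]
```

The returned vector would then be inserted into a column of the new vector data type; the same shape works for Azure OpenAI or OpenAI endpoints with their respective URLs and auth headers.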
This is episode 296 recorded on June 6th, 2025, where John & Jason talk about the Microsoft Fabric May 2025 Feature Summary, including REST API updates for Fabric, updates to User Data Functions, Copilot in Power BI support for Fabric data agents, CosmosDB in Fabric, DataFlows Gen 2 CI/CD support now being GA, updates to Data Pipelines & Mirroring, and much more. For show notes please visit www.bifocal.show
Join Dan Vega and DaShaun Carter for the latest updates from the Spring Ecosystem. In this episode, Dan and DaShaun are joined by Spring team member Dave Syer to explore the official Spring gRPC project. At the end of this episode, you will understand how Spring gRPC provides a Spring-friendly API for developing high-performance gRPC applications, including server and client autoconfiguration, Spring Boot integration, and streamlined Protocol Buffer handling. The hosts will demonstrate how to build gRPC services with Spring's familiar dependency injection patterns and discuss the benefits of gRPC's HTTP/2-based communication over traditional REST APIs. You can participate in our live stream to ask questions or catch the replay on your preferred podcast platform.Show Notes: Dave Syer on GitHubDave Syer on BlueskySpring gRPC
"We're here to take the complexity out of unified communications — and turn it into simplicity." — Todd Remely, Unimax At Cisco Live 2025 in San Diego, Technology Reseller News publisher Doug Green caught up with Todd Remely of Unimax to explore how the company is streamlining unified communications (UC) management for enterprises and partners alike. With over 30 years in business, Unimax is a veteran in the telecom software space. Their tools help organizations manage Cisco, Microsoft Teams, Zoom, and Avaya UC systems more efficiently — and that value was on full display across two booths at Cisco Live: one in the Collaboration Village (highlighting Webex integration) and another on the main show floor. Three Ways Unimax Delivers Simplicity in Complex UC Environments: Second Nature A power-user interface that layers over Cisco and other major UC platforms, enabling administrators to perform complex MACDs (moves, adds, changes, deletes) and configuration tasks like provisioning, routing, and device pool management — all from one streamlined dashboard. Automation Platform Unimax enables full automation of provisioning and deprovisioning processes. Their platform integrates with Active Directory, ServiceNow, Remedy, HRIS systems, and any REST API-enabled tool, drastically reducing time and human error in user onboarding and offboarding. HelpOne A lightweight interface that empowers Tier 1 help desk agents to complete routine MACDs — such as password resets — without telecom expertise, freeing up UC teams to focus on higher-priority work. Remely noted strong engagement from MSPs and resellers, many of whom use Unimax's multi-tenant solutions to manage customer UC systems at scale. "We're solution-focused," he said. "And we love working with end users — because that's where the real problems are, and that's where we can help most." For more information or to request a demo, visit unimax.com or contact the team at tellmemore@unimax.com.
An airhacks.fm conversation with RichardBair (@RichardBair) about: the relaxed nature of JavaOne keynote presentations with James Gosling, the experience of delivering live demos versus pre-recorded content, impressions of the recent JavaOne conference with 70% new attendees, the Hashgraph team including former Sun/Oracle employees like Josh Marinacci and Jasper Potts, explanation of Hedera Hashgraph's consensus service as a message bus system, discussion of a practical enterprise use case for Hashgraph to create immutable release pipelines, storing release stages as messages in a topic, capturing build metadata including dependencies and test results on the blockchain, the ability to run your own mirror node to query data for free, the potential to create a release pipeline listener that triggers actions based on blockchain messages, the advantage of having an immutable audit trail for compliance purposes, the possibility of creating an enterprise gateway that handles payment and provides REST APIs, the difference between consensus nodes and mirror nodes, the benefits of using blockchain for software supply chain verification, the performance capabilities of the system for reading thousands of messages per second RichardBair on twitter: @RichardBair
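The release-pipeline idea from the conversation (each stage recorded as a message in a consensus topic, read back for free from a mirror node) can be sketched as follows. The event fields are illustrative; the URL is the public mainnet mirror node, whose topic-messages route returns each payload base64-encoded:

```python
import base64
import json

def encode_release_event(stage: str, commit: str, tests_passed: bool) -> str:
    """A hypothetical release-pipeline event as it might be submitted to a
    Hedera consensus topic; the payload is arbitrary bytes, so JSON works."""
    event = {"stage": stage, "commit": commit, "tests_passed": tests_passed}
    return base64.b64encode(json.dumps(event).encode("utf-8")).decode("ascii")

def decode_mirror_message(message_b64: str) -> dict:
    """Mirror-node responses carry each topic message base64-encoded in a
    'message' field; decode one back into the pipeline event."""
    return json.loads(base64.b64decode(message_b64))

def mirror_query_url(topic_id: str) -> str:
    # Public mainnet mirror node; reading from it is free, as noted
    # in the episode.
    return ("https://mainnet-public.mirrornode.hedera.com"
            f"/api/v1/topics/{topic_id}/messages")
```

A pipeline listener would poll (or page through) the mirror URL, decode each message, and trigger the next stage, giving the immutable audit trail discussed for compliance.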
In this episode, Lois Houston and Nikita Abraham, along with Nick Wagner, focus on GoldenGate's terminology and architectural evolution. Nick defines source and target systems, which are crucial for data replication, and then moves on to explain the data extraction and replication processes. He also talks about the new microservices architecture, which replaces the classic architecture, offering benefits like simplified management, enhanced security, and a user-friendly interface. Nick highlights how this architecture facilitates easy upgrades and provides a streamlined experience for administrators. Oracle GoldenGate 23ai: Fundamentals: https://mylearn.oracle.com/ou/course/oracle-goldengate-23ai-fundamentals/145884/237273 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. --------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Nikita: Welcome to the Oracle University Podcast! I'm Nikita Abraham, Team Lead of Editorial Services with Oracle University, and with me is Lois Houston: Director of Innovation Programs. Lois: Hi there! Thanks for joining us again as we make our way through Oracle GoldenGate 23ai. Last week, we discussed all the new features introduced in 23ai and today, we'll move on to the terminology, the different processes and what they do, and the architecture of the product at a high level. 00:56 Nikita: Back with us is Nick Wagner, Senior Director of Product Management for Oracle GoldenGate. Hi Nick! Let's get into some of the terminology. 
What do we actually call stuff in GoldenGate? Nick: Within GoldenGate, we have our source systems and our target systems. The source is where we're going to be capturing data from; the target is where we're going to be applying data into. And when we start talking about things like active-active or setting up GoldenGate for high availability, where your source can also be your target, it does become a little bit more complex. And so in some of those cases, we might refer to things as East and West, or America and Europe, or different versions of that. We also have a couple of different things within the product itself. We have what we call our Extract and our Replicat. The Extract is going to be the process that pulls the data out of the database, our capture technology. Our Replicat's going to be the one that applies the data into the target system, or you can also look at it as a push technology. We have what we call our Distribution Path. Our Distribution Path is going to be how we're sending the data across the network. A lot of times when customers run GoldenGate, they don't have the luxury of just having a single server of GoldenGate that can pull data from one database and push data into another one. They need to set up multiple hops of that data. And so in that case, we would use what we call a Distribution Path to send that data from one system to the next. We also have what we call a Target Initiated Path. It's kind of a subset of your Distribution Path, but it allows you to communicate from a less secure environment into a more secure environment. 02:33 Lois: Nick, what about parameter names? I've seen them in uppercase…title case…does that matter? Nick: GoldenGate has a lot of parameters. This is something you'll see all over the place within GoldenGate itself. These parameters are in your Extract and Replicat parameter files, and in your distribution path parameter files. Parameters for GoldenGate are case insensitive. 
Within your own environments, you can set it up to have lowercase, mixed case, whatever you want, but just be aware that they are case insensitive. GoldenGate doesn't care, it's just for readability. And then we also have something called trail files. Trail files are where GoldenGate stores all the data before we're able to apply it into that target system. Think about it as our queuing mechanism, and we're queuing everything outside the database so that we're not overloading those database environments. And that's some of the terminology for the product itself. We also have microservices within GoldenGate. 03:31 Nikita: And at the heart of everything is the Service Manager, right? Talk to us about what it is and what it does. Nick: The service manager is responsible for making sure that everything else is up and running. If you are familiar with GoldenGate classic architecture, this is kind of similar to a GoldenGate manager, where that process was there to make sure that processes were running and that certain error logs were getting written out. If a process went down, the manager would restart that process. The service manager is performing a lot of those same functions. Now attached to the service manager, we have our configuration service. This is new in GoldenGate 23ai. This configuration service is going to allow you to set up GoldenGate for highly available environments. So you can build HA into GoldenGate itself using the configuration service. 04:22 Lois: And what does this configuration service do? Nick: This configuration service essentially moves the checkpoint files that used to be on disk into a database so that everything can be stored inside of a database. Also attached to the service manager, we have the performance metric service. This is a service that is going to be gathering all the performance metrics of GoldenGate. 
So it's going to tell you how fast things are going, what the latencies are, how many bytes per second we're reading from the transaction logs or writing to our trail files, how quickly a distribution path is sending data across a network. If you want to know any of your lag information, you'll get it from the performance metrics server. We also have the receiver service and the distribution service. These two work hand in hand to establish network communication between two GoldenGate environments. So on what we call our source system, we have a distribution service that's going to send the data to our target system. On the target system, a receiver service is going to receive that data and then rewrite the trail files. We also have the administration service that's responsible for authentication and authorization of the users, as well as making sure that people have access to the right information. 05:33 Nikita: Ok. Moving on to deployment, how is GoldenGate actually deployed, Nick? Nick: GoldenGate is kinda nice. So the way that the product is installed is you install the GoldenGate environment and that's what we call our service manager deployment under a specific GoldenGate home. So the software binaries themselves get installed under a home, we'll say U01/OGG23AI. Now once I've installed GoldenGate once, that's my OGG home. I can now have any number of service managers and deployments tied to that same home. 06:11 Lois: Ok, let's work with an example to make this simpler. Let's say I've got a service manager that's going to be responsible for three different deployments: Accounting, Finance, and Sales. Nick: Each of these deployments is going to reside in its own directory. Each of these deployments is going to have its own set of microservices. And so this also means that each of these deployments can have their own set of users. So the people that access the GoldenGate accounting deployment can be different than the ones that access the sales deployment. 
This means with this distribution of roles that I can have somebody come in and administer the sales database, but they wouldn't have any information or any access to accounting or finance. And this is very important, it allows you to really pull that information apart and separate it. Each of these environments also has their own set of parameter files, Extract process, Replicat process, distribution services, and everything. So it's a very nice way of splitting things up, but all having them tied to the same GoldenGate home system. And this home is very important. So I can take a deployment, let's say my finance deployment, and if I want to move it to a new GoldenGate home and that GoldenGate home is a different version, like let's say that my original home is 23.4, my new GoldenGate home is 23.7, I simply stop that finance deployment, change its OGG home from 23.4 to 23.7, and restart the deployment. That deployment is automatically upgraded to the new environment and attached to the new system. So it makes upgrading very, very simple, very easy, very elegant. 07:53 Nikita: Ok. So, we've spoken about the services…some of the terminology. Let's get into the architecture next. Nick: So when we talk about the architecture for GoldenGate, we used to have two different architectures. We had a classic architecture and a microservices architecture. Classic architecture was something that's been around since the very beginning of GoldenGate in the late '90s. We announced that that architecture was deprecated in 19c. And in Oracle, deprecated means that feature is no longer going to be enhanced and it'll be patched selectively. And at some point in the future, it'll be entirely desupported. Well, GoldenGate 23ai is that future. And so in 23ai, the classic architecture is desupported, that means that it's no longer in the build at all. And so it's just microservices architecture. 
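The Extract and Replicat processes described above are each driven by a small parameter file. A minimal sketch of what such a pair might look like; the process names, credential aliases, trail name, and schema are hypothetical (and recall from earlier that the parameters themselves are case insensitive):

```
-- Extract: capture changes from the source database into a trail
-- (names here are illustrative, not from the episode)
EXTRACT exte
USERIDALIAS ggsource DOMAIN OracleGoldenGate
EXTTRAIL ea
TABLE hr.*;

-- Replicat: apply trail data into the target database
REPLICAT repe
USERIDALIAS ggtarget DOMAIN OracleGoldenGate
MAP hr.*, TARGET hr.*;
```

In the microservices architecture these files live under each deployment's etc directory, which is part of why two deployments under the same GoldenGate home stay fully independent.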
08:41 Lois: Is there a tool to assist with this migration? Nick: We do have a migration utility that will convert an old classic architecture into the new microservices architecture. But there is quite a bit of learning curve to the new microservices architecture. So it's important that we go through how it works and the changes. 09:04 Are you looking to optimize your implementation strategies and improve efficiency? We have a solution for you! Our new Oracle Fusion Cloud Applications Foundations training and certification program. You'll learn to leverage Oracle Modern Best Practice (OMBP) to re-imagine business processes using advanced technologies in Oracle Fusion Cloud Applications such as AI, mobile, analytics, and more. Visit mylearn.oracle.com to get started today. 09:37 Nikita: Welcome back! Nick, what are the benefits of this microservices architecture? Nick: It's got that simplified lifecycle for patching and upgrading. A lot of the GoldenGate patches that you get, especially these bundle patches, are complete installs as well. So you can go into My Oracle Support and download a complete install of a patch and that way, you don't have to use OPatch to apply them. The only time you'll be using OPatch is for one-off patches or smaller patches that need to be applied to your GoldenGate system. The microservices product has the same trusted Capture and Apply process that Classic did. There's almost no changes between the two except on how they communicate with their parent processes. And so the same logic that you use to pull data from Oracle or to apply data into Oracle is all the same. 10:25 Lois: And has the interface been upgraded as well? Nick: We've added a really nice, easy to use web interface for the microservices version of GoldenGate. Not only does this web interface work with all your standard browsers, but it's also mobile friendly too. So I can actually control and administer GoldenGate right through my mobile device. 
It also has new secure remote administration. This is something that the classic architecture was really missing. And so in the classic architecture, to use the command line interface, you had to log into the database server where GoldenGate was installed. Now, the command line interface, as well as the web interface and the REST API, all use remote administration and authentication. So that means that I can install the new command line interface or what we call admin client on my laptop locally and I can connect to any GoldenGate deployment as long as I have the username and password for that deployment. It's also more secure. GoldenGate microservices can also be deployed on premise or in OCI as a service and now also on these third-party clouds like Azure and Google Cloud. And it's also easier for developers to integrate in with the APIs themselves. Everything that GoldenGate does through the admin client as well as the web UI can all be traced. The REST API calls for GoldenGate are all fully published so you can get them right directly from the documentation, you can build your own web interface if you want to. So it makes it very easy. The REST APIs are also streamlined. With a single REST API call, I can do something like add an Extract process, create it, set up my parameter file, and set up the trail files all with a single API command. Whereas in the past, it would require multiple command line interface commands to do that same thing. So it's extremely elegant, very advanced. 12:16 Nikita: What does the microservices architecture look like? I know it's a bit complicated when we're not actually looking at a diagram of it, but just a high level, can you explain the different parts of it? Nick: It's pretty straightforward. But essentially what you've got on each system is a service manager. That service manager is then going to have a number of processes or services beneath it. 
It'll have the configuration service that stores the checkpoint information for GoldenGate. It'll have the administrative service for the authentication and users, the distribution service to send the data across a network, a receiver service to receive that information, performance metrics to get the performance statistics out of GoldenGate. And then of course, you also have your Extracts and Replicats that capture and apply technology. Each of those Extracts and Replicats will then connect to a database on the Extract side of things. That Extract is going to write to trail files. Those trail files are then going to be sent across the network where they're rebuilt on the target system and the Replicat's going to consume them and apply them into the target database. So the Replicat behaves almost like an end user. So it's taking that trail file data and simply converting it to DML operations, insert, update, delete, or a DDL operation in the case of Oracle, alter table, create table, et cetera, to go into that target database. 13:39 Lois: To look at a diagram of this architecture and learn about it in more detail, check out the Oracle GoldenGate 23ai Fundamentals course on mylearn.oracle.com. So, Nick, if I'm looking to deploy GoldenGate, what should I primarily keep in mind? Nick: So as you go to install GoldenGate and you look at a deployment, there's a couple of important environment variables that you want to make sure you're aware of. So one of the first ones is your OGG_Home. This environment variable is extremely important. This is the location of the GoldenGate software itself. And I want to stress how important it is to always use version numbers when you're setting up your GoldenGate home. When you go to install the software, if you're installing GoldenGate 23.5, use 23.5 within the home directory structure. If you're installing GoldenGate 23.7, use 23.7 inside that directory structure. 
14:33 Nikita: Right… that way I'll always know which versions are which, and it'll make it really easy to upgrade and move from one version to the next. Ok, got it. What else, Nick? Nick: There's a couple other important directories. You have your OGG_ETC_HOME. This is where things like the configuration files are going to reside, parameter files, all your certificates for security, including the wallets where we store the credentials for not only the database accounts, but also for the GoldenGate user accounts as well. We have our GoldenGate variable home directory or VAR home. This is where all the GoldenGate log files are residing. And these are the log files that allow you to see what's going on in GoldenGate for auditing purposes. Anytime anybody makes a change to GoldenGate, you're going to see information go into the log files on what was happening and how it was working and what they did, what time they did, what command they issued. Another big important feature about these log files is it also gives you error information and troubleshooting details. So if you ever need to find out what happened in GoldenGate, what went wrong, you would look at these log files to find out that information. And then you also have your OGG_DATA_HOME. This is where those trail files are going to go. Essentially, this is kind of the queuing or overflow for GoldenGate. There's a couple of other additional components. We've got the admin client. This is our command line utility. If you don't want to use a web browser or prefer a command line utility, you can use the admin client. The admin client is also fully scriptable. So if you wanted to write scripts that would go off and automate things in GoldenGate, you can do that. A lot of customers did that with GGSCI in the classic architecture. You can do the same thing now with the admin client. The other component is the microservices security authentication and authorization services. 
These handle communication security, especially making sure that any passwords or usernames and everything like that is all encrypted. And instead of using an actual username and password, everything through the product is going to be done through an alias. And then it also handles all the authorization authentication, permissions, user accountability, and roles within GoldenGate. 16:39 Lois: Anything else you'd like to talk about before we wrap up for today, Nick? Nick: I also wanted to take a minute to talk about the REST API. All the microservices provide REST APIs to administer them and all of these are fully documented. They can be used by any client that can make REST API calls. So if you wanted to use Python, cURL, a web browser, you can do that as well. They're all just HTTP or HTTPS calls, get, put, patch, the standard REST API standards. And then GoldenGate does provide our admin client as well as a WebUI that use these REST APIs under the covers if you ever wanted to get a more advanced look at how it works. 17:18 Nikita: Well, that's all the time we have for today. Thanks for joining us, Nick. Lois: Yes, thanks Nick. We look forward to having you back next week to talk with us about security strategies and data recovery. Nikita: And if you want to learn more about the topics we discussed today, head over to mylearn.oracle.com and take a look at the Oracle GoldenGate 23ai Fundamentals course. Until next time, this is Nikita Abraham… Lois: And Lois Houston, signing off! 17:43 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
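The REST administration Nick describes can be exercised from any HTTP client. Below is a Python sketch of the "single call creates the Extract, its parameter file, and its trail files" idea from earlier in the episode; the /services/v2/extracts/{name} path follows GoldenGate's published REST pattern, but treat the body fields, the credential alias, and the port as assumptions to verify against the REST API reference:

```python
import json
import urllib.request

def create_extract_request(base_url: str, name: str, trail: str, schema: str):
    """Build one REST call that creates an Extract along with its parameter
    file and trail target. Body fields are an assumption sketched from the
    documented pattern, not a verified payload."""
    body = {
        "config": [f"EXTRACT {name}",
                   "USERIDALIAS ggsource DOMAIN OracleGoldenGate",  # hypothetical alias
                   f"EXTTRAIL {trail}",
                   f"TABLE {schema}.*;"],
        "source": "tranlogs",
        "begin": "now",
        "targets": [{"name": trail}],
        "status": "stopped",
    }
    req = urllib.request.Request(
        f"{base_url}/services/v2/extracts/{name}",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return req  # pass to urllib.request.urlopen(req) once auth is configured
```

Authentication (the deployment's username and password mentioned above) would be supplied through an Authorization header or an opener configured for HTTP basic auth; the same payload shape works from cURL or a browser-based client.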
React Core team member Dan Abramov joins us to explore "JSX over the wire" and the evolving architecture of React Server Components. We dive into the shift from traditional REST APIs to screen-specific data shaping, the concept of Backend for Frontend (BFF), and why centering UI around the user experience—not server/client boundaries—matters more than ever. Links https://danabra.mov https://github.com/gaearon https://bsky.app/profile/danabra.mov https://overreacted.io https://www.youtube.com/@danabramov Resources JSX Over The Wire: https://overreacted.io/jsx-over-the-wire/ Impossible Components: https://overreacted.io/impossible-components/ What Does "use client" Do?: https://overreacted.io/what-does-use-client-do/ Our Journey With Caching: https://nextjs.org/blog/our-journey-with-caching https://parceljs.org https://nextjs.org/docs/app We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com (mailto:emily.kochanekketner@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understand where your users are struggling by trying it for free at [LogRocket.com]. Try LogRocket for free today.(https://logrocket.com/signup/?pdr) Special Guest: Dan Abramov.
In this episode, Lois Houston and Nikita Abraham continue their deep dive into Oracle GoldenGate 23ai, focusing on its evolution and the extensive features it offers. They are joined once again by Nick Wagner, who provides valuable insights into the product's journey. Nick talks about the various iterations of Oracle GoldenGate, highlighting the significant advancements from version 12c to the latest 23ai release. The discussion then shifts to the extensive new features in 23ai, including AI-related capabilities, UI enhancements, and database function integration. Oracle GoldenGate 23ai: Fundamentals: https://mylearn.oracle.com/ou/course/oracle-goldengate-23ai-fundamentals/145884/237273 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ----------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! Last week, we introduced Oracle GoldenGate and its capabilities, and also spoke about GoldenGate 23ai. In today's episode, we'll talk about the various iterations of Oracle GoldenGate since its inception. And we'll also take a look at some new features and the Oracle GoldenGate product family. 00:57 Lois: And we have Nick Wagner back with us. Nick is a Senior Director of Product Management for GoldenGate at Oracle. Hi Nick! 
I think the last time we had an Oracle University course was when Oracle GoldenGate 12c was out. I'm sure there's been a lot of advancements since then. Can you walk us through those? Nick: GoldenGate 12.3 introduced the microservices architecture. GoldenGate 18c introduced support for Oracle Autonomous Data Warehouse and Autonomous Transaction Processing Databases. In GoldenGate 19c, we added the ability to do cross endian remote capture for Oracle, making it easier to set up the GoldenGate OCI service to capture from environments like Solaris, SPARC, and HP-UX and replicate into the Cloud. Also, GoldenGate 19c introduced a simpler process for upgrades and installation of GoldenGate where we released something called a unified build. This means that when you install GoldenGate for a particular database, you don't need to worry about the database version when you install GoldenGate. Prior to this, you would have to install a version-specific and database-specific version of GoldenGate. So this really simplified that whole process. In GoldenGate 23ai, which is where we are now, this really is a huge release. 02:16 Nikita: Yeah, we covered some of the distributed AI features and high availability environments in our last episode. But can you give us an overview of everything that's in the 23ai release? I know there's a lot to get into but maybe you could highlight just the major ones? Nick: Within the AI and streaming environments, we've got interoperability for database vector types, heterogeneous capture and apply as well. Again, this is not just replication between Oracle-to-Oracle vector or Postgres to Postgres vector, it is heterogeneous just like the rest of GoldenGate. The entire UI has been redesigned and optimized for high speed. And so we have a lot of customers that have dozens and dozens of extracts and replicats and processes running and it was taking a long time for the UI to refresh those and to show what's going on within those systems. 
So the UI has been optimized to be able to handle those environments much better. We now have the ability to call database functions directly from COLMAP. And so when you do transformation with GoldenGate, we have about 50 or 60 built-in transformation routines for string conversion, arithmetic operation, date manipulation. But we never had the ability to directly call a database function. 03:28 Lois: And now we do? Nick: So now you can actually call that database function, database stored procedure, database package, return a value and that can be used for transformation within GoldenGate. We have integration with identity providers, being able to use token-based authentication and integrate in with things like Azure Active Directory and your other single sign-on for the GoldenGate product itself. Within Oracle 23ai, there's a number of new features. One of those cool features is something called lock-free reservation columns. So this allows you to have a row, a single row within a table and you can identify a column within that row that's like an inventory column. And you can have multiple different users and multiple different transactions all updating that column within that same exact row at that same time. So you no longer have row-level locking for these reservation columns. And it allows you to do things like shopping carts very easily. If I have 500 widgets to sell, I'm going to let any number of transactions come in and subtract from that inventory column. And then once it gets below a certain point, then I'll start enforcing that row-level locking. 04:43 Lois: That's really cool… Nick: The one key thing that I wanted to mention here is that because of the way that the lock-free reservations work, you can have multiple transactions open on the same row. This is only supported for Oracle to Oracle. You need to have that same lock-free reservation data type and availability on that target system if GoldenGate is going to replicate into it. 
05:05 Nikita: Are there any new features related to the diagnosability and observability of GoldenGate? Nick: We've improved the AWR reports in Oracle 23ai. There are now seven sections that are specific to Oracle GoldenGate to allow you to really go in and see exactly what the GoldenGate processes are doing and how they're behaving inside the database itself. And there's a Replication Performance Advisor package inside that database, and that's been integrated into the Web UI as well. So now you can actually get information out of the replication advisor package in Oracle directly from the UI without having to log into the database and try to run any database procedures to get it. We've also added the ability to support a per-PDB Extract. So in the past, when GoldenGate would run on a multitenant database in Oracle, all the redo data from any pluggable database gets sent to that one redo stream. And so you would have to configure GoldenGate at the container or root level and it would be able to access anything at any PDB. Now, there's better security and better performance by doing what we call per-PDB Extract. And this means that for a single pluggable database, I can have an extract that runs at that database level that's going to capture information just from that pluggable database. 06:22 Lois: And what about non-Oracle environments, Nick? Nick: We've also enhanced the non-Oracle environments as well. For example, in Postgres, we've added support for precise instantiation using Postgres snapshots. This eliminates the need to handle collisions when you're doing Postgres-to-Postgres replication and initial instantiation. On the GoldenGate for big data side, we've renamed that product more aptly to Distributed Applications and Analytics, which is really what it does, and we've added a whole bunch of new features here too: the ability to move data into Databricks, doing Google Pub/Sub delivery.
We now have support for XAG within the GoldenGate for distributed applications and analytics. What that means is that now you can follow all of our MAA best practices for GoldenGate for Oracle, but it also works for the DAA product as well, meaning that if it's running on one node of a cluster and that node fails, it'll restart itself on another node in the cluster. We've also added the ability to deliver data to Redis, Google BigQuery, stage and merge functionality for better performance into the BigQuery product. And then we've added a completely new feature, and this is something called streaming data and apps and we're calling it AsyncAPI and CloudEvent data streaming. It's a long name, but what that means is that we now have the ability to publish changes from a GoldenGate trail file out to end users. And so this allows through the Web UI or through the REST API, you can now come into GoldenGate and through the distributed applications and analytics product, actually set up a subscription to a GoldenGate trail file. And so this allows us to push data into messaging environments, or you can simply subscribe to changes and it doesn't have to be the whole trail file, it can just be a subset. You can specify exactly which tables and you can put filters on that. You can also set up your topologies as well. So, it's a really cool feature that we've added here. 08:26 Nikita: Ok, you've given us a lot of updates about what GoldenGate can support. But can we also get some specifics? Nick: So as far as what we have, on the Oracle Database side, there's a ton of different Oracle databases we support, including the Autonomous Databases and all the different flavors of them, your Oracle Database Appliance, your Base Database Service within OCI, your of course, Standard and Enterprise Edition, as well as all the different flavors of Exadata, are all supported with GoldenGate. This is all for capture and delivery. And this is all versions as well. 
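On the CloudEvents side of the streaming feature Nick just described, CloudEvents is a CNCF specification in which every event carries a small set of required attributes (specversion, id, source, type). A minimal sketch of wrapping a change record in a CloudEvents 1.0 envelope (the payload shape and the source/type values are invented for illustration; this is not GoldenGate's actual output format):

```python
import json
import uuid
from datetime import datetime, timezone

def to_cloudevent(change):
    """Wrap a (hypothetical) change record in a CloudEvents 1.0
    envelope. specversion, id, source, and type are the spec's
    required context attributes; time and datacontenttype are
    optional ones."""
    return {
        "specversion": "1.0",
        "id": str(uuid.uuid4()),
        "source": "/goldengate/trail/example",  # illustrative source URI
        "type": "com.example.row.changed",      # illustrative event type
        "time": datetime.now(timezone.utc).isoformat(),
        "datacontenttype": "application/json",
        "data": change,
    }

event = to_cloudevent({"table": "ORDERS", "op": "INSERT", "after": {"id": 1}})
print(json.dumps(event, indent=2))
```

A subscriber to such a stream filters on the `type` and `source` attributes rather than parsing the payload, which is what makes the envelope useful for routing into messaging environments.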
GoldenGate supports Oracle 23ai and below. We also have a ton of non-Oracle databases in different Cloud stores. On the non-Oracle side, we support everything from application-specific databases like FairCom DB, all the way to more advanced applications like Snowflake, which has a vast user base. We also support a lot of different cloud stores, and these again are non-Oracle, non-relational systems, or they can be relational databases. We also support a lot of big data platforms, and this is part of the Distributed Applications and Analytics side of things where you have the ability to replicate to different Apache environments, different Cloudera environments. We also support a number of open-source systems, including things like Apache Cassandra, MySQL Community Edition, a lot of different Postgres open source databases, along with MariaDB. And then we have a bunch of streaming event products, NoSQL data stores, and even Oracle applications that we support. So there's absolutely a ton of different environments that GoldenGate supports. There are additional Oracle databases that we support, and this includes the Oracle Metadata Service, as well as Oracle MySQL, including MySQL HeatWave. Oracle also has Oracle NoSQL, Spatial and Graph, and TimesTen products, which again are all supported by GoldenGate. 10:23 Lois: Wow, that's a lot of information! Nick: One of the things that we didn't really cover was the different SaaS applications, which we've got like Cerner, Fusion Cloud, Hospitality, Retail, MICROS, Oracle Transportation, JD Edwards, Siebel, and on and on and on. And again, because of the nature of GoldenGate, it's heterogeneous. Any source can talk to any target. And so it doesn't have to be, oh, I'm pulling from Oracle Fusion Cloud, that means I have to go to an Oracle Database on the target. Not necessarily. 10:51 Lois: So, there's really a massive amount of flexibility built into the system.
11:00 Unlock the power of AI Vector Search with our new course and certification. Get more accurate search results, handle complex datasets easily, and supercharge your data-driven decisions. From now through May 15, 2025, we are waiving the certification exam fee (valued at $245). Visit mylearn.oracle.com to enroll. 11:26 Nikita: Welcome back! Now that we've gone through the base product, what other features or products are in the GoldenGate family itself, Nick? Nick: So we have quite a few. We've kind of touched already on GoldenGate for Oracle databases and non-Oracle databases. We also have something called GoldenGate for Mainframe, which right now is covered under the GoldenGate for non-Oracle, but there is a licensing difference there. So that's something to be aware of. We also have the OCI GoldenGate product. We have announced that OCI GoldenGate will also be made available as part of the Oracle Database@Azure and Oracle Database@Google Cloud partnerships. And then you'll be able to use that vendor's cloud credits to actually pay for the OCI GoldenGate product. One of the cool things about this is it will have full feature parity with OCI GoldenGate running in OCI. So all the same features, all the same sources and targets, all the same topologies, being able to migrate data in and out of those clouds at will, just like you do with OCI GoldenGate today running in OCI. We have Oracle GoldenGate Free. This is a completely free edition of GoldenGate to use. It is limited in the number of platforms that it supports as far as sources and targets and the size of the database. 12:45 Lois: But it's a great way for developers to really experience GoldenGate without worrying about a license, right? What's next, Nick? Nick: We have GoldenGate for Distributed Applications and Analytics, which was formerly called GoldenGate for Big Data, and that allows us to do all the streaming. That's also where the GoldenGate AsyncAPI integration is done.
So in order to publish the GoldenGate trail files or allow people to subscribe to them, it would be covered under the Oracle GoldenGate Distributed Applications and Analytics license. We also have OCI GoldenGate Marketplace, which allows you to run essentially the on-premises version of GoldenGate but within OCI. So a little bit more flexibility there. It also has a hub architecture. So if you need that 99.99% availability, you can get it within the OCI Marketplace environment. We have GoldenGate for Oracle Enterprise Manager Cloud Control, which used to be called Oracle Enterprise Manager. And this allows you to use Enterprise Manager Cloud Control to get all the statistics and details about GoldenGate. So all the reporting information, all the analytics, all the statistics, how fast GoldenGate is replicating, what's the lag, what's the performance of each of the processes, how much data am I sending across a network. All that's available within the plug-in. We also have Oracle GoldenGate Veridata. This is a nice utility and tool that allows you to compare two databases, whether or not GoldenGate is running between them and actually tell you, hey, these two systems are out of sync. And if they are out of sync, it actually allows you to repair the data too. 14:25 Nikita: That's really valuable…. Nick: And it does this comparison without locking the source or the target tables. The other really cool thing about Veridata is it does this while there's data in flight. So let's say that the GoldenGate lag is 15 or 20 seconds and I want to compare this table that has 10 million rows in it. The Veridata product will go out, run its comparison once. Once that comparison is done the first time, it's then going to have a list of rows that are potentially out of sync. Well, some of those rows could have been moved over or could have been modified during that 10 to 15 second window. And so the next time you run Veridata, it's actually going to go through. 
It's going to check just those rows that were potentially out of sync to see if they're really out of sync or not. And if it comes back and says, hey, out of those potential rows, there's two out of sync, it'll actually produce a script that allows you to resynchronize those systems and repair them. So it's a very cool product. 15:19 Nikita: What about GoldenGate Stream Analytics? I know you mentioned it in the last episode, but in the context of this discussion, can you tell us a little more about it? Nick: This is the ability to essentially stream data from a GoldenGate trail file and do real-time analytics on it, as well as things like geofencing or time-series analysis. 15:40 Lois: Could you give us an example of this? Nick: If I'm tracking stock market information, it's not really that important how far up or down a stock goes. What's really important is how quickly that stock rose or how quickly that stock fell. And that's something that the GoldenGate Stream Analytics product can do. Another thing that it's very valuable for is the geofencing. I can have an application on my phone, and I can track where the user is based on that application, and all that information goes into a database. I can then use the geofencing tool to say that, hey, if one of those users on that app gets within a certain distance of one of my brick-and-mortar stores, I can actually send them a push notification to say, hey, come on in and you can order your favorite drink just by clicking Yes, and we'll have it ready for you. And so there's a lot of things that you can do there to help upsell your customers and to get more revenue just through GoldenGate itself. And then we also have a GoldenGate Migration Utility, which allows customers to migrate from the classic architecture into the microservices architecture. 16:44 Nikita: Thanks Nick for that comprehensive overview.
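The geofencing check Nick describes boils down to a distance test between the user's coordinates and a store's. A self-contained sketch using the haversine great-circle formula (the coordinates and the 0.5 km radius are made-up values for illustration):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in km."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(user, store, radius_km):
    """True when the user is within radius_km of the store."""
    return haversine_km(user[0], user[1], store[0], store[1]) <= radius_km

store = (40.7580, -73.9855)        # illustrative store location
nearby_user = (40.7600, -73.9840)  # a few hundred meters away
far_user = (41.8781, -87.6298)     # a different city entirely
print(inside_geofence(nearby_user, store, 0.5))  # True
print(inside_geofence(far_user, store, 0.5))     # False
```

In the streaming scenario, a test like this would run against each incoming location event, triggering the push notification only on a False-to-True transition so the user isn't notified repeatedly.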
Lois: In our next episode, we'll have Nick back with us to talk about commonly used terminology and the GoldenGate architecture. And if you want to learn more about what we discussed today, visit mylearn.oracle.com and take a look at the Oracle GoldenGate 23ai Fundamentals course. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 17:10 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Discover how Oracle APEX leverages OCI AI services to build smarter, more efficient applications. Hosts Lois Houston and Nikita Abraham interview APEX experts Chaitanya Koratamaddi, Apoorva Srinivas, and Toufiq Mohammed about how key services like OCI Vision, Oracle Digital Assistant, and Document Understanding integrate with Oracle APEX. Packed with real-world examples, this episode highlights all the ways you can enhance your APEX apps. Oracle APEX: Empowering Low Code Apps with AI: https://mylearn.oracle.com/ou/course/oracle-apex-empowering-low-code-apps-with-ai/146047/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. --------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast. I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! Last week, we looked at how generative AI powers Oracle APEX and in today's episode, we're going to focus on integrating APEX with OCI AI Services. Lois: That's right, Niki. We're going to look at how you can use Oracle AI services like OCI Vision, Oracle Digital Assistant, Document Understanding, OCI Generative AI, and more to enhance your APEX apps. 01:03 Nikita: And to help us with it all, we've got three amazing experts with us, Chaitanya Koratamaddi, Director of Product Management at Oracle, and senior product managers, Apoorva Srinivas and Toufiq Mohammed. 
In today's episode, we'll go through each Oracle AI service and look at how it interacts with APEX. Apoorva, let's start with you. Can you explain what the OCI Vision service is? Apoorva: Oracle Cloud Infrastructure Vision is a serverless, multi-tenant service accessible using the console or REST APIs. You can upload images to detect and classify objects in them. With prebuilt models available, developers can quickly build image recognition into their applications without machine learning expertise. OCI Vision service provides a fully managed model infrastructure. With complete integration with OCI Data Labeling, you can build custom models easily. OCI Vision service provides pretrained models: Image Classification, Object Detection, Face Detection, and Text Recognition. You can build custom models for Image Classification and Object Detection. 02:24 Lois: Ok. What about its use cases? How can OCI Vision make APEX apps more powerful? Apoorva: Using OCI Vision, you can make images and videos discoverable and searchable in your APEX app. You can use OCI Vision to detect and classify objects in the images. OCI Vision also highlights the objects using a red rectangular box. This comes in handy in use cases such as detecting vehicles that have violated the rules in traffic images. You can use OCI Vision to identify visual anomalies in your data. This is a very popular use case where you can detect anomalies in cancer X-ray images to detect cancer. These are some of the most popular use cases of using OCI Vision with your APEX app. But the possibilities are endless, and you can use OCI Vision for any of your image analysis. 03:29 Nikita: Let's shift gears to Oracle Digital Assistant. Chaitanya, can you tell us what it's all about? Chaitanya: Oracle Digital Assistant is a low-code conversational AI platform that allows businesses to build and deploy AI assistants.
It provides natural language understanding, automatic speech recognition, and text-to-speech capabilities to enable human-like interactions with customers and employees. Oracle Digital Assistant comes with prebuilt templates for you to get started. 04:00 Lois: What are its key features and benefits, Chaitanya? How does it enhance the user experience? Chaitanya: Oracle Digital Assistant provides conversational AI capabilities that include generative AI features, natural language understanding and ML, AI-powered voice, and analytics and insights. Integration with enterprise applications becomes easier with a unified conversational experience, prebuilt chatbots for Oracle Cloud applications, and chatbot architecture frameworks. Oracle Digital Assistant provides advanced conversational design tools, a conversational designer, a dialogue and domain trainer, and native multilingual support. Oracle Digital Assistant is open, scalable, and secure. It provides multi-channel support, automated bot-to-agent transfer, and integrated authentication profiles. 04:56 Nikita: And what about the architecture? What happens at the back end? Chaitanya: Developers assemble digital assistants from one or more skills. Skills can be based on prebuilt skills provided by Oracle or third parties, custom developed, or based on one of the many skill templates available. 05:16 Lois: Chaitanya, what exactly are “skills” within the Oracle Digital Assistant framework? Chaitanya: Skills are individual chatbots that are designed to interact with users and fulfill specific types of tasks. Each skill helps a user complete a task through a combination of text messages and simple UI elements like select lists. When a user request is submitted through a channel, the Digital Assistant routes the user's request to the most appropriate skill to satisfy the user's request. Skills can combine a multilingual NLP deep learning engine, a powerful dialog flow engine, and integration components to connect to back-end systems.
Skills provide a modular way to build your chatbot functionality. Now users connect with a chatbot through channels such as Facebook, Microsoft Teams, or in our case, Oracle APEX chatbot, which is embedded into an APEX application. 06:21 Nikita: That's fascinating. So, what are some use cases of Oracle Digital Assistant in APEX apps? Chaitanya: Digital assistants streamline approval processes by collecting information, routing requests, and providing status updates. Digital assistants offer instant access to information and documentation, answering common questions and guiding users. Digital assistants assist sales teams by automating tasks, responding to inquiries, and guiding prospects through the sales funnel. Digital assistants facilitate procurement by managing orders, tracking deliveries, and handling supplier communication. Digital assistants simplify expense approvals by collecting reports, validating receipts, and routing them for managerial approval. Digital assistants manage inventory by tracking stock levels, reordering supplies, and providing real-time inventory updates. Digital assistants have become a common UX feature in any enterprise application. 07:28 Want to learn how to design stunning, responsive enterprise applications directly from your browser with minimal coding? The new Oracle APEX Developer Professional learning path and certification enables you to leverage AI-assisted development, including generative AI and Database 23ai, to build secure, scalable web and mobile applications with advanced AI-powered features. From now through May 15, 2025, we're waiving the certification exam fee (valued at $245). So, what are you waiting for? Visit mylearn.oracle.com to get started today. 08:09 Nikita: Welcome back! Thanks for that, Chaitanya. Toufiq, let's talk about the OCI Document Understanding service. What is it? Toufiq: Using this service, you can upload documents to extract text, tables, and other key data. 
This means the service can automatically identify and extract relevant information from various types of documents, such as invoices, receipts, contracts, etc. The service is serverless and multitenant, which means you don't need to manage any servers or infrastructure. You can access this service using the console, REST APIs, SDK, or CLI, giving you multiple ways to integrate. 08:55 Nikita: What do we use for APEX apps? Toufiq: For APEX applications, we will be using REST APIs to integrate the service. Additionally, you can process individual files or batches of documents using the ProcessorJob API endpoint. This flexibility allows you to handle different volumes of documents efficiently, whether you need to process a single document or thousands at once. With these capabilities, the OCI Document Understanding service can significantly streamline your document processing tasks, saving time and reducing the potential for manual errors. 09:36 Lois: Ok. What are the different types of models available? How do they cater to various business needs? Toufiq: Let us start with pre-trained models. These are ready-to-use models that come right out of the box, offering a range of functionalities. The available models are: Optical Character Recognition (OCR), which enables the service to extract text from documents, allowing you to digitize scanned documents effortlessly and precisely extract text content; key-value extraction, useful in streamlining tasks like invoice processing; table extraction, which can intelligently extract tabular data from documents; document classification, which automatically categorizes documents based on their content; and OCR PDF, which enables seamless extraction of text from PDF files. Now, what if your business needs go beyond these pre-trained models? That's where custom models come into play. You have the flexibility to train and build your own models on top of these foundational pre-trained models.
Models available for training are key-value extraction and document classification. 10:50 Nikita: What does the architecture look like for OCI Document Understanding? Toufiq: You can ingest or supply the input file in two different ways. You can upload the file to an OCI Object Storage location, and in your request, you can point the Document Understanding service to pick the file from this Object Storage location. Alternatively, you can upload a file directly from your computer. Once the file is uploaded, the Document Understanding service can process the file and extract key information using the pre-trained models. You can also customize models to tailor the extraction to your data or use case. After processing the file, the Document Understanding service stores the results in JSON format in the Object Storage output bucket. Your Oracle APEX application can then read the JSON file from the Object Storage output location, parse the JSON, and store useful information in a local table or display it on the screen to the end user. 11:52 Lois: And what about use cases? How are various industries using this service? Toufiq: In financial services, you can utilize Document Understanding to extract data from financial statements, classify and categorize transactions, identify and extract payment details, and streamline tax document management. In manufacturing, you can perform text extraction from shipping labels and bill of lading documents, extract data from production reports, and identify and extract vendor details. In the healthcare industry, you can automatically process medical claims, extract patient information from forms, classify and categorize medical records, and identify and extract diagnostic codes. This is not an exhaustive list, but it provides insights into some industry-specific use cases for Document Understanding. 12:50 Nikita: Toufiq, let's switch to the big topic everyone's excited about—the OCI Generative AI Service. What exactly is it?
Toufiq: OCI Generative AI is a fully managed service that provides a set of state-of-the-art, customizable large language models that cover a wide range of use cases. It provides enterprise-grade generative AI with data governance and security, which means only you have access to your data and custom-trained models. OCI Generative AI provides pre-trained, out-of-the-box LLMs for text generation, summarization, and text embedding. OCI Generative AI also provides the necessary tools and infrastructure to define models with your own business knowledge. 13:37 Lois: Generally speaking, how is OCI Generative AI useful? Toufiq: It supports various large language models. New models available from Meta and Cohere include Llama2, developed by Meta, and Cohere's Command model, their flagship text generation model. Additionally, Cohere offers the Summarize model, which provides high-quality summaries, accurately capturing essential information from documents, and the Embed model, which converts text to a vector embedding representation. OCI Generative AI also offers dedicated AI clusters, enabling you to host foundational models on private GPUs. It integrates LangChain, an open-source framework for developing new interfaces for generative AI applications powered by language models. Moreover, OCI Generative AI facilitates generative AI operations, providing content moderation controls, zero-downtime endpoint model swaps, and endpoint deactivation and activation capabilities. For each model endpoint, OCI Generative AI captures a series of analytics, including call statistics, tokens processed, and error counts. 14:58 Nikita: What about the architecture? How does it handle user input? Toufiq: Users can input natural language, input/output examples, and instructions. The LLM analyzes the text and can generate, summarize, transform, extract information, or classify text according to the user's request.
The response is sent back to the user in the specified format, which can include raw text or formatting like bullets and numbering, etc. 15:30 Lois: Can you share some practical use cases for generative AI in APEX apps? Toufiq: Some of the OCI Generative AI use cases for your Oracle APEX apps include text summarization. Generative AI can quickly summarize lengthy documents such as articles, transcripts, doctor's notes, and internal documents. Businesses can utilize generative AI to draft marketing copy, emails, blog posts, and product descriptions efficiently. Generative AI-powered chatbots are capable of brainstorming, problem solving, and answering questions. With generative AI, content can be rewritten in different styles or languages. This is particularly useful for localization efforts and catering to a diverse audience. Generative AI can classify intent in customer chat logs, support tickets, and more. This helps businesses understand customer needs better and provide tailored responses and solutions. By searching call transcripts and internal knowledge sources, generative AI enables businesses to efficiently answer user queries. This enhances information retrieval and decision-making processes. 16:47 Lois: Before we let you go, can you explain what Select AI is? How is it different from the other AI services? Toufiq: Select AI is a feature of Autonomous Database. This is where Select AI differs from the other AI services. Be it OCI Vision, Document Understanding, or OCI Generative AI, these are all fully managed standalone services on Oracle Cloud, accessible via REST APIs. Whereas Select AI is a feature available in Autonomous Database. That means to use Select AI, you need Autonomous Database. 17:26 Nikita: And what can developers do with Select AI? Toufiq: Traditionally, SQL is the language used to query the data in the database. With Select AI, you can talk to the database and get insights from the data in the database using human language.
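That talk-to-your-database idea is, at its core, natural language in, SQL out. A toy sketch of just the prompt-assembly step such a system might perform before calling an LLM (the profile fields, table names, and prompt wording are all invented for illustration; no LLM call is made here):

```python
def build_nl2sql_prompt(profile, question):
    """Assemble a (hypothetical) NL2SQL prompt from profile metadata
    and a user question. A real system would send something like
    this to an LLM service, which returns the generated SQL."""
    tables = ", ".join(profile["tables"])
    return (
        "You translate questions into SQL.\n"
        f"Available tables: {tables}\n"
        f"Question: {question}\n"
        "SQL:"
    )

# Illustrative profile: which tables to expose and which LLM to use.
profile = {"tables": ["EMPLOYEES", "DEPARTMENTS"], "provider": "example-llm"}
prompt = build_nl2sql_prompt(profile, "How many employees are in each department?")
print(prompt)
```

The key design point is that the prompt carries only metadata (table names and the question), not the data itself; the generated SQL is then executed locally against the database.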
At its most basic, what Select AI does is generate SQL queries using natural language, like an NL2SQL capability. 17:52 Nikita: How does it actually do that? Toufiq: When a user asks a question, the first step Select AI takes is to look into the AI profile, which you, as a developer, define. The AI profile holds crucial information, such as table names, the LLM provider, and the credentials needed to authenticate with the LLM service. Next, Select AI constructs a prompt. This prompt includes information from the AI profile and the user's question. Essentially, it's a packet of information containing everything the LLM service needs to generate SQL. The next step is generating SQL using the LLM. The prompt prepared by Select AI is sent to the available LLM services via REST. Which LLM to use is configured in the AI profile. The supported providers are OpenAI, Cohere, Azure OpenAI, and OCI Generative AI. Once the SQL is generated by the LLM service, it is returned to the application. The app can then handle the SQL query in various ways, such as displaying the SQL results in a report format or as charts, etc. 19:05 Lois: This has been an incredible discussion! Thank you, Chaitanya, Apoorva, and Toufiq, for walking us through all of these amazing AI tools. If you're ready to dive deeper, visit mylearn.oracle.com and search for the Oracle APEX: Empowering Low Code Apps with AI course. You'll find step-by-step guides and demos for everything we covered today. Nikita: Until next week, this is Nikita Abraham… Lois: And Lois Houston signing off! 19:31 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
We are happy to announce that there will be a dedicated MCP track at the 2025 AI Engineer World's Fair, taking place Jun 3rd to 5th in San Francisco, where the MCP core team and major contributors and builders will be meeting. Join us and apply to speak or sponsor! When we first wrote Why MCP Won, we had no idea how quickly it was about to win. In the past 4 weeks, OpenAI and now Google have announced MCP support, effectively confirming our prediction that MCP was the presumptive winner of the agent standard wars. MCP has now overtaken OpenAPI, the incumbent option and most direct alternative, in GitHub stars (3 months ahead of the conservative trendline). We have explored the state of MCP at AIE (now the first ever >100k-view workshop). And since then, we've added a 7th reason why MCP won: this team acts very quickly on feedback, with the 2025-03-26 spec update adding support for stateless/resumable/streamable HTTP transports and comprehensive authz capabilities based on OAuth 2.1. This bodes very well for the future of the community and project. For protocol and history nerds, we also asked David and Justin to tell the origin story of MCP, which we leave to the reader to enjoy (you can also skim the transcripts, or the changelogs of a certain favored IDE).
It's incredible the impact that individual engineers solving their own problems can have on an entire industry.

Full video episode: Like and subscribe on YouTube!

Show Links
* David
* Justin
* MCP
* Why MCP Won

Timestamps
* 00:00 Introduction and Guest Welcome
* 00:37 What is MCP?
* 02:00 The Origin Story of MCP
* 05:18 Development Challenges and Solutions
* 08:06 Technical Details and Inspirations
* 29:45 MCP vs Open API
* 32:48 Building MCP Servers
* 40:39 Exploring Model Independence in LLMs
* 41:36 Building Richer Systems with MCP
* 43:13 Understanding Agents in MCP
* 45:45 Nesting and Tool Confusion in MCP
* 49:11 Client Control and Tool Invocation
* 52:08 Authorization and Trust in MCP Servers
* 01:01:34 Future Roadmap and Stateless Servers
* 01:10:07 Open Source Governance and Community Involvement
* 01:18:12 Wishlist and Closing Remarks

Transcript

Alessio [00:00:02]: Hey, everyone. Welcome back to Latent Space. This is Alessio, partner and CTO at Decibel, and I'm joined by my co-host Swyx, founder of Smol AI.

swyx [00:00:10]: Hey, morning. And today we have a remote recording, I guess, with David and Justin from Anthropic over in London. Welcome. Hey, good morning. You guys have created a storm of hype because of MCP, and I'm really glad to have you on. Thanks for making the time. What is MCP? Let's start with a crisp definition from the horse's mouth, and then we'll go into the origin story. But let's start off right off the bat. What is MCP?

Justin/David [00:00:43]: Yeah, sure. So Model Context Protocol, or MCP for short, is basically something we've designed to help AI applications extend themselves or integrate with an ecosystem of plugins, basically. The terminology is a bit different. We use this client-server terminology, and we can talk about why that is and where that came from. But at the end of the day, it really is that.
It's like extending and enhancing the functionality of AI applications.

swyx [00:01:05]: David, would you add anything?

Justin/David [00:01:07]: Yeah, I think that's actually a good description. I think there are a lot of different ways for how people are trying to explain it. But at the core, I think what Justin said, like extending AI applications, is really what this is about. And the interesting bit here that I want to highlight is that it's AI applications, and not models themselves, that this is focused on. That's a common misconception that we can talk about a bit later. But yeah. Another version that we've used and gotten to like is: MCP is kind of like the USB-C port of AI applications, in that it's meant to be this universal connector to a whole ecosystem of things.

swyx [00:01:44]: Yeah. Specifically, an interesting feature is, like you said, the client and server. And it's sort of two-way, right? Like in the same way that a USB-C is two-way, which could be super interesting. Yeah, let's go into a little bit of the origin story. There are many people who've tried to make standards. There are many people who've tried to build open source. Also, my sense is that Anthropic is going hard after developers in a way that other labs are not. And so I'm also curious if there was any external influence, or was it just you two guys in a room somewhere riffing?

Justin/David [00:02:18]: It is actually mostly like us two guys in a room riffing. So this is not part of a big strategy. You know, if you roll back time a little bit and go into like July 2024: I started at Anthropic like three months earlier, or two months earlier. And I was mostly working on internal developer tooling, which is what I've been doing for like years and years before.
And as part of that, I think there was an effort of, like, how do I empower more employees at Anthropic to, you know, integrate really deeply with the models we have? Because we've seen how good it is, how amazing it will become even in the future. And of course, you know, just dogfood your own model as much as you can. And as part of that, from my developer tooling background, I quickly got frustrated by the fact that, on the one hand, I have Claude Desktop, which is this amazing tool with artifacts, which I really enjoyed. But it was very limited to exactly that feature set, and there was no way to extend it. And on the other hand, I work in IDEs, which could act on the file system and a bunch of other things, but then they don't have artifacts or something like that. And so what I constantly did was just copy things back and forth between Claude Desktop and the IDE, and that quickly got me, honestly, just very frustrated. And part of that frustration was like, how do I go and fix this? What do we need? And back to this developer focus that I have, I really thought about, well, I know how to build all these integrations, but what do I need to do to let these applications let me do this? And so very quickly you see that this is clearly an M times N problem: you have multiple applications and multiple integrations you want to build, and what better way to fix this than a protocol? And at the same time, I was actually working on an LSP-related thing internally that didn't go anywhere. But you put these things together in someone's brain and let them sit for a few weeks, and out of that comes the idea of, let's build some protocol. And so back to this little room: it was literally just me going to a room with Justin and going, I think we should build something like this.
Uh, this is a good idea. And Justin, lucky for me, really took an interest in the idea, um, and took it from there to build something together with me. That's really the inception story: it's us two, from then on, just going and building it over the course of like a month and a half. Building the protocol, building the first integrations. Justin did a lot of the heavy lifting of the first integrations in Claude Desktop. I did a lot of the first, um, proof of concept of what this could look like in an IDE. And we could talk about some of the tidbits you could find way before the official release, if you were looking at the right repositories at the right time. But there you go, that's the rough story.

Alessio [00:05:12]: Uh, what was the timeline? I know November 25th was the official announcement date. When did you guys start working on it?

Justin/David [00:05:19]: Justin, when did we start working on that? I think it was around July. Yeah, as soon as David pitched this initial idea, I got excited pretty quickly and we started working on it, I think almost immediately after that conversation. And then it was a couple, maybe a few months of, uh, building the really unrewarding bits, if we're being honest, because for establishing something like this, a communication protocol that has clients and servers and SDKs everywhere, there's just a lot of laying the groundwork that you have to do. So that was a pretty slow couple of months. But then afterward, once you get some things talking over that wire, it really starts to get exciting and you can start building all sorts of crazy things. And I think this really came to a head.
And I don't remember exactly when it was, maybe approximately a month before release, there was an internal hackathon where some folks really got excited about MCP and started building all sorts of crazy applications. I think the coolest one of which was an MCP server that could control a 3D printer or something. And so suddenly people were feeling this power of Claude connecting to the outside world in a really tangible way. And that really added some, uh, some juice to us and to the release.

Alessio [00:06:32]: Yeah. And we'll go into the technical details, but I just want to wrap up here. You mentioned you could have seen some things coming if you were looking in the right places. We always want to know what are the places to get alpha, how to find MCP early.

Justin/David [00:06:44]: I'm a big Zed user. I like the Zed editor. The first MCP implementation in an IDE was in Zed. It was written by me, and it was there like a month and a half before the official release, just because we needed to do it in the open, because it's an open source project. Um, and so it was named slightly differently, because we were not set on the name yet, but it was there.

swyx [00:07:05]: I'm happy to go on a tangent a little bit. Anthropic also had some preview of a model with Zed, right? Some kind of fast editing, uh, model. Um, uh, I confess, you know, I'm a Cursor/Windsurf user. Haven't tried Zed. Uh, what's your, you know, unsolicited two-second pitch for Zed? That's a good question.

Justin/David [00:07:28]: It really depends what you value in editors. For me, I wouldn't even say I love Zed more than others. I like them all as complementary, in one way or another. Like, I do use Windsurf. I do use Zed.
Um, but I think my main pitch for Zed is: a low-latency, super smooth editing experience with a decent enough AI integration.

swyx [00:07:51]: I mean, maybe, you know, I think that's all it is for a lot of people. Uh, I think a lot of people are obviously very tied to the VS Code paradigm and the extensions that come along with it. Okay. So I wanted to go back a little bit, you know, on some of the things that you mentioned, Justin, uh, which was building MCP on paper. Obviously we only see the end result. It just seems inspired by LSP, and I think both of you have acknowledged that. So how much is there to build? And when you say build, is it a lot of code or a lot of design? Cause I felt like it's a lot of design, right? Like you're picking JSON-RPC. Like, how much did you base off of LSP, and what were the sort of hard parts?

Justin/David [00:08:29]: Yeah, absolutely. I mean, uh, we definitely did take heavy inspiration from LSP. David had much more prior experience with it than I did, working on developer tools. You know, I've mostly worked on products or sort of infrastructural things; LSP was new to me. But from design principles, it really makes a ton of sense, because it does solve this M times N problem that David referred to, where, you know, in the world before LSP, you had all these different IDEs and editors, and then all these different languages that each wants to support, or that their users want them to support. And then everyone's just building one-off integrations. And so, like, you use Vim and you might have really great support for, honestly, I don't know, C or something, and then you switch over to JetBrains and you have the Java support, but then you don't get to use the great JetBrains Java support in Vim, and you don't get to use the great C support in JetBrains, or something like that.
So LSP largely, I think, solved this problem by creating this common language that they could all speak, so that, you know, you can have some people focus on really robust language server implementations, and then the IDE developers can really focus on their side. And they both benefit. So that was our key takeaway for MCP: that same principle and that same problem, in the space of AI applications and extensions to AI applications. But in terms of concrete particulars, I mean, we did take JSON-RPC and we took this idea of bidirectionality, but I think we quickly took it down a different route after that. I guess there is one other principle from LSP that we try to stick to today, which is this focus on how features manifest, more than the semantics of things, if that makes sense. David refers to it as being presentation focused: basically thinking about, and offering, different primitives, not necessarily because the semantics of them are very different, but because you want them to show up in the application differently. That was a key insight about how LSP was developed, and that's also something we try to apply to MCP. But like I said, from there, yeah, we spent a lot of time, really a lot of time, and we could go into this more separately, thinking about each of the primitives that we want to offer in MCP, and why they should be different, like, why we want to have all these different concepts. That was a significant amount of work. That was the design work, as you allude to. But then also, already out of the gate, we had three different languages that we wanted to at least support to some degree. That was TypeScript, Python, and then, for the Zed integration, Rust. So there was some SDK building work in those languages, a mixture of clients and servers to build out, to try to create this internal ecosystem that we could start playing with.
And then, yeah, I guess just trying to make everything robust, like this whole concept that we have for local MCP, where you launch subprocesses and stuff, and making that robust took some time as well. Yeah, maybe adding to that: I think the LSP influence goes even a little bit further. We did take actually quite a look at criticisms of LSP, things that LSP didn't do right and things that people felt they would love to have different, and really took that to heart to see, you know, what are some of the things that we wish we would do better. We took a, you know, lengthy look at their very unique approach to JSON-RPC, I may say, and then we decided that this is not what we do. And so there are these differences, but it's clearly very, very inspired. Because I think when you're trying to build something like MCP, you kind of want to pick the areas you want to innovate in, but you kind of want to be boring about the other parts, and pattern matching LSP allows you to be boring in a lot of the core pieces that you want to be boring in. Like, the choice of JSON-RPC is very non-controversial to us, because it just doesn't matter at all what the actual bytes on the wire are that you're speaking. It makes no difference to us. The innovation is in the primitives you choose and these types of things, and so there's way more focus on that that we wanted to do. So having some prior art is good there, basically.

swyx [00:12:26]: It does. I wanted to double click. I mean, there's so many things you can go into. Obviously, I am passionate about protocol design. I wanted to show you guys this. I mean, I think you guys know, but, you know, you already referred to the M times N problem.
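As a concrete illustration of that deliberately boring wire layer: MCP messages are JSON-RPC 2.0 requests and responses serialized over a transport. The sketch below uses the real spec method name `tools/list`, but the tool payload is an invented example, not a real server's output.

```python
import json

# Illustrative JSON-RPC 2.0 request/response pair, the "boring" wire layer
# MCP builds on. "tools/list" is an actual MCP method name; the tool entry
# in the response is a made-up example.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}
response = {
    "jsonrpc": "2.0",
    "id": 1,  # a response echoes the id of the request it answers
    "result": {"tools": [{"name": "get_weather",
                          "description": "Illustrative example tool"}]},
}
# What actually travels over the transport is just serialized JSON:
wire = json.dumps(request)
assert json.loads(wire)["method"] == "tools/list"
assert response["id"] == request["id"]
```

Because the framing is this plain, the interesting design decisions all live one level up, in which primitives exist and how they manifest.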
And I can just share my screen here: anyone working in developer tools has faced this exact issue, where you see the God box, basically. Like, the fundamental problem and solution of all infrastructure engineering is you have M things going to N things, and then you put in the God box and they'll all be better, right? So here is one for Uber, one for GraphQL, one for Temporal, where I used to work, and this one is from React. And I was just kind of curious, like, you know, did you solve M times N problems at Facebook? Like, it sounds like, David, you did that for a living, right? Like, this is just M times N for a living.

Justin/David [00:13:16]: Yeah, yeah. To some degree, for sure, I did. God, what's a good example of this? But like, I did a bunch of this kind of work on source control systems and these types of things. And so there were a bunch of these types of problems. And so you just shove them into something that everyone can read from and everyone can write to, and you build your God box somewhere, and it works. But yeah, in developer tooling, you're absolutely right. In developer tooling, this is everywhere, right?

swyx [00:13:47]: And, you know, it shows up everywhere. And what was interesting is I think everyone who makes the God box then has the same set of problems, which is you now have composability, auth, and remote versus local. So, you know, there's this very common set of problems. So I kind of want to take a meta lesson on how to do the God box, but, you know, we can talk about the sort of development stuff later. I wanted to double click on, again, the presentation point that Justin mentioned, of how features manifest, and how you said some things are the same, but you just want to reify some concepts so they show up differently. And I had that sense, you know, when I was looking at the MCP docs. I'm like, why do these two things need to be different from each other?
I think a lot of people treat tool calling as the solution to everything, right? And sometimes you can actually sort of view different kinds of tool calls as different things. And sometimes they're resources. Sometimes they're actually taking actions. Sometimes they're something else that I don't really know yet. But I just want to see, like, what are some things that you sort of mentally group as adjacent concepts, and why were they important for you to emphasize?

Justin/David [00:14:58]: Yeah, I can chat about this a bit. I think fundamentally, every sort of primitive that we thought through, we thought from the perspective of the application developer first: if I'm building an application, whether it is an IDE or, you know, Claude Desktop or some agent interface or whatever the case may be, what are the different things that I would want to receive from an integration? And I think once you take that lens, it becomes quite clear that tool calling is necessary, but very insufficient. Like, there are many other things you would want to do besides just get tools and plug them into the model, and you want to have some way of differentiating what those different things are. So the core primitives that we started MCP with (we've since added a couple more) are really: tools, which we've already talked about; it's like adding tools directly to the model, or function calling as it's sometimes called. Resources, which are basically bits of data or context that you might want to add to the model context. And this is the first primitive where we decided this could be application controlled: maybe you want a model to automatically search through and find relevant resources and bring them into context.
But maybe you also want that to be an explicit UI affordance in the application, where the user can, like, you know, pick through a dropdown or a paperclip menu or whatever, and find specific things and tag them in. And then that becomes part of their message to the LLM. Those are both use cases for resources. And then the third one is prompts, which are deliberately meant to be user-initiated or user-substituted text or messages. So the analogy here would be, if you're in an editor, a slash command or something like that, or an at-mention, you know, autocompletion type thing, where it's like, I have this kind of macro, effectively, that I want to drop in and use. And we have sort of expressed opinions through MCP about the different ways that these things could manifest, but ultimately it is for application developers to decide. Okay, you get these different concepts, expressed differently. Um, and it's very useful as an application developer, because you can decide the appropriate experience for each, and actually this can be a point of differentiation too. Like, we were also thinking, from the application developer perspective: application developers don't want to be commoditized. They don't want the application to end up the same as every other AI application. So what are the unique things that they could do to create the best user experience, even while connecting up to this big open ecosystem of integrations? And I think to add to that, there are two aspects that I want to mention. The first one is that, interestingly enough, while nowadays tool calling is obviously probably like 95% plus of the integrations (and I wish there would be, you know, more clients doing resources, doing prompts), the very first implementation, in Zed, is actually a prompt implementation. It doesn't deal with tools.
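The three primitives described above can be pictured as plain data a server might list. The field names below follow the general shape the MCP spec uses (`name`, `inputSchema`, `uri`, and so on), but every entry is an invented example, and the real schemas have more fields:

```python
# Hypothetical listings an MCP server might return for each primitive.
# tools: model-initiated; resources: application/user-selected, URI-addressed;
# prompts: user-initiated templates (e.g. slash commands).
tools = [{
    "name": "run_query",
    "description": "Run a read-only SQL query",
    "inputSchema": {"type": "object",
                    "properties": {"sql": {"type": "string"}}},
}]
resources = [{"uri": "db://schema/users", "name": "users table schema",
              "mimeType": "text/plain"}]
prompts = [{"name": "summarize-crash",
            "description": "Pull a crash backtrace into context"}]

# Each primitive answers a different question for the application:
# what can the model do, what context exists, what can the user invoke.
assert all("uri" in r for r in resources)
```

The split matters precisely because of who initiates each one, which is what the conversation turns to next.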
And we found this actually quite useful, because what it allows you to do is, for example, build an MCP server that takes, like, a backtrace. So it's not necessarily a tool; it literally just pulls the raw data from Sentry or any other online platform that tracks your crashes, and lets you pull this into the context window beforehand. And so it's quite nice that way, in that it's a user-driven interaction: the user decides when to pull this in and doesn't have to wait for the model to do it. And so it's a great way to craft the prompt, in a way. And I think, similarly, I wish more MCP servers today would bring prompts as examples of how to even use the tools. Yeah. At the same time, the resources bits are quite interesting as well, and I wish we would see more usage there, because it's very easy to envision, but yet nobody has really implemented it: a system where an MCP server exposes, you know, a set of documents that you have, your database, whatever you might want, as a set of resources, and then a client application would build a full RAG index around this, right? This is definitely an application use case we had in mind, as to why these are exposed in such a way that they're not model driven: because you might want to have way more resource content than is realistically usable in a context window. And so I think, you know, I wish applications, and I hope applications will, in the next few months, use these primitives way better, because I think there are way more rich experiences to be created that way. Yeah, completely agree with that. And I would have added that if he hadn't.

Alessio [00:19:30]: I think that's a great point. And everybody just, you know, has a hammer and wants to do tool calling on everything. I think a lot of people do tool calling to do a database query. They don't use resources for it.
What are, I guess, maybe the pros and cons, or when should people use a tool versus a resource, especially when it comes to things that do have an API interface? Like, for a database, you can do a tool that does a SQL query. When should you do that, versus a resource with the data instead? Yeah.

Justin/David [00:20:00]: The way we separate these is: tools are always meant to be initiated by the model. It's sort of at the model's discretion that it will find the right tool and apply it. So if that's the interaction you want as a server developer, where it's like, okay, suddenly I've given the LLM the ability to run SQL queries, for example, that makes sense as a tool. But resources are more flexible, basically. And, to be completely honest, the story here is practically a bit complicated today, because many clients don't support resources yet. But in an ideal world, where all these concepts are fully realized and there's full ecosystem support, you would do resources for things like the schemas of your database tables and stuff like that, as a way to either allow the user to say, okay, now, you know, Claude, I want to talk to you about this database table. Here it is. Let's have this conversation. Or maybe the particular AI application that you're using, which could be something agentic, like Claude Code, is able to just agentically look up resources and find the right schema of the database table you're talking about. Both those interactions are possible. But I think, anytime you have this sort of, you want to list a bunch of entities and then read any of them, that makes sense to model as resources. Resources are also uniquely identified by a URI, always.
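For the database case, that split can be sketched as two plain functions: the tool is something the model chooses to invoke, while resources are stable, URI-addressed items the application or user selects. All names and URIs here are hypothetical, and the result shape is a simplified version of the spec's text content blocks:

```python
# Hypothetical database MCP server, reduced to two functions.
def list_resources():
    # application/user-controlled: enumerable, URI-addressed context
    return [{"uri": "db://schema/users", "name": "users table schema"},
            {"uri": "db://schema/orders", "name": "orders table schema"}]

def call_tool(name, arguments):
    # model-controlled: the LLM decides when to invoke this
    if name != "run_sql":
        raise ValueError("unknown tool")
    # a real server would execute the query; here we echo it as text content
    return {"content": [{"type": "text",
                         "text": f"rows for: {arguments['sql']}"}]}

uris = [r["uri"] for r in list_resources()]
assert "db://schema/orders" in uris
```

The "list a bunch of entities, then read any of them" pattern maps onto `list_resources` here, which is exactly why stable URIs matter.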
And so you can also think of them as, like, you know, sort of general purpose transformers, even. Like, if you want to support an interaction where a user just drops a URI in, and then you automatically figure out how to interpret that, you could use MCP servers to do that interpretation. One of the interesting side notes here, back to the Zed example of resources: Zed has a prompt library that people can interact with. And for a while we just exposed, as resources, a set of default prompts that we want everyone to have as part of that prompt library, so that you boot up Zed and Zed will just populate the prompt library from an MCP server, which was quite a cool interaction. And that was, again, very specific: both sides needed to agree upon the URI format and the underlying data format. But that was a nice and kind of neat little application of resources. There's also, going back to that perspective of, as an application developer, what are the things that I would want? We also applied this thinking to what existing features of applications could conceivably be kind of factored out into MCP servers, if you were to take that approach today. And so basically any IDE where you have an attachment menu, that, I think, naturally models as resources. It's just, you know, those implementations already existed.

swyx [00:22:49]: Yeah, I think the immediate, like, you know, when you introduced it for Claude Desktop and I saw the at sign there, I was like, oh, yeah, that's what Cursor has, but this is for everyone else. And, you know, I think that is a really good design target, because it's something that already exists and people can map onto pretty neatly. I was actually featuring this chart from Mahesh's workshop that presumably you guys agreed on.
I think this is so useful that it should be on the front page of the docs. It probably should be.

Justin/David [00:23:19]: I think that's a good suggestion. Do you want to do a PR for this? I love it.

swyx [00:23:21]: Yeah, I'll do a PR. I've done a PR for just Mahesh's workshop in general, just because I'm like, you know. I know.

SPEAKER_03 [00:23:28]: I approve. Yeah.

swyx [00:23:30]: Thank you. Yeah. I mean, like, but, you know, I think for me as a developer relations person, I always insist on having a map for people: here are all the main things you have to understand, we'll spend the next two hours going through this. So one image that kind of covers all this, I think, is pretty helpful. And I like your emphasis on prompts. I would say it's interesting that, in the early days of ChatGPT and Claude, people often came up with (oh, you can't really follow my screen, can you?), in the early days of ChatGPT and all that, a lot of people started, like, you know, GitHubs for prompts, like prompt manager libraries, and those never really took off. And I think something like this is helpful and important. I would say I've also seen the prompt file from Humanloop, I think, as another way to standardize how people share prompts. But yeah, I agree that there should be more innovation here. And I think probably people want some dynamism, which I think you allow for. And I like that you have multi-step; this was the main thing that got me like, these guys really get it. You know, I think you've maybe published some research that says, actually, sometimes to get the model working the right way, you have to do multi-step prompting, or jailbreaking, to behave the way that you want. And so I think prompts are not just single conversations. They're sometimes chains of conversations.
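That multi-step quality can be sketched as a prompt that expands into a short conversation rather than a single string. The shape below loosely follows the spec's prompt-result structure (a list of role/content messages), but the prompt name and its contents are invented:

```python
# A hypothetical prompt that expands to a chain of messages, not one string.
def get_prompt(name):
    assert name == "review-code"  # the only prompt this sketch knows
    return {"messages": [
        {"role": "user",
         "content": {"type": "text", "text": "Review the following diff."}},
        {"role": "assistant",
         "content": {"type": "text", "text": "Sure. Please paste the diff."}},
        {"role": "user",
         "content": {"type": "text", "text": "<diff inserted by the client>"}},
    ]}

msgs = get_prompt("review-code")["messages"]
assert [m["role"] for m in msgs] == ["user", "assistant", "user"]
```

A client can splice this whole exchange into the context, which is what distinguishes a prompt primitive from a static template string.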
Yeah.

Alessio [00:25:05]: Another question that I had when I was looking at some server implementations: the server builders kind of decide what data eventually gets returned, especially for tool calls. For example, the Google Maps one, right? If you just look through it, they decide what attributes get returned, and the user cannot override that if there's a missing one. That has always been my gripe with SDKs in general, when people build API wrapper SDKs and then they miss one parameter that maybe is new, and then I cannot use it. How do you guys think about that? Like, how much should the user be able to intervene in that, versus just letting the server designer do all the work?

Justin/David [00:25:41]: I think we probably bear responsibility for the Google Maps one, because I think that's one of the reference servers we've released. I mean, in general, for tool results in particular, we've actually made the deliberate decision, at least thus far, for tool results to be not structured JSON data matching a schema, really, but text or images, or basically messages that you would pass into the LLM directly. And so I guess the corollary of that is, you really should just return the whole jumble of data and trust the LLM to sort through it and, like, you know, sift out the information it cares about, because that's exactly what LLMs excel at. And we really try to think about how to use LLMs to their full potential, and not maybe over-specify and then end up with something that doesn't scale as LLMs themselves get better and better. So really, yeah, I suppose what should be happening in this example server, which, again: pull requests welcome. It would be great.
It's like, if all these result types were literally just passed through from the API that it's calling, then new fields from the API would be passed through automatically.

Alessio [00:27:19]: It's hard to make design decisions on where to draw the line.

Justin/David [00:27:22]: I'll maybe throw AI under the bus a little bit here and just say that Claude wrote a lot of these example servers. No surprise at all. But I do think, sorry, I do think there's an interesting point in this, in that people at the moment mostly still just apply their normal software engineering API approaches to this. And I think we still need a little bit more relearning of how to build something for LLMs and trust them, particularly, you know, as they are getting significantly better year over year. Right. And I think, you know, two years ago, maybe that approach would have been very valid. But nowadays, just throwing data at that thing that is really good at dealing with data is a good approach to this problem. And I think it's just unlearning like 20, 30, 40 years of software engineering practices that goes a little bit into this, to some degree. If I could add to that real quickly: just one framing as well for MCP is thinking in terms of how crazily fast AI is advancing. I mean, it's exciting. It's also scary. Us thinking that the biggest bottleneck to, you know, the next wave of capabilities for models might actually be their ability to interact with the outside world, to read data from outside data sources or take stateful actions. Working at Anthropic, we absolutely care about doing that safely and with the right control and alignment measures in place and everything. But also, as AI gets better, people will want that. That'll be key to becoming productive with AI: being able to connect models up to all those things.
So MCP is also sort of a bet on the future and where this is all going, and how important that will be.

Alessio [00:29:05]: Yeah. Yeah, I would say any API attribute that starts with formatted_ should kind of be gone, and we should just get the raw data from all of them. Because why, you know, why are you formatting? For me, the model is definitely smart enough to format an address. So I think that should go to the end user.

swyx [00:29:23]: Yeah. I think Alessio is about to move on to server implementation. But I think we're still talking about MCP design and goals and intentions, and I think we've indirectly identified some problems that MCP is really trying to address. But I wanted to give you the spot to directly take on MCP versus OpenAPI, because obviously this is a top question. I wanted to sort of recap everything we just talked about and give people a nice little segment that they can point to and say, this is the definitive answer on MCP versus OpenAPI.

Justin/David [00:29:56]: Yeah, I think fundamentally, I mean, OpenAPI specifications are a great tool, and I've used them a lot in developing APIs and consumers of APIs. But fundamentally, we think that they're just too granular for what you want to do with LLMs. They don't express higher-level, AI-specific concepts, like this whole mental model that we've talked about, with the primitives of MCP and thinking from the perspective of the application developer. You don't get any of that when you encode this information into an OpenAPI specification. So we believe that models will benefit more from the purpose-built or purpose-designed tools, resources, prompts, and the other primitives than from just, kind of, here's our REST API, go wild. I do think there's another aspect. I'm not an OpenAPI expert, so everything I say might not be perfectly accurate.
But I do think, and we can talk about this a bit more later, there's been a deliberate design decision to make the protocol somewhat stateful, because we really believe that AI applications and AI interactions will become inherently more stateful, and that the current need for statelessness is more a temporary point in time. To some degree it will always exist, but statefulness will become increasingly popular, particularly when you think about additional modalities beyond pure text-based interactions with models: video, audio, whatever other modalities are out there already. So having something a bit more stateful is just inherently useful in this interaction pattern. I also think OpenAPI and MCP are actually more complementary than people want to make them out to be. People look for these A-versus-B matchups and want all the developers of these things to go in a room and fist fight it out, but that's rarely what's going on. They're very complementary, and they each have their space where they're very strong. Just use the best tool for the job. If you want a rich interaction between an AI application and a server, MCP is probably the right choice. If you want an API spec somewhere that a model can easily read and interpret, and that's what works for you, then OpenAPI is the way to go. One more thing to add here is that people in the community built bridges between the two very early on.
So if what you have is an OpenAPI specification and no one's building a custom MCP server for it, there are already translators that will take it and re-expose it as MCP. And you can do the other direction too. Awesome.Alessio [00:32:43]: Yeah. I think there's another side of MCP that people don't talk as much about because it doesn't go viral, which is building the servers. Everybody does the tweets about "connect Claude Desktop to X MCP server, it's amazing." How would you suggest people start building servers? The spec allows so many things that it's almost: how do you draw the line between being very descriptive as a server developer versus, going back to our discussion before, just returning the data and letting the model manipulate it later? Do you have any suggestions for people?Justin/David [00:33:16]: I have a few suggestions. One of the best things about MCP, and something we got right very early, is that it's very easy to build something simple that might not be amazing but is good enough, because models are very good, and to get it going within half an hour. So pick the language you love the most, pick the SDK for it if there is one, and go build a tool for the thing that matters to you personally, the thing you want to see the model interact with. Build the server, throw the tool in, don't even worry too much about the description just yet; write a little description as you think of it, hook it up over the stdio transport to an application you like, and see it do things.
And I think that's part of the magic, the empowerment for developers: you get so quickly to something the model does with a thing you care about. That really gets you going and gets you into this flow of: okay, I see this thing can do cool things. Now I can expand on it and really think about which tools I want, which raw resources and prompts I want. Then: what do my evals look like for how I want this to go? How do I optimize my prompts against those evals? There's infinite depth you can go into. But just start as simple as possible: build a server in half an hour, in the language of your choice, and let the model interact with the things that matter to you. That's where the fun is. A lot of what makes MCP great is that it adds fun to the development process, because you get models doing things quickly. I'm also quite partial, again, to using AI to help me do the coding. Even during the initial development process, we realized it was quite easy to take all the SDK code, again, pick the language you care about and its SDK, literally drop the whole SDK into an LLM's context window, and say: now that you know MCP, build me a server that does this, this, and this. The results, I think, are astounding. It might not be perfect around every single corner, and you can refine it over time, but it's a great way to one-shot something that basically does what you want and then iterate from there.
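As a rough illustration of how small that starting point can be, here's a hypothetical, SDK-free Python sketch of the JSON-RPC dispatch at the heart of a toy tool server. The tool name, schema shape, and handler are made up for illustration; the real SDKs and the full protocol also handle initialization, capabilities, and transports, so treat this as the shape of the idea rather than the spec:

```python
# One toy tool: the kind of thing you might expose to a model.
def get_weather(city: str) -> str:
    # Hypothetical stub; a real server would call an actual weather API here.
    return f"Sunny in {city}"

TOOLS = {
    "get_weather": {
        "description": "Get today's weather for a city.",
        "inputSchema": {"type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"]},
        "fn": get_weather,
    },
}

def handle_message(msg: dict) -> dict:
    """Dispatch a JSON-RPC request the way an MCP-style server would."""
    if msg["method"] == "tools/list":
        tools = [{"name": n, "description": t["description"],
                  "inputSchema": t["inputSchema"]} for n, t in TOOLS.items()]
        return {"jsonrpc": "2.0", "id": msg["id"], "result": {"tools": tools}}
    if msg["method"] == "tools/call":
        tool = TOOLS[msg["params"]["name"]]
        text = tool["fn"](**msg["params"]["arguments"])
        return {"jsonrpc": "2.0", "id": msg["id"],
                "result": {"content": [{"type": "text", "text": text}]}}
    return {"jsonrpc": "2.0", "id": msg.get("id"),
            "error": {"code": -32601, "message": "method not found"}}
```

Wrapped in a stdio loop (read a line of JSON, write a line of JSON), this is roughly the "half an hour" starting point being described.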
And like David said, there has been a big emphasis from the beginning on making servers as easy and simple to build as possible, which certainly helps with LLMs doing it too. We often find that getting started is maybe 100 or 200 lines of code. It's really quite easy. Yeah. And if you don't have an SDK, give the subset of the spec that you care about to the model, along with an existing SDK as a reference, and have it build you an SDK. That usually works for that subset. Building a full SDK is a different story, but getting a model to tool call in Haskell, or whatever language you like, is probably pretty straightforward.swyx [00:36:32]: Yeah. Sorry.Alessio [00:36:34]: No, I was gonna say, I co-hosted a hackathon at AGI House on personal agents, and one of the personal agents somebody built was an MCP server builder agent: you basically give it the URL of an API spec, and it builds an MCP server for you. Do you see that as how it is today, that most servers are just a layer on top of an existing API without too much opinion? And do you think that's how it's going to be going forward, just AI-generated servers exposing APIs that already exist? Or are we going to see net-new MCP experiences that you couldn't do before?Justin/David [00:37:10]: I think, go for it. I think both. There will always be value in "I have my data over here, and I want to use some connector to bring it into my application over here." That use case will certainly remain. This kind of goes back to the point that a lot of things today maybe default to tool use when some of the other primitives would be more appropriate over time. And so it could still be that connector.
It could still just be that sort of adapter layer, but one that actually adapts onto different primitives, which is one way to add more value. But I also think there's plenty of opportunity for MCP servers that do interesting things in and of themselves and aren't just adapters. Some of the earliest examples of this were the memory MCP server, which gives the LLM the ability to remember things across conversations, or, built by a close coworker... I shouldn't have said that, not a close coworker. Someone. Yeah. Someone built the sequential thinking MCP server, which gives a model the ability to really think step by step and get better at its reasoning. It isn't integrating with anything external; it's just providing a way of thinking for a model.Justin/David [00:38:27]: Either way though, I think AI authorship of the servers is totally possible. I've had a lot of success just prompting: hey, I want to build an MCP server that does this thing. Even if the thing isn't adapting some other API but is doing something completely original, it's usually able to figure that out too. To add to that, I do think a good part of what MCP servers will be is these API wrappers, to some degree, and that's going to be valid, because it works and it gets you very, very far. But I think we're just very early in exploring what you can do. As client support for certain primitives gets better, and we can talk about sampling, which is my favorite topic and greatest frustration at the same time, you can very easily see way, way richer experiences. We have built some internally as prototypes.
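The sequential-thinking idea really is tiny in code terms. Here's a hypothetical sketch of its core, with made-up names and structure, not the actual server's: a tool that integrates with nothing external and just gives the model a numbered scratchpad so its reasoning stays in context.

```python
class ThinkingScratchpad:
    """Toy step-by-step reasoning tool: the model calls add_thought
    repeatedly, and the tool echoes back the numbered history so the
    chain of reasoning stays visible in context."""

    def __init__(self):
        self.thoughts = []

    def add_thought(self, thought: str, done: bool = False) -> str:
        # Append the new step and return the full numbered history.
        self.thoughts.append(thought)
        history = "\n".join(f"{i}. {t}" for i, t in enumerate(self.thoughts, 1))
        status = "done" if done else "continue thinking"
        return f"{history}\n[{status}]"
```

Exposed as a single MCP tool, this is the whole trick: the server does no work itself, it just structures how the model thinks.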
And I think you see some of that in the community already. There are things like a "summarize my favorite subreddits for the morning" MCP server that nobody has built yet, but it's very easy to envision, and the protocol can totally do it. These are slightly richer experiences. As people move from "I'm just in this new world where I can hook up the things that matter to me to the LLM" toward wanting a real workflow, a richer experience they really want exposed to the model, you will see these things pop up. But again, there's a bit of a chicken-and-egg problem at the moment between what clients support and what server authors want to do. Yeah.Alessio [00:40:10]: That's kind of my next question, on composability. How do you see that? Do you have plans for it? What's the "import of MCPs," so to speak, into another MCP? If I want to build the subreddit one, there's probably going to be the Reddit API MCP and then the summarization MCP. How do I make a super-MCP?Justin/David [00:40:33]: Yeah, this is an interesting topic, and there are two aspects to it. The first aspect is: how can I build something agentic that requires an LLM call in some form or fashion, like for summarization, while staying model independent? That's where part of this bidirectionality comes in, this richer experience where we have a facility for the server to ask the client, which owns the LLM interaction (think of Cursor, which runs the loop with the LLM for you), for a completion.
Basically, the server has the client summarize something and return it back. So which model does the summarizing depends on which one you have selected in Cursor, not on what the author ships. The author doesn't bring an SDK and doesn't need an API key; it's completely model independent. That's one aspect. The second aspect to building richer systems with MCP is that you can easily envision an MCP server that serves something to Cursor or Windsurf or Claude Desktop, but at the same time is also an MCP client and can itself use MCP servers to create a rich experience. Now you have a recursive property, which we tried quite carefully to retain in the design principles. You see it all over the place, in authorization and other aspects of the spec: we retain this recursive pattern. So now you can think of a little bundle that is both a server and a client, add these in chains, and basically build graphs, DAGs, out of MCP servers that richly interact with each other. An agentic MCP server can also use the whole ecosystem of MCP servers available to it. I think that's a really cool thing you can do. People have experimented with this, and hopefully you'll see more of it, particularly when you think about auto-selecting and auto-installing; there's a bunch of things you can do that make a really fun experience. Practically, there are some niceties we still need to add to the SDKs to make this really simple and easy to execute on: this kind of recursive MCP server that is also a client, or multiplexing the behaviors of multiple MCP servers into one host, as we call it. These are things we definitely want to add.
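The recursive client-plus-server pattern being described can be sketched as a plain object graph, where one "server" satisfies a tool call by acting as a client of others. This is purely illustrative (the real protocol does this over JSON-RPC with capability negotiation), and all names here are invented:

```python
class ToyServer:
    """Minimal stand-in for an MCP server: it just exposes named tools."""
    def __init__(self, name, tools):
        self.name = name
        self.tools = tools  # tool name -> callable

    def list_tools(self):
        return sorted(self.tools)

    def call_tool(self, tool, **args):
        return self.tools[tool](**args)


class ProxyServer(ToyServer):
    """A server that is simultaneously a client: it aggregates downstream
    servers' tools and forwards each call to the server that owns it, so a
    host can chain these into graphs (DAGs) of servers."""
    def __init__(self, name, downstream):
        merged = {}
        for server in downstream:
            for tool in server.list_tools():
                # Bind server and tool name per iteration to forward correctly.
                merged[tool] = (lambda s, t: lambda **a: s.call_tool(t, **a))(server, tool)
        super().__init__(name, merged)


# Hypothetical "morning briefing" composition of two toy servers.
reddit = ToyServer("reddit", {"top_posts": lambda sub: [f"best of r/{sub}"]})
summarizer = ToyServer("summarizer", {"summarize": lambda text: text[:20]})
morning = ProxyServer("morning-briefing", [reddit, summarizer])
```

Because `ProxyServer` is itself a `ToyServer`, proxies can wrap other proxies, which is the recursive property the spec tries to preserve.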
We haven't been able to yet, but I think that would go some way toward showcasing these things that we know are already possible but not taken up that much yet. Okay.swyx [00:43:08]: This is very exciting, and I'm sure a lot of people will get ideas and inspiration from it. Is an MCP server that is also a client an agent?Justin/David [00:43:19]: What's an agent? There are a lot of definitions of agents.swyx [00:43:22]: Because in some ways you're requesting something, and it's going off and doing stuff that you don't necessarily see. There's a layer of abstraction between you and the ultimate raw source of the data. You could dispute that. I just don't know if you have a hot take on agents.Justin/David [00:43:35]: I do think you can build an agent that way. For me, you need to define the difference between an MCP server-plus-client that is just a proxy versus an agent. I think there's a difference, and that difference might be, for example, using a sampling loop to create a richer experience, to have a model call tools from inside that MCP server through these clients. Then you have an actual agent. I do think it's very simple to build agents that way. Yeah. There are maybe a few paths here. It definitely feels like there's some relationship between MCP and agents. One possible version is that MCP is a great way to represent agents; maybe there are some features or specific things missing that would improve the ergonomics, and we should make those part of MCP. That's one possibility. Another is that MCP makes sense as a foundational communication layer for agents to compose with other agents, or something like that. Or there could be other possibilities entirely.
Maybe MCP should specialize and narrowly focus on the AI application side and not as much on the agent side. I think it's a very live question, and there are trade-offs in every direction. Going back to the analogy of the God box: one thing we have to be very careful about in designing a protocol and curating or shepherding an ecosystem is trying to do too much. You don't want a protocol that tries to do absolutely everything under the sun, because then it'll be bad at everything too. So the key question, which is still unresolved, is to what degree agents naturally fit into this existing model and paradigm, and to what degree that's basically orthogonal and should be something else.swyx [00:45:17]: I think once you enable two-way communication, and once you enable client and server to be the same thing with delegation of work to another MCP server, it's definitely more agentic than not. But I appreciate that you keep simplicity in mind and aren't trying to solve every problem under the sun. Cool. I'm happy to move on. I'm going to double-click on a couple of things I marked, because they coincide with things we wanted to ask you anyway. The first one is simple: how many MCP tools can one implementation support? This is the wide-versus-deep question, and it's directly relevant to the nesting of MCPs we just talked about. In April 2024, when Claude launched one of its first contexts, the first million-token context example, they said you can support 250 tools. And in a lot of cases, you can't do that. To me, that's wide in the sense that you don't have tools that call tools; you just have the model and a flat hierarchy of tools. But then, obviously, you have tool confusion.
Tool confusion is going to happen when the tools are adjacent: you call the wrong tool, you get bad results, right? Do you have a recommendation for a maximum number of MCP servers enabled at any given time?Justin/David [00:46:32]: To be honest, I don't think there's one answer to this, because to some extent it depends on the model you're using, and to some extent on how well the tools are named and described for the model, to avoid confusion. The dream is certainly that you just furnish all this information to the LLM and it makes sense of everything. This goes back to the future we envision with MCP: all this information is brought to the model, and it decides what to do with it. But today the practicalities might mean that in your client application, the AI application, you do some filtering over the tool set. Maybe you run a faster, smaller LLM to filter down to what's most relevant and pass only those tools to the bigger model. Or you could use an MCP server that is a proxy to other MCP servers and does some filtering at that level. I think hundreds, as you referenced, is still a fairly safe bet, at least for Claude; I can't speak for the other models. Over time we should just expect this to get better, so we're wary of constraining anything and preventing that sort of long-term improvement. And obviously it highly depends on the overlap of the descriptions, right? If you have very separate servers that do very separate things, and the tools have clear, unique names and clear, well-written descriptions, your mileage will be higher than if you have a GitLab and a GitHub server in your context at the same time.
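The "smaller model filters first" idea can be pictured without any model at all: a toy relevance ranker that scores each tool description against the user's request and passes only the top few to the big model. Word overlap is a crude stand-in for what a fast LLM or embedding search would actually do, and the catalog below is invented for illustration:

```python
def top_tools(request: str, tools: dict, k: int = 3) -> list:
    """Rank tools by word overlap between the request and each description,
    returning the k best candidates to put in the model's context."""
    words = set(request.lower().split())

    def score(item):
        _name, description = item
        return len(words & set(description.lower().split()))

    ranked = sorted(tools.items(), key=score, reverse=True)
    return [name for name, _ in ranked[:k]]


# Hypothetical catalog aggregated from several MCP servers.
catalog = {
    "create_issue": "create a new issue in a github repository",
    "merge_pr": "merge a pull request in a github repository",
    "send_email": "send an email to a recipient",
    "query_db": "run a sql query against the analytics database",
}
```

The point is that the big model only ever sees a few plausible tools, which is exactly the kind of pre-filtering a client application or proxy server could do.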
Then the overlap is quite significant, because they look very similar to the model, and confusion becomes easier. There are different considerations too, depending on the AI application. If you're trying to build something very agentic, maybe you're trying to minimize the number of times you need to go back to the user with a question, or minimize the amount of configurability in your interface. But if you're building other applications, an IDE or a chat application or whatever, I think it's totally reasonable to have affordances that let the user say: at this moment I want this feature set, and at this other moment I want this different feature set, rather than treating the full list as always on all the time. Yeah.swyx [00:48:42]: That's where I think the concepts of resources and tools start to blend a little, right? Because now you're saying you want some degree of user control, or application control, and other times you want the model to control it. So now we're choosing subsets of tools. I don't know.Justin/David [00:49:00]: Yeah, I think that's a fair concern. The way I think about it, and this is a core MCP design principle, is that at the end of the day the client application, and by extension the user, should be in full control of absolutely everything that's happening via MCP. When we say that tools are model controlled, what we really mean is that tools should only be invoked by the model. There really shouldn't be an application or user interaction where, as a user, I say: now use this tool. Occasionally you might do that for prompting reasons, but I don't think that should be a UI affordance.
But the client application or the user deciding to filter out things that MCP servers are offering? Totally reasonable. Or even transforming them: you could imagine a client application that takes tool descriptions from an MCP server and enriches them, makes them better. We really want the client applications to have full control in the MCP paradigm. In addition, though, one thing that's very early in my thinking is that there might be an addition to the protocol giving the server author the ability to logically group certain primitives together, because they might know some of these logical groupings better, and that could encompass prompts, resources, and tools at the same time. I mean, we can have a design discussion on that. Personally, my take would be that those should be separate MCP servers, and then the user should be able to compose them together. But we can figure it out.Alessio [00:50:31]: Is there going to be an MCP standard library, so to speak, of canonical servers, a "do not build this, we're going to take care of it" set that can be the building blocks people compose? Or do you expect people to just rebuild their own MCP servers for a lot of things?Justin/David [00:50:49]: I think we will not be prescriptive in that sense. Let me rephrase: I have a long history in open source, and I feel the bazaar approach to this problem is somewhat useful, so that the best and most interesting option wins. I don't think we want to be very prescriptive.
I definitely foresee, and this already exists, that there will be 25 GitHub servers and 25 Postgres servers and whatnot. That's all cool, and that's good; they all add something in their own way. But effectively, over months or years, the ecosystem will converge on a set of very widely used ones. I don't know if you'd call that winning, but those will be the most used, and I think that's completely fine, because being prescriptive about this isn't of any use. I do think, of course, that there will be MCP servers, and you see them already, driven by companies for their products, and those will probably be the canonical implementations. If you want to work with Cloudflare Workers and use an MCP server for that, you'll probably want the one developed by Cloudflare. Yeah. There's maybe a related thing here too, one big thing worth thinking about where we don't have any solutions ready to go: this question of trust, or vetting is maybe a better word. How do you determine which MCP servers are the good and safe ones to use? There could be any number of GitHub MCP server implementations, and that's totally fine, but you want to make sure you're not using ones that are really sus, right? So we're trying to think about how to endow reputation: if, hypothetically, Anthropic says "we've vetted this, it meets our criteria for secure coding," how can that be reflected in this open model where everyone in the ecosystem can benefit? We don't really know the answer yet, but it's very much top of mind.Alessio [00:52:49]: I think that's a great design choice of MCP, that it's language agnostic.
To my knowledge, there's no official Anthropic Ruby SDK, nor an OpenAI one, and Alex Rudall does a great job building those community SDKs. But with MCP, you don't have to translate an SDK into all these languages. You build one interface and bless that interface as Anthropic. So yeah, that's nice.swyx [00:53:18]: I have a quick answer to the trust question. Obviously, five or six different registries have already popped up, and you guys have announced your official registry that's on the way. A registry is very tempting to layer download counts, likes, reviews, and some kind of trust signal on top of. I think it's kind of brittle: no matter what social proof you offer, the next update can compromise a trusted package, and that's actually the one that does the most damage, right? Setting up a trust system creates the very damage that comes from abusing it. So I actually want to encourage people to try out MCP Inspector, because all you have to do is look at the traffic. And I think that goes for a lot of security issues.Justin/David [00:54:03]: Yeah, absolutely. Cool. That's the very classic supply chain problem that all registries effectively have, and there are different approaches to it. You can take the Apple approach: vet things with an army of automated systems and review teams, and effectively build an app store. That's one approach to this type of problem, and it kind of works in a certain set of ways. But I don't think it works in an open source ecosystem, where you always end up with a registry kind of approach, similar to npm and PyPI.swyx [00:54:36]: And they all inherently have these supply chain attack problems, right?
Yeah, yeah, totally. Quick time check: I think we're going to go for another 20, 25 minutes. Is that okay for you guys? Okay, awesome. Cool. I wanted to double-click, take the time. We previewed a little bit of the future stuff, so I want to leave that to the end: the registry, stateless servers and remote servers, all the other stuff. But I wanted to double-click a little more on the launch, the core servers that are part of the official repo. Some of them are special ones, like the ones we already talked about, so let me pull them up. For example, you mentioned memory, you mentioned sequential thinking. I really encourage people to look at these, what I call special servers. They're not normal servers in the sense of wrapping some API so that it's easier to interact with than working with the API directly. I'll highlight the memory one first, just because there are a few memory startups, but you actually don't need them if you just use this one. It's also like 200 lines of code; it's super simple. Obviously, if you need to scale it up, you should probably use some more battle-tested thing, but if you're just introducing memory, I think this is a really good implementation. I don't know if there are special stories you want to highlight with some of these.Justin/David [00:56:00]: No, I don't think there are special stories. A lot of these, not all of them, but a lot of them, originated from that hackathon I mentioned before, where folks got excited about the idea of MCP. People inside Anthropic who wanted to have memory, or wanted to play around with the idea, could quickly prototype something using MCP in a way that wasn't possible before.
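The memory server really is small; as a flavor of it, here's a hypothetical few-dozen-line knowledge-graph-style store of the kind a memory tool could expose. The entity and relation shapes here are illustrative, not the actual server's schema:

```python
import json

class MemoryStore:
    """Toy persistent memory: entities with observations, plus relations,
    roughly the shape a memory MCP server can expose as tools."""

    def __init__(self):
        self.entities = {}   # entity name -> list of observation strings
        self.relations = []  # (source, relation, target) triples

    def remember(self, entity: str, observation: str):
        self.entities.setdefault(entity, []).append(observation)

    def relate(self, src: str, relation: str, dst: str):
        self.relations.append((src, relation, dst))

    def recall(self, query: str) -> dict:
        """Return every entity whose name or observations mention the query."""
        q = query.lower()
        return {name: obs for name, obs in self.entities.items()
                if q in name.lower() or any(q in o.lower() for o in obs)}

    def dump(self) -> str:
        # Serialize so a real server could write this to disk between sessions.
        return json.dumps({"entities": self.entities,
                           "relations": self.relations})
```

Expose `remember`, `relate`, and `recall` as tools, persist `dump()` between sessions, and you have the essence of cross-conversation memory.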
You don't have to become the end-to-end expert, and you don't have to have access to some private, proprietary code base. You can just extend Claude with this memory capability. So that's how a lot of these came about, along with thinking about the breadth of functionality we wanted to demonstrate at launch.swyx [00:56:47]: Totally. And I think that is partially why your launch was successful: you launched with a sufficiently spanning set of examples, and people just copy, paste, and expand from there. I would also highlight
If you're in SF: Join us for the Claude Plays Pokemon hackathon this Sunday!If you're not: Fill out the 2025 State of AI Eng survey for $250 in Amazon cards!We are SO excited to share our conversation with Dharmesh Shah, co-founder of HubSpot and creator of Agent.ai.A particularly compelling concept we discussed is the idea of "hybrid teams" - the next evolution in workplace organization where human workers collaborate with AI agents as team members. Just as we previously saw hybrid teams emerge in terms of full-time vs. contract workers, or in-office vs. remote workers, Dharmesh predicts that the next frontier will be teams composed of both human and AI members. This raises interesting questions about team dynamics, trust, and how to effectively delegate tasks between human and AI team members.The discussion of business models in AI reveals an important distinction between Work as a Service (WaaS) and Results as a Service (RaaS), something Dharmesh has written extensively about. While RaaS has gained popularity, particularly in customer support applications where outcomes are easily measurable, Dharmesh argues that this model may be over-indexed. Not all AI applications have clearly definable outcomes or consistent economic value per transaction, making WaaS more appropriate in many cases. This insight is particularly relevant for businesses considering how to monetize AI capabilities.The technical challenges of implementing effective agent systems are also explored, particularly around memory and authentication. Shah emphasizes the importance of cross-agent memory sharing and the need for more granular control over data access. He envisions a future where users can selectively share parts of their data with different agents, similar to how OAuth works but with much finer control. 
This points to significant opportunities in developing infrastructure for secure and efficient agent-to-agent communication and data sharing.Other highlights from our conversation* The Evolution of AI-Powered Agents – Exploring how AI agents have evolved from simple chatbots to sophisticated multi-agent systems, and the role of MCPs in enabling that.* Hybrid Digital Teams and the Future of Work – How AI agents are becoming teammates rather than just tools, and what this means for business operations and knowledge work.* Memory in AI Agents – The importance of persistent memory in AI systems and how shared memory across agents could enhance collaboration and efficiency.* Business Models for AI Agents – Exploring the shift from software as a service (SaaS) to work as a service (WaaS) and results as a service (RaaS), and what this means for monetization.* The Role of Standards Like MCP – Why MCP has been widely adopted and how it enables agent collaboration, tool use, and discovery.* The Future of AI Code Generation and Software Engineering – How AI-assisted coding is changing the role of software engineers and what skills will matter most in the future.* Domain Investing and Efficient Markets – Dharmesh's approach to domain investing and how inefficiencies in digital asset markets create business opportunities.* The Philosophy of Saying No – Lessons from "Sorry, You Must Pass" and how prioritization leads to greater productivity and focus.Timestamps* 00:00 Introduction and Guest Welcome* 02:29 Dharmesh Shah's Journey into AI* 05:22 Defining AI Agents* 06:45 The Evolution and Future of AI Agents* 13:53 Graph Theory and Knowledge Representation* 20:02 Engineering Practices and Overengineering* 25:57 The Role of Junior Engineers in the AI Era* 28:20 Multi-Agent Systems and MCP Standards* 35:55 LinkedIn's Legal Battles and Data Scraping* 37:32 The Future of AI and Hybrid Teams* 39:19 Building Agent AI: A Professional Network for Agents* 40:43 Challenges and Innovations 
in Agent AI* 45:02 The Evolution of UI in AI Systems* 01:00:25 Business Models: Work as a Service vs. Results as a Service* 01:09:17 The Future Value of Engineers* 01:09:51 Exploring the Role of Agents* 01:10:28 The Importance of Memory in AI* 01:11:02 Challenges and Opportunities in AI Memory* 01:12:41 Selective Memory and Privacy Concerns* 01:13:27 The Evolution of AI Tools and Platforms* 01:18:23 Domain Names and AI Projects* 01:32:08 Balancing Work and Personal Life* 01:35:52 Final Thoughts and ReflectionsTranscriptAlessio [00:00:04]: Hey everyone, welcome back to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.swyx [00:00:12]: Hello, and today we're super excited to have Dharmesh Shah to join us. I guess your relevant title here is founder of Agent AI.Dharmesh [00:00:20]: Yeah, that's true for this. Yeah, creator of Agent.ai and co-founder of HubSpot.swyx [00:00:25]: Co-founder of HubSpot, which I followed for many years, I think 18 years now, gonna be 19 soon. And you caught, you know, people can catch up on your HubSpot story elsewhere. I should also thank Shaan Puri, who I've chatted with back and forth, who's been, I guess, getting me in touch with your people. But also, I think like, just giving us a lot of context, because obviously, My First Million joined you guys, and they've been chatting with you guys a lot. So for the business side, we can talk about that, but I kind of wanted to engage your CTO, agent, engineer side of things. So how did you get agent religion?Dharmesh [00:01:00]: Let's see. So I've been working, I'll take like a half step back, a decade or so ago, even though actually more than that. So even before HubSpot, the company I was contemplating, that I had a name for, was called Ingenisoft. And the idea behind Ingenisoft was a natural language interface to business software. Now realize this is 20 years ago, so that was a hard thing to do.
But the actual use case that I had in mind was, you know, we had data sitting in business systems like a CRM or something like that. And my kind of what I thought clever at the time. Oh, what if we used email as the kind of interface to get to business software? And the motivation for using email is that it automatically works when you're offline. So imagine I'm getting on a plane or I'm on a plane. There was no internet on planes back then. It's like, oh, I'm going through business cards from an event I went to. I can just type things into an email just to have them all in the backlog. When it reconnects, it sends those emails to a processor that basically kind of parses effectively the commands and updates the software, sends you the file, whatever it is. And there was a handful of commands. I was a little bit ahead of the times in terms of what was actually possible. And I reattempted this natural language thing with a product called ChatSpot that I did back 20...swyx [00:02:12]: Yeah, this is your first post-ChatGPT project.Dharmesh [00:02:14]: I saw it come out. Yeah. And so I've always been kind of fascinated by this natural language interface to software. Because, you know, as software developers, myself included, we've always said, oh, we build intuitive, easy-to-use applications. And it's not intuitive at all, right? Because what we're doing is... We're taking the mental model that's in our head of what we're trying to accomplish with said piece of software and translating that into a series of touches and swipes and clicks and things like that. And there's nothing natural or intuitive about it. And so natural language interfaces, for the first time, you know, whatever the thought is you have in your head and expressed in whatever language that you normally use to talk to yourself in your head, you can just sort of emit that and have software do something. And I thought that was kind of a breakthrough, which it has been. And it's gone. 
So that's where I first started getting into the journey. I started because now it actually works, right? So once we got ChatGPT and you can take, even with a few-shot example, convert something into structured, even back in the ChatGPT 3.5 days, it did a decent job in a few-shot example, convert something to structured text if you knew what kinds of intents you were going to have. And so that happened. And that ultimately became a HubSpot project. But then agents intrigued me because I'm like, okay, well, that's the next step here. So chat's great. Love Chat UX. But if we want to do something even more meaningful, it felt like the next kind of advancement is not this kind of, I'm chatting with some software in a kind of a synchronous back and forth model, is that software is going to do things for me in kind of a multi-step way to try and accomplish some goals. So, yeah, that's when I first got started. It's like, okay, what would that look like? Yeah. And I've been obsessed ever since, by the way.Alessio [00:03:55]: Which goes back to your first experience with it, which is like you're offline. Yeah. And you want to do a task. You don't need to do it right now. You just want to queue it up for somebody to do it for you. Yes. As you think about agents, like, let's start at the easy question, which is like, how do you define an agent? Maybe. You mean the hardest question in the universe? Is that what you mean?Dharmesh [00:04:12]: You said you have an irritating take. I do have an irritating take. I think, well, some number of people have been irritated, including within my own team. So I have a very broad definition for agents, which is it's AI-powered software that accomplishes a goal. Period. That's it. And what irritates people about it is like, well, that's so broad as to be completely non-useful. And I understand that. I understand the criticism.
But in my mind, if you kind of fast forward months, I guess, in AI years, the implementation of it, and we're already starting to see this, and we'll talk about this, different kinds of agents, right? So I think in addition to having a usable definition, and I like yours, by the way, and we should talk more about that, that you just came out with, the classification of agents actually is also useful, which is, is it autonomous or non-autonomous? Does it have a deterministic workflow? Does it have a non-deterministic workflow? Is it working synchronously? Is it working asynchronously? Then you have the different kind of interaction modes. Is it a chat agent, kind of like a customer support agent would be? You're having this kind of back and forth. Is it a workflow agent that just does a discrete number of steps? So there's all these different flavors of agents. So if I were to draw it in a Venn diagram, I would draw a big circle that says, this is agents, and then I have a bunch of circles, some overlapping, because they're not mutually exclusive. And so I think that's what's interesting, and we're seeing development along a bunch of different paths, right? So if you look at the first implementation of agent frameworks, you look at Baby AGI and AutoGPT, I think it was, not Autogen, that's the Microsoft one. They were way ahead of their time because they assumed this level of reasoning and execution and planning capability that just did not exist, right? So it was an interesting thought experiment, which is what it was. Even the guy that, I'm an investor in Yohei's fund that did Baby AGI. It wasn't ready, but it was a sign of what was to come. And so the question then is, when is it ready? And so lots of people talk about the state of the art when it comes to agents. I'm a pragmatist, so I think of the state of the practical.
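The classification Dharmesh lays out is a set of orthogonal axes rather than a single definition: autonomy, workflow determinism, synchrony, and interaction mode. A minimal sketch of that taxonomy in Python (the names here are illustrative, not from any real framework):

```python
from dataclasses import dataclass
from enum import Enum

class Interaction(Enum):
    CHAT = "chat"          # back-and-forth, like a customer support agent
    WORKFLOW = "workflow"  # a discrete number of steps

@dataclass(frozen=True)
class AgentProfile:
    """One point in the 'Venn diagram' of agent flavors."""
    autonomous: bool       # acts without a human approving each step?
    deterministic: bool    # fixed workflow vs. model-planned steps?
    synchronous: bool      # caller waits, or fire-and-forget?
    interaction: Interaction

# The same axes describe very different systems:
support_bot = AgentProfile(autonomous=False, deterministic=True,
                           synchronous=True, interaction=Interaction.CHAT)
batch_researcher = AgentProfile(autonomous=True, deterministic=False,
                                synchronous=False, interaction=Interaction.WORKFLOW)
```

The circles overlap because the axes are independent: any combination of values is a legal "flavor" of agent.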
It's like, okay, well, what can I actually build that has commercial value or solves actually some discrete problem with some baseline of repeatability or verifiability?swyx [00:06:22]: There was a lot, and very, very interesting. I'm not irritated by it at all. Okay. As you know, I take a... There's a lot of anthropological view or linguistics view. And in linguistics, you don't want to be prescriptive. You want to be descriptive. Yeah. So you're a goals guy. That's the key word in your thing. And other people have other definitions that might involve like delegated trust or non-deterministic work, LLM in the loop, all that stuff. The other thing I was thinking about, just the comment on Baby AGI, AutoGPT. Yeah. In that piece that you just read, I was able to go through our backlog and just kind of track the winter of agents and then the summer now. Yeah. And it's... We can tell the whole story as an oral history, just following that thread. And it's really just like, I think, I tried to explain the why now, right? Like I had, there's better models, of course. There's better tool use with like, they're just more reliable. Yep. Better tools with MCP and all that stuff. And I'm sure you have opinions on that too. Business model shift, which you like a lot. I just heard you talk about RaaS with the MFM guys. Yep. Cost is dropping a lot. Yep. Inference is getting faster. There's more model diversity. Yep. Yep. I think it's a subtle point. It means that like, you have different models with different perspectives. You don't get stuck in the basin of performance of a single model. Sure. You can just get out of it by just switching models. Yep. Multi-agent research and RL fine tuning. So I just wanted to let you respond to like any of that.Dharmesh [00:07:44]: Yeah. A couple of things. Connecting the dots on the kind of the definition side of it. So we'll get the irritation out of the way completely. I have one more, even more irritating leap on the agent definition thing.
So here's the way I think about it. By the way, the kind of word agent, I looked it up, like the English dictionary definition. The old school agent, yeah. Is when you have someone or something that does something on your behalf, like a travel agent or a real estate agent acts on your behalf. It's like proxy, which is a nice kind of general definition. So the other direction I'm sort of headed, and it's going to tie back to tool calling and MCP and things like that, is if you, and I'm not a biologist by any stretch of the imagination, but we have these single-celled organisms, right? Like the simplest possible form of what one would call life. But it's still life. It just happens to be single-celled. And then you can combine cells and then cells become specialized over time. And you have much more sophisticated organisms, you know, kind of further down the spectrum. In my mind, at the most fundamental level, you can almost think of having atomic agents. What is the simplest possible thing that's an agent that can still be called an agent? What is the equivalent of a kind of single-celled organism? And the reason I think that's useful is right now we're headed down the road, which I think is very exciting around tool use, right? That says, okay, the LLMs now can be provided a set of tools that it calls to accomplish whatever it needs to accomplish in the kind of furtherance of whatever goal it's trying to get done. And I'm not overly bothered by it, but if you think about it, if you just squint a little bit and say, well, what if everything was an agent? And what if tools were actually just atomic agents? Because then it's turtles all the way down, right? Then it's like, oh, well, all that's really happening with tool use is that we have a network of agents that know about each other through something like an MCP and can kind of decompose a particular problem and say, oh, I'm going to delegate this to this set of agents.
And why do we need to draw this distinction between tools, which are functions most of the time? And an actual agent. And so I'm going to write this irritating LinkedIn post, you know, proposing this. It's like, okay. And I'm not suggesting we should call even functions, you know, call them agents. But there is a certain amount of elegance that happens when you say, oh, we can just reduce it down to one primitive, which is an agent that you can combine in complicated ways to kind of raise the level of abstraction and accomplish higher order goals. Anyway, that's my answer. I'd say that's a success. Thank you for coming to my TED Talk on agent definitions.Alessio [00:09:54]: How do you define the minimum viable agent? Do you already have a definition for, like, where you draw the line between a cell and an atom? Yeah.Dharmesh [00:10:02]: So in my mind, it has to, at some level, use AI in order for it to—otherwise, it's just software. It's like, you know, we don't need another word for that. And so that's probably where I draw the line. So then the question, you know, the counterargument would be, well, if that's true, then lots of tools themselves are actually not agents because they're just doing a database call or a REST API call or whatever it is they're doing. And that does not necessarily qualify them, which is a fair counterargument. And I accept that. It's like a good argument. I still like to think about—because we'll talk about multi-agent systems, because I think—so we've accepted, which I think is true, lots of people have said it, and you've hopefully combined some of those clips of really smart people saying this is the year of agents, and I completely agree, it is the year of agents. But then shortly after that, it's going to be the year of multi-agent systems or multi-agent networks. I think that's where it's going to be headed next year. Yeah.swyx [00:10:54]: OpenAI's already on that. Yeah. My quick philosophical engagement with you on this.
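The "one primitive" idea Dharmesh is proposing, where a tool is just the atomic (single-celled) case of an agent and bigger agents are compositions of smaller ones, reduces to a single recursive interface. A hypothetical sketch, not any real framework's API:

```python
from typing import Callable, Protocol

class Agent(Protocol):
    """The single primitive: anything that pursues a goal and returns a result."""
    def run(self, goal: str) -> str: ...

class Tool:
    """The 'atomic agent': a plain function wrapped in the Agent shape."""
    def __init__(self, fn: Callable[[str], str]):
        self.fn = fn
    def run(self, goal: str) -> str:
        return self.fn(goal)

class CompositeAgent:
    """A 'multi-cellular' agent: delegates the goal across sub-agents.
    A real agent would use an LLM to plan and delegate; here we just
    fan the goal out and join the results to show the recursion."""
    def __init__(self, sub_agents: list[Agent]):
        self.sub_agents = sub_agents
    def run(self, goal: str) -> str:
        return " | ".join(a.run(goal) for a in self.sub_agents)

shout = Tool(lambda g: g.upper())
echo = Tool(lambda g: g)
# Turtles all the way down: a composite containing a composite.
team = CompositeAgent([shout, CompositeAgent([echo])])
```

Because `Tool` and `CompositeAgent` expose the same `run` interface, the caller never needs the tool-versus-agent distinction the LinkedIn post argues against.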
I often think about kind of the other spectrum, the other end of the cell spectrum. So single cell is life, multi-cell is life, and you clump a bunch of cells together in a more complex organism, they become organs, like an eye and a liver or whatever. And then obviously we consider ourselves one life form. There's not like a lot of lives within me. I'm just one life. And now, obviously, I don't think people don't really like to anthropomorphize agents and AI. Yeah. But we are extending our consciousness and our brain and our functionality out into machines. I just saw you were a Bee. Yeah. Which is, you know, it's nice. I have a limitless pendant in my pocket.Dharmesh [00:11:37]: I got one of these boys. Yeah.swyx [00:11:39]: I'm testing it all out. You know, got to be early adopters. But like, we want to extend our personal memory into these things so that we can be good at the things that we're good at. And, you know, machines are good at it. Machines are there. So like, my definition of life is kind of like going outside of my own body now. I don't know if you've ever had like reflections on that. Like how yours. How our self is like actually being distributed outside of you. Yeah.Dharmesh [00:12:01]: I don't fancy myself a philosopher. But you went there. So yeah, I did go there. I'm fascinated by kind of graphs and graph theory and networks and have been for a long, long time. And to me, we're sort of all nodes in this kind of larger thing. It just so happens that we're looking at individual kind of life forms as they exist right now. But so the idea is when you put a podcast out there, there's these little kind of nodes you're putting out there of like, you know, conceptual ideas. Once again, you have varying kind of forms of those little nodes that are up there and are connected in varying and sundry ways. And so I just think of myself as being a node in a massive, massive network. And I'm producing more nodes as I put content or ideas. 
And, you know, you spend some portion of your life collecting dots, experiences, people, and some portion of your life then connecting dots from the ones that you've collected over time. And I found that really interesting things happen and you really can't know in advance how those dots are necessarily going to connect in the future. And that's, yeah. So that's my philosophical take. That's the, yes, exactly. Coming back.Alessio [00:13:04]: Yep. Do you like graph as an agent abstraction? That's been one of the hot topics with LangGraph and Pydantic and all that.Dharmesh [00:13:11]: I do. The thing I'm more interested in terms of use of graphs, and there's lots of work happening on that now, is graph data stores as an alternative in terms of knowledge stores and knowledge graphs. Yeah. Because, you know, so I've been in software now 30 plus years, right? So it's not 10,000 hours. It's like 100,000 hours that I've spent doing this stuff. And so I grew up with, so back in the day, you know, I started on mainframes. There was a product called IMS from IBM, which is basically an index database, what we'd call like a key value store today. Then we've had relational databases, right? We have tables and columns and foreign key relationships. We all know that. We have document databases like MongoDB, which is sort of a nested structure keyed by a specific index. We have vector stores, vector embedding database. And graphs are interesting for a couple of reasons. One is, so it's not classically structured in a relational way. When you say structured database, to most people, they're thinking tables and columns and in relational database and set theory and all that. Graphs still have structure, but it's not the tables and columns structure. And you could wonder, and people have made this case, that they are a better representation of knowledge for LLMs and for AI generally than other things.
So that's kind of thing number one conceptually, and that might be true, I think is possibly true. And the other thing that I really like about that in the context of, you know, I've been in the context of data stores for RAG is, you know, RAG, you say, oh, I have a million documents, I'm going to build the vector embeddings, I'm going to come back with the top X based on the semantic match, and that's fine. All that's very, very useful. But the reality is something gets lost in the chunking process and the, okay, well, those tend, you know, like, you don't really get the whole picture, so to speak, and maybe not even the right set of dimensions on the kind of broader picture. And it makes intuitive sense to me that if we did capture it properly in a graph form, that maybe that feeding into a RAG pipeline will actually yield better results for some use cases, I don't know, but yeah.Alessio [00:15:03]: And do you feel like at the core of it, there's this difference between imperative and declarative programs? Because if you think about HubSpot, it's like, you know, people and graph kind of goes hand in hand, you know, but I think maybe the software before was more like primary foreign key based relationship, versus now the models can traverse through the graph more easily.Dharmesh [00:15:22]: Yes. So I like that representation. There's something. It's just conceptually elegant about graphs and just from the representation of it, they're much more discoverable, you can kind of see it, there's observability to it, versus kind of embeddings, which you can't really do much with as a human. You know, once they're in there, you can't pull stuff back out. But yeah, I like that kind of idea of it. And the other thing that's kind of, because I love graphs, I've been long obsessed with PageRank from back in the early days. And, you know, one of the kind of simplest algorithms in terms of coming up, you know, with a phone, everyone's been exposed to PageRank. 
And the idea is that, and so I had this other idea for a project, not a company, and I have hundreds of these, called NodeRank, is to be able to take the idea of PageRank and apply it to an arbitrary graph that says, okay, I'm going to define what authority looks like and say, okay, well, that's interesting to me, because then if you say, I'm going to take my knowledge store, and maybe this person that contributed some number of chunks to the graph data store has more authority on this particular use case or prompt that's being submitted than this other one that may, or maybe this one was more. popular, or maybe this one has, whatever it is, there should be a way for us to kind of rank nodes in a graph and sort them in some, some useful way. Yeah.swyx [00:16:34]: So I think that's generally useful for, for anything. I think the, the problem, like, so even though at my conferences, GraphRag is super popular and people are getting knowledge, graph religion, and I will say like, it's getting space, getting traction in two areas, conversation memory, and then also just rag in general, like the, the, the document data. Yeah. It's like a source. Most ML practitioners would say that knowledge graph is kind of like a dirty word. The graph database, people get graph religion, everything's a graph, and then they, they go really hard into it and then they get a, they get a graph that is too complex to navigate. Yes. And so like the, the, the simple way to put it is like you at running HubSpot, you know, the power of graphs, the way that Google has pitched them for many years, but I don't suspect that HubSpot itself uses a knowledge graph. No. Yeah.Dharmesh [00:17:26]: So when is it over engineering? Basically? It's a great question. I don't know. So the question now, like in AI land, right, is the, do we necessarily need to understand? So right now, LLMs for, for the most part are somewhat black boxes, right? 
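The NodeRank idea, classic PageRank run over an arbitrary graph with a pluggable notion of authority, is easy to sketch. The power iteration below is standard PageRank; applying it to a knowledge store where edges mean "contributed to" or "was cited by" is the speculative part Dharmesh is describing:

```python
def pagerank(graph, damping=0.85, iters=50):
    """graph: {node: [nodes it links to]}. Returns authority scores summing to 1."""
    nodes = list(graph)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - damping) / n for v in nodes}
        for v, outs in graph.items():
            if outs:
                share = damping * rank[v] / len(outs)
                for u in outs:
                    new[u] += share
            else:  # dangling node: spread its rank evenly
                for u in nodes:
                    new[u] += damping * rank[v] / n
        rank = new
    return rank

# Toy knowledge graph: chunks "endorsing" (linking to) other chunks.
g = {"a": ["b"], "b": ["c"], "c": ["b"], "d": ["b"]}
scores = pagerank(g)
# "b" accumulates the most authority: three chunks point at it.
```

Sorting a retrieval candidate set by these scores is one way to realize the "rank nodes in a graph in some useful way" idea; what counts as a link (authorship, citation, popularity) is the design decision.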
We sort of understand how the, you know, the algorithm itself works, but we really don't know what's going on in there and, and how things come out. So if a graph data store is able to produce the outcomes we want, it's like, here's a set of queries I want to be able to submit and then it comes out with useful content. Maybe the underlying data store is as opaque as a vector embeddings or something like that, but maybe it's fine. Maybe we don't necessarily need to understand it to get utility out of it. And so maybe if it's messy, that's okay. Um, that's, it's just another form of lossy compression. Uh, it's just lossy in a way that we just don't completely understand in terms of, because it's going to grow organically. Uh, and it's not structured. It's like, ah, we're just gonna throw a bunch of stuff in there. Let the, the equivalent of the embedding algorithm, whatever they called in graph land. Um, so the one with the best results wins. I think so. Yeah.swyx [00:18:26]: Or is this the practical side of me is like, yeah, it's, if it's useful, we don't necessarilyDharmesh [00:18:30]: need to understand it.swyx [00:18:30]: I have, I mean, I'm happy to push back as long as you want. Uh, it's not practical to evaluate like the 10 different options out there because it takes time. It takes people, it takes, you know, resources, right? Set. That's the first thing. Second thing is your evals are typically on small things and some things only work at scale. Yup. Like graphs. Yup.Dharmesh [00:18:46]: Yup. That's, yeah, no, that's fair. And I think this is one of the challenges in terms of implementation of graph databases is that the most common approach that I've seen developers do, I've done it myself, is that, oh, I've got a Postgres database or a MySQL or whatever. I can represent a graph with a very set of tables with a parent child thing or whatever. And that sort of gives me the ability, uh, why would I need anything more than that? 
And the answer is, well, if you don't need anything more than that, you don't need anything more than that. But there's a high chance that you're sort of missing out on the actual value that, uh, the graph representation gives you. Which is the ability to traverse the graph, uh, efficiently in ways that kind of going through the, uh, traversal in a relational database form, even though structurally you have the data, practically you're not gonna be able to pull it out in, in useful ways. Uh, so you wouldn't like represent a social graph, uh, in, in using that kind of relational table model. It just wouldn't scale. It wouldn't work.swyx [00:19:36]: Uh, yeah. Uh, I think we want to move on to MCP. Yeah. But I just want to, like, just engineering advice. Yeah. Uh, obviously you've, you've, you've run, uh, you've, you've had to do a lot of projects and run a lot of teams. Do you have a general rule for over-engineering or, you know, engineering ahead of time? You know, like, because people, we know premature engineering is the root of all evil. Yep. But also sometimes you just have to. Yep. When do you do it? Yes.Dharmesh [00:19:59]: It's a great question. This is, uh, a question as old as time almost, which is what's the right and wrong levels of abstraction. That's effectively what, uh, we're answering when we're trying to do engineering. I tend to be a pragmatist, right? So here's the thing. Um, lots of times doing something the right way. Yeah. It's like a marginal increased cost in those cases. Just do it the right way. And this is what makes a, uh, a great engineer or a good engineer better than, uh, a not so great one. It's like, okay, all things being equal. If it's going to take you, you know, roughly close to constant time anyway, might as well do it the right way. Like, so do things well, then the question is, okay, well, am I building a framework as the reusable library? 
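The traversal point above is the crux: edges stored in a parent-child table need one self-join (or a recursive CTE) per hop, while an adjacency list makes a multi-hop query a simple walk. A small sketch of the traversal side, assuming an in-memory social graph:

```python
from collections import deque

def within_hops(adj, start, max_hops):
    """BFS over an adjacency list: everyone reachable in <= max_hops.
    The relational equivalent is a self-join per hop, which is what
    stops scaling on a large social graph."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist == max_hops:
            continue  # don't expand past the hop limit
        for friend in adj.get(node, []):
            if friend not in seen:
                seen.add(friend)
                frontier.append((friend, dist + 1))
    seen.discard(start)
    return seen

friends = {"ann": ["bob"], "bob": ["cat", "dan"], "cat": ["eve"]}
# Friends-of-friends of "ann": bob (1 hop), cat and dan (2 hops).
```

Structurally the relational table holds the same data; practically, each extra hop there is another join, whereas here it is just another layer of the queue.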
To what degree, uh, what am I anticipating in terms of what's going to need to change in this thing? Uh, you know, along what dimension? And then I think like a business person in some ways, like what's the return on calories, right? So, uh, and you look at, um, energy, the expected value of it's like, okay, here are the five possible things that could happen, uh, try to assign probabilities like, okay, well, if there's a 50% chance that we're going to go down this particular path at some day, like, or one of these five things is going to happen and it costs you 10% more to engineer for that. It's basically, it's something that yields a kind of interest compounding value. Um, as you get closer to the time of, of needing that versus having to take on debt, which is when you under engineer it, you're taking on debt. You're going to have to pay off when you do get to that eventuality where something happens. One thing as a pragmatist, uh, so I would rather under engineer something than over engineer it. If I were going to err on the side of something, and here's the reason is that when you under engineer it, uh, yes, you take on tech debt, uh, but the interest rate is relatively known and payoff is very, very possible, right? Which is, oh, I took a shortcut here as a result of which now this thing that should have taken me a week is now going to take me four weeks. Fine. But if that particular thing that you thought might happen, never actually, you never have that use case transpire or just doesn't, it's like, well, you just save yourself time, right? And that has value because you were able to do other things instead of, uh, kind of slightly over-engineering it away, over-engineering it. But there's no perfect answers in art form in terms of, uh, and yeah, we'll, we'll bring kind of this layers of abstraction back on the code generation conversation, which we'll, uh, I think I have later on, butAlessio [00:22:05]: I was going to ask, we can just jump ahead quickly. 
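The "return on calories" heuristic above can be made concrete: weigh the certain cost of engineering for a future need today against the probability-weighted cost of paying the debt later. The numbers below are invented purely for illustration:

```python
def should_engineer_now(p_needed, extra_cost_now, debt_cost_later):
    """Expected-value version of the over/under-engineering call.

    p_needed:        probability the anticipated need ever materializes
    extra_cost_now:  certain extra cost (in, say, weeks) of building for it today
    debt_cost_later: retrofit cost if the need does arrive and you skipped it
    """
    expected_debt = p_needed * debt_cost_later
    return extra_cost_now < expected_debt

# Dharmesh's example: a 50% chance the path happens, engineering for it
# costs 10% more now; suppose retrofitting later would cost 30% more.
result = should_engineer_now(0.5, 0.10, 0.30)  # expected debt 0.15 > 0.10
```

His bias toward under-engineering maps to a second observation the formula leaves out: `debt_cost_later` has a known, bounded "interest rate," while over-engineering for a need that never arrives is a pure loss.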
Yeah. Like, as you think about vibe coding and all that, how does the, yeah, percentage of potential usefulness change? When I feel like we over-engineer, a lot of times it's like the investment in syntax, it's less about the investment in, like, architecting. Yep. Yeah. How does that change your calculus?Dharmesh [00:22:22]: A couple of things, right? One is, um, so, you know, going back to that kind of ROI or a return on calories, kind of calculus or heuristic you think through, it's like, okay, well, what is it going to cost me to put this layer of abstraction above the code that I'm writing now, uh, in anticipating kind of future needs. If the cost of fixing, uh, or doing under engineering right now. Uh, we'll trend towards zero that says, okay, well, I don't have to get it right right now because even if I get it wrong, I'll run the thing for six hours instead of 60 minutes or whatever. It doesn't really matter, right? Like, because that's going to trend towards zero to be able, the ability to refactor a code. Um, and because we're going to not that long from now, we're going to have, you know, large code bases be able to exist, uh, you know, as, as context, uh, for a code generation or a code refactoring, uh, model. So I think it's going to make it, uh, make the case for under engineering, uh, even stronger. Which is why I take on that cost. You just pay the interest when you get there, it's not, um, just go on with your life vibe coded and, uh, come back when you need to. Yeah.Alessio [00:23:18]: Sometimes I feel like there's no decision-making in some things like, uh, today I built an autosave for like our internal notes platform and I literally just asked Cursor, can you add autosave? Yeah. I don't know if it's over- or under-engineering. Yep. I just vibe coded it. Yep.
And I feel like at some point we're going to get to the point where the models kindDharmesh [00:23:36]: of decide where the right line is, but this is where the, like the, in my mind, the danger is, right? So there's two sides to this. One is the cost of kind of development and coding and things like that stuff that, you know, we talk about. But then like in your example, you know, one of the risks that we have is that because adding a feature, uh, like a save or whatever the feature might be to a product as that price tends towards zero, are we going to be less discriminant about what features we add as a result of making more product products more complicated, which has a negative impact on the user and navigate negative impact on the business. Um, and so that's the thing I worry about if it starts to become too easy, are we going to be. Too promiscuous in our, uh, kind of extension, adding product extensions and things like that. It's like, ah, why not add X, Y, Z or whatever back then it was like, oh, we only have so many engineering hours or story points or however you measure things. Uh, that least kept us in check a little bit. Yeah.Alessio [00:24:22]: And then over engineering, you're like, yeah, it's kind of like you're putting that on yourself. Yeah. Like now it's like the models don't understand that if they add too much complexity, it's going to come back to bite them later. Yep. So they just do whatever they want to do. Yeah. And I'm curious where in the workflow that's going to be, where it's like, Hey, this is like the amount of complexity and over-engineering you can do before you got to ask me if we should actually do it versus like do something else.Dharmesh [00:24:45]: So you know, we've already, let's like, we're leaving this, uh, in the code generation world, this kind of compressed, um, cycle time. Right. 
It's like, okay, we went from auto-complete, uh, in GitHub Copilot to like, oh, finish this particular thing and hit tab to a, oh, I sort of know your file or whatever. I can write out a full function to you to now I can like hold a bunch of the context in my head. Uh, so we can do app generation, which we have now with Lovable and Bolt and Replit Agent and other things. So then the question is, okay, well, where does it naturally go from here? So we're going to generate products. Make sense. We might be able to generate platforms as though I want a platform for ERP that does this, whatever. And that includes the APIs, includes the product and the UI, and all the things that make for a platform. There's no nothing that says we would stop like, okay, can you generate an entire software company someday? Right. Uh, with the platform and the monetization and the go-to-market and the whatever. And you know, that that's interesting to me in terms of, uh, you know, what, when you take it to almost ludicrous levels of abstraction.swyx [00:25:39]: It's like, okay, turn it to 11. You mentioned vibe coding, so I have to, this is a blog post I haven't written, but I'm kind of exploring it. Is the junior engineer dead?Dharmesh [00:25:49]: I don't think so. I think what will happen is that the junior engineer will be able to, if all they're bringing to the table is the fact that they are a junior engineer, then yes, they're likely dead. But hopefully if they can communicate with carbon-based life forms, they can interact with product, if they're willing to talk to customers, they can take their kind of basic understanding of engineering and how kind of software works. I think that has value. So I have a 14-year-old right now who's taking Python programming class, and some people ask me, it's like, why is he learning coding? And my answer is, is because it's not about the syntax, it's not about the coding.
What he's learning is the fundamental thing of how things work. And there's value in that. I think there's going to be timeless value in systems thinking and abstractions and what that means. Whether functions manifest as math, which he's going to get exposed to regardless, there are some core primitives to the universe, I think, and the more you understand them, those are what I'd think of as really large dots in your life that will have a higher gravitational pull and value that you'll then be able to build on. So I want him to collect those dots, and he's not resisting. So it's like, okay, while he's still listening to me, I'm going to have him do things that I think will be useful.

swyx [00:26:59]: You know, part of one of the pitches that I evaluated for AI engineer as a term is that maybe the traditional interview path or career path of software engineer goes away, because what's the point of LeetCode? Yeah. And it actually matters more that you know how to work with AI to implement the things that you want. Yep.

Dharmesh [00:27:16]: That's one of the interesting things that's happened with generative AI. You go from machine learning and the models and that underlying form, which is true engineering, right? Like the actual, what I call real engineering. I don't think of myself as a real engineer, actually. I'm a developer. But now with generative AI, we call it AI, and it's obviously got its roots in machine learning, but it just feels fundamentally different to me. Like, you have the vibe. It's like, okay, well, this is just a whole different approach to software development, to so many different things.
And so I'm wondering now, an AI engineer, if you were to draw the Venn diagram, it's interesting: the cross between AI things, generative AI, what the tools are capable of, what the models do, this whole new body of knowledge we're still building out, which is still very young, intersected with classic software engineering. Yeah.

swyx [00:28:04]: I've described the overlap as something that separates out eventually until it's its own thing, but it's starting out as software engineering. Yeah.

Alessio [00:28:11]: That makes sense. So to close the vibe coding loop, the other big hype now is MCPs. Obviously, I would say Claude Desktop and Cursor are the two main drivers of MCP usage. I would say my favorite is the Sentry MCP. I can pull in errors and then put the context in Cursor. How do you think about that abstraction layer? Does it feel almost too magical in a way? Because you don't really see how the server itself is repackaging the information for you.

Dharmesh [00:28:41]: I think MCP as a standard is one of the better things that's happened in the world of AI, because a standard needed to exist, and absent a standard, there was a set of things that just weren't possible. Now, we can argue whether it's the best possible manifestation of a standard or not. Does it do too much? Does it do too little? I get that, but it's just simple enough to both be useful and unobtrusive. It's understandable and adoptable by mere mortals, right? It's not overly complicated. A reasonable engineer can stand up an MCP server relatively easily. The thing that has me excited about it is that I'm a big believer in multi-agent systems. And that goes back to this idea of an atomic agent.
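As background for readers: MCP messages are JSON-RPC 2.0, so the exchange a client performs for tool discovery and invocation can be sketched with plain dictionaries. The method names follow the MCP specification; the `get_issue` tool and its arguments are hypothetical, standing in for something like the Sentry example above.

```python
import json

def make_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request, the wire format MCP uses."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return msg

# 1. Discovery: ask the server what tools it offers.
list_req = make_request(1, "tools/list")

# 2. Invocation: call one of the discovered tools by name.
#    "get_issue" and its arguments are hypothetical.
call_req = make_request(2, "tools/call",
                        {"name": "get_issue", "arguments": {"id": "SENTRY-123"}})

wire = json.dumps(call_req)
```

The point of the sketch is the small surface area: a client that can emit these two message shapes can talk to any conforming server, which is what "adoptable by mere mortals" means in practice.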
So imagine the MCP server. Obviously it calls tools, but the way I think about it: I'm working on my current passion project, which is agent.ai, and we'll talk more about that in a little bit. I think we should, not to promote the project, but because there are some interesting ideas in there. One of which is this: if agents are going to collaborate and be able to delegate, there's going to need to be some form of discovery, and we're going to need some standard way. It's like, okay, well, I just need to know what this thing over here is capable of. We're going to need a registry, which Anthropic's working on, and I'm sure others will too; people have been building directories of MCP servers, and there's going to be a standard around that as well. How do you build out a directory of MCP servers? I think that's going to unlock so many things, and we're already starting to see it. So I think MCP or something like it is going to be the next major unlock, because it allows systems that don't know about each other, and don't need to; it's that decoupling of Sentry and whatever tools someone else was building. And it's not just about Claude Desktop or things like that. Even on the client side, I think we're going to see very interesting consumers of MCP, MCP clients, versus just the chatbot-style things like Claude Desktop and Cursor. But yeah, I'm very excited about MCP in that general direction.

swyx [00:30:39]: I think the typical cynical developer take is: we have OpenAPI. Yeah. What's the new thing? Do you have a quick MCP-versus-everything-else take? Yeah.

Dharmesh [00:30:49]: So I like OpenAPI, right? It's a descriptive thing, basically a self-documenting standard. We can machine-generate lots of things from that output.
It's a structured definition of an API. I get that, love it. But MCPs are use-case specific. They're perfect for exactly what we're trying to use them for around LLMs, in terms of discovery. It's like, okay, I don't necessarily need to know all this detail. And right now we have... we'll talk more about MCP server implementations. We will? I think... I don't know, maybe we won't. At least it's in my head, like a background process. But I do think MCP adds value above OpenAPI, just because it solves this particular thing. And we did come to the world saying, hey, we already have OpenAPI. If that were good enough for the universe, the universe would have adopted it already. There's a reason why MCP is taking off: it marginally adds something that was missing before and doesn't go too far. And that's why the rate of adoption has been what it is; you folks have written about this and talked about it. Yeah, why MCP won. And it won because the universe decided that this was useful. Maybe it gets supplanted by something else. And maybe we discover, oh, maybe OpenAPI was good enough the whole time. I doubt that.

swyx [00:32:09]: The meta lesson here: he's an investor in DevTools companies, and I work in developer experience and DevRel at DevTools companies. Yep. Everyone wants to own the standard. Yeah. I'm sure you guys have tried to launch your own standards. Actually, is HubSpot known for a standard? You know, obviously inbound marketing. But is there a standard or protocol that you ever tried to push? No.

Dharmesh [00:32:30]: And there's a reason for this. I don't mean to speak for the people of HubSpot, but I personally... You kind of do. I'm not smart enough. That's not... You're smart. Not enough for that. I'm much better off understanding the standards that are out there.
And I'm more on the composability side. Let's take the pieces of technology that exist out there and combine them in creative, unique ways. I like to consume standards. It's not that I don't like to create them; I just don't think I have both the raw wattage and the credibility. It's like, okay, well, who the heck is Dharmesh, and why should we adopt a standard he created?

swyx [00:33:07]: Yeah, I mean, there are people who don't monetize standards. Like, OpenTelemetry is a big standard, and LightStep never capitalized on that.

Dharmesh [00:33:15]: So, okay, if I were to do a standard, there are two things that have been in my head in the past. One was a very, very basic one around open marketing; I don't even have the domain, and I have a domain for everything. Because HubSpot grew up in the marketing space, and there was no standard around data formats and things like that. It doesn't go anywhere. But the other one, and I did not mean to go here, but I'm going to go here, is called OpenGraph. I know the term was already taken, but it hasn't been used for like 15 years now for its original purpose. What I think should exist in the world is this: right now, our information, all of us as nodes, is in the social graph at Meta or the professional graph at LinkedIn. Both of which are actually relatively closed, in very annoying ways. Very, very closed, right? Especially LinkedIn. I personally believe that if it's my data, and if I would get utility out of it being open, I should be able to make my data open, or publish it in whatever forms I choose, as long as I have control over it, as opt-in. So the idea around OpenGraph is: here's a standard, here's a way to publish it. I should be able to go to opengraph.org/dharmesh.json and get it back. And it's like, here's your stuff, right?
And I can choose along the way: people can write to it, and I can approve. There can be an entire system. And if I were to do that, I would do it as a public-benefit, non-profit-y kind of thing, as a contribution to society. I wouldn't try to commercialize it.

swyx [00:34:43]: Have you looked at atproto? What's that? AT Protocol. It's the protocol behind Bluesky. Okay. My good friend Dan Abramov, who was the face of React for many, many years, now works there. And he did a talk that I can send you, which basically tries to articulate what you just said. He loves doing these really great analogies, which I think you'll like. You know, a lot of our data is behind a handle, behind a domain. Yep. So he's like, all right, what if we flip that? What if it was our handle and then the domain? Yep. So, really, your data should belong to you. Yep. And I should not have to wait 30 days for my Twitter data to export. Yep.

Dharmesh [00:35:19]: You should at least be able to automate it, or, yes, I should be able to plug it into an agentic thing. Yeah. Because so much of our data is... Locked up. I think the trick here isn't the standard. It's getting the normies to care.

swyx [00:35:37]: Yeah. Because normies don't care.

Dharmesh [00:35:38]: That's true. But building on that: normies don't care. So, you know, privacy is a really hot topic and an easy word to use, but it's not a binary thing. There are use cases, and we make these choices all the time, where I will trade not all privacy, but some privacy, for some productivity gain or some benefit to me. Oh, I don't care about that particular data being online if it gives me this in return, or I don't mind sharing this information with this company.

Alessio [00:36:02]: If I'm getting, you know, this in return. But that should be my option.
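The opt-in publishing idea behind OpenGraph can be made concrete with a minimal sketch. Everything here is hypothetical (the field names, the profile, and the idea that a `dharmesh.json` endpoint would serve this output); the point is only that the owner, not the network, decides which fields are serialized.

```python
import json

def publish_profile(profile: dict, opted_in: set) -> str:
    """Serialize only the fields the owner has explicitly opted in to share."""
    public = {k: v for k, v in profile.items() if k in opted_in}
    return json.dumps(public, sort_keys=True)

# Hypothetical profile; the owner shares name and skills but not email.
profile = {"name": "Dharmesh", "email": "private@example.com",
           "skills": ["marketing", "agents"]}
doc = publish_profile(profile, {"name", "skills"})
```

In a real system the opt-in set would live with the user and the JSON would be served from a URL they control, which is exactly the inversion atproto-style identities argue for.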
I think now with computer use, you can actually automate some of the exports. Yes. Like something we've been doing internally is like everybody exports their LinkedIn connections. Yep. And then internally, we kind of merge them together to see how we can connect our companies to customers or things like that.Dharmesh [00:36:21]: And not to pick on LinkedIn, but since we're talking about it, but they feel strongly enough on the, you know, do not take LinkedIn data that they will block even browser use kind of things or whatever. They go to great, great lengths, even to see patterns of usage. And it says, oh, there's no way you could have, you know, gotten that particular thing or whatever without, and it's, so it's, there's...swyx [00:36:42]: Wasn't there a Supreme Court case that they lost? Yeah.Dharmesh [00:36:45]: So the one they lost was around someone that was scraping public data that was on the public internet. And that particular company had not signed any terms of service or whatever. It's like, oh, I'm just taking data that's on, there was no, and so that's why they won. But now, you know, the question is around, can LinkedIn... I think they can. Like, when you use, as a user, you use LinkedIn, you are signing up for their terms of service. And if they say, well, this kind of use of your LinkedIn account that violates our terms of service, they can shut your account down, right? They can. And they, yeah, so, you know, we don't need to make this a discussion. By the way, I love the company, don't get me wrong. I'm an avid user of the product. You know, I've got... Yeah, I mean, you've got over a million followers on LinkedIn, I think. Yeah, I do. And I've known people there for a long, long time, right? And I have lots of respect. And I understand even where the mindset originally came from of this kind of members-first approach to, you know, a privacy-first. I sort of get that. 
But sometimes you have to wonder: okay, well, that was 15, 20 years ago. There are likely some controlled ways to expose some data on a member's behalf, rather than it being completely binary: no, thou shalt not have the data.

swyx [00:37:54]: Well, just pay for Sales Navigator.

Alessio [00:37:57]: Before we move to the next layer of abstraction, anything else on MCP? Let's move on, and then I'll tie it back to MCPs.

Dharmesh [00:38:05]: So I'll open with agents. Here's my running thesis: as AI and agents evolve, which they're doing very, very quickly, we're going to look at them more and more, and I don't like to anthropomorphize, we'll talk about why this is not that, less as raw tools and more like teammates. They'll still be software. They should self-disclose as being software; I'm totally cool with that. But I think what's going to happen is that in the same way you might collaborate with a team member on Slack or Teams or whatever you use, you can imagine a series of agents that do specific things, just like a team member might, that you can delegate things to. You can collaborate. You can say, hey, can you take a look at this? Can you proofread that? Can you try this? Whatever it happens to be. So I will go so far as to say it's inevitable that we're going to have hybrid teams someday. And what I mean by hybrid teams: back in the day, hybrid teams were, oh, you have some full-time employees and some contractors. Then hybrid teams were some people in the office and some remote. The next form of hybrid is carbon-based life forms plus agents and AI, some form of software. So let's temporarily stipulate that I'm right about that over some time horizon, that eventually we're going to have these digitally hybrid teams.
So if that's true, then the question you sort of ask yourself is that then what needs to exist in order for us to get the full value of that new model? It's like, okay, well... You sort of need to... It's like, okay, well, how do I... If I'm building a digital team, like, how do I... Just in the same way, if I'm interviewing for an engineer or a designer or a PM, whatever, it's like, well, that's why we have professional networks, right? It's like, oh, they have a presence on likely LinkedIn. I can go through that semi-structured, structured form, and I can see the experience of whatever, you know, self-disclosed. But, okay, well, agents are going to need that someday. And so I'm like, okay, well, this seems like a thread that's worth pulling on. That says, okay. So I... So agent.ai is out there. And it's LinkedIn for agents. It's LinkedIn for agents. It's a professional network for agents. And the more I pull on that thread, it's like, okay, well, if that's true, like, what happens, right? It's like, oh, well, they have a profile just like anyone else, just like a human would. It's going to be a graph underneath, just like a professional network would be. It's just that... And you can have its, you know, connections and follows, and agents should be able to post. That's maybe how they do release notes. Like, oh, I have this new version. Whatever they decide to post, it should just be able to... Behave as a node on the network of a professional network. As it turns out, the more I think about that and pull on that thread, the more and more things, like, start to make sense to me. So it may be more than just a pure professional network. So my original thought was, okay, well, it's a professional network and agents as they exist out there, which I think there's going to be more and more of, will kind of exist on this network and have the profile. 
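The "LinkedIn for agents" structure described above, profiles, connections, follows, and posts for release notes, is at bottom a small directed graph. The sketch below is a toy model of that idea, not agent.ai's actual schema; every class and field name is invented.

```python
class AgentProfile:
    """A node in a hypothetical professional network for agents."""

    def __init__(self, handle, capabilities):
        self.handle = handle
        self.capabilities = capabilities   # self-disclosed, like a resume
        self.follows = set()               # edges to other agents
        self.posts = []                    # e.g. release notes

    def follow(self, other):
        self.follows.add(other.handle)

    def post(self, text):
        self.posts.append(text)

valuer = AgentProfile("domain-valuer", ["domain valuation"])
namer = AgentProfile("brand-namer", ["startup naming"])
namer.follow(valuer)          # discovery: the namer can later delegate to the valuer
valuer.post("v2: now cites comparable transactions")
```

The self-disclosed `capabilities` list is the piece that maps to MCP-style discovery: it is the machine-readable "resume" another agent would query before delegating.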
But then, and this is always dangerous, I'm like, okay, I want to see a world where thousands of agents are out there, because those digital employees, the digital workers, don't exist yet in any meaningful way. And so then I'm like, oh, can I make that easier? And so I have, as one does, built a low-code platform for building agents. How hard could that be, right? Very hard, as it turns out. But it's been fun. So now agent.ai has 1.3 million users. 3,000 people have actually built some variation of an agent, sometimes just for their own personal productivity, and about 1,000 of those have been published. And the reason this comes back to MCP for me: imagine that on agent.ai and other networks. So right now, we have an MCP server for agent.ai that exposes all the internally built agents we have that do super useful things. Like, I have access to a Twitter API whose cost I can subsidize. And I can say, if you're looking to build something for social media, these kinds of things are available with a single API key, and it's all completely free right now; I'm funding it. That's a useful way for it to work. And then a developer can say, oh, I have this idea. I don't have to worry about OpenAI, I don't have to worry about which particular model is better; it has access to all the models with one key, and we proxy it behind the scenes and expose it. So then we get this community effect, right? Someone else may have built an agent to do X. Like, I have an agent right now that I built for myself to do domain valuation for website domains, because I'm obsessed with domains, right? And there's no efficient market for domains. There's no Zillow for domains right now that tells you, oh, here's what houses in your neighborhood sold for. It's like, well, why doesn't that exist?
We should be able to solve that problem. And, yes, you're still guessing. Fine. There should be some simple heuristic. So I built that. It's like, okay, let me go look for past transactions. You type in agent.ai, agent.com, whatever domain: what's it actually worth? I'm looking at buying it. It goes and says, oh, which is what it does, I'm going to look at whether there are any published domain transactions recently that are similar, either using the same word, the same top-level domain, whatever it is. And it comes back with an approximate value, its rationale for why it picked that value, and comparable transactions: oh, by the way, this similar domain sold for a published price. Okay. So that agent now, let's say, exists on the web, on agent.ai. Then imagine someone else says, oh, I want to build a brand-building agent for startups and entrepreneurs to come up with names for their startup. A common problem; every startup is like, ah, I don't know what to call it. So they type in five random words that kind of define whatever their startup is. And you can do all manner of things, one of which is, oh, well, I need to find the domain for it. What are possible choices? Now it would be nice to know if there's an aftermarket price for it, if it's listed for sale. Awesome. Then imagine calling this valuation agent. It's like, okay, I want to find where the arbitrage is: the valuation agent says this thing is worth $25,000, and it's listed on GoDaddy for $5,000. Close enough. Let's go do that, right? And that's the kind of composition use case in my future state: thousands of agents on the network, all discoverable through something like MCP. And then you, as a developer of agents, have access to all these Lego building blocks based on what you're trying to solve.
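The comparable-transactions heuristic described above can be sketched roughly as follows. The transaction data and the matching rule (share a TLD or a keyword, then average the prices) are made up for illustration; the real agent's logic isn't public.

```python
def estimate_value(domain, sales):
    """Average the prices of published sales that share a keyword or TLD
    with the target domain: a crude comparable-transactions heuristic."""
    name, _, tld = domain.partition(".")
    comps = [(d, p) for d, p in sales
             if d.endswith("." + tld) or name in d]
    if not comps:
        return None, []
    avg = sum(p for _, p in comps) / len(comps)
    return avg, comps

# Hypothetical published transactions: (domain, sale price in USD)
sales = [("agenthub.ai", 20000), ("agentzone.com", 30000), ("flowers.net", 900)]
value, comps = estimate_value("agent.ai", sales)
```

Returning the comparables alongside the estimate mirrors the "rationale" part of the agent's answer: the number alone is a guess, but the supporting transactions make it auditable.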
Then you blend in orchestration, which is getting better and better with the reasoning models now: just describe the problem that you have. Now, the next layer that we're all contending with is: how many tools can you actually give an LLM before it breaks? That number used to be like 15 or 20 before results started to vary dramatically. And so that's the thing I'm thinking about now. If I want to expose 1,000 of these agents to a given LLM, obviously I can't give it all 1,000. Is there some intermediate layer that says, based on your prompt, I'm going to make a best guess at which agents might be helpful for this particular thing? Yeah.

Alessio [00:44:37]: Yeah, like RAG for tools. Yep. I did build the Latent Space Researcher on agent.ai. Okay. Nice. Yeah, and then there's going to be a Latent Space Scheduler, and once I schedule a research run, you know, you build all of these things. By the way, my apologies for the user experience. You realize I'm an engineer. It's pretty good.

swyx [00:44:56]: I think it's a normie-friendly thing. Yeah. That's your magic. HubSpot does the same thing.

Alessio [00:45:01]: Yeah, just to quickly run through it: you can basically create all these different steps, and these steps are, you know, static versus variable-driven things. How did you decide between this low-code-ish approach versus low-code with a code backend versus not exposing that at all? Any fun design decisions? Yeah.

Dharmesh [00:45:22]: I think lots of people are likely sitting in exactly my position right now, working through the choice between deterministic and non-deterministic. If you're in a business, or building some sort of agentic thing, do you do a deterministic thing, or do you go non-deterministic and just let the LLM handle it, right, with the reasoning models?
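The "RAG for tools" idea, pre-filtering a large catalog down to a handful of plausibly relevant tools before the model ever sees them, can be sketched with a simple word-overlap score standing in for embedding similarity. The catalog names and descriptions below are invented.

```python
def select_tools(prompt, catalog, k=2):
    """Rank tools by word overlap between the prompt and each description,
    returning the top k: a stand-in for embedding-based retrieval."""
    words = set(prompt.lower().split())

    def score(desc):
        return len(words & set(desc.lower().split()))

    ranked = sorted(catalog.items(), key=lambda kv: score(kv[1]), reverse=True)
    return [name for name, _ in ranked[:k]]

# Hypothetical catalog of agent descriptions.
catalog = {
    "domain_valuer": "estimate the value of a website domain",
    "name_brainstormer": "come up with startup names",
    "scheduler": "schedule meetings on a calendar",
}
picked = select_tools("what is this domain worth", catalog)
```

A production version would swap the overlap score for embedding similarity, but the shape is the same: only the `k` survivors are passed to the LLM as tools, keeping it under the 15-to-20 ceiling mentioned above.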
The original idea and the reason I took the low-code stepwise, a very deterministic approach. A, the reasoning models did not exist at that time. That's thing number one. Thing number two is if you can get... If you know in your head... If you know in your head what the actual steps are to accomplish whatever goal, why would you leave that to chance? There's no upside. There's literally no upside. Just tell me, like, what steps do you need executed? So right now what I'm playing with... So one thing we haven't talked about yet, and people don't talk about UI and agents. Right now, the primary interaction model... Or they don't talk enough about it. I know some people have. But it's like, okay, so we're used to the chatbot back and forth. Fine. I get that. But I think we're going to move to a blend of... Some of those things are going to be synchronous as they are now. But some are going to be... Some are going to be async. It's just going to put it in a queue, just like... And this goes back to my... Man, I talk fast. But I have this... I only have one other speed. It's even faster. So imagine it's like if you're working... So back to my, oh, we're going to have these hybrid digital teams. Like, you would not go to a co-worker and say, I'm going to ask you to do this thing, and then sit there and wait for them to go do it. Like, that's not how the world works. So it's nice to be able to just, like, hand something off to someone. It's like, okay, well, maybe I expect a response in an hour or a day or something like that.Dharmesh [00:46:52]: In terms of when things need to happen. So the UI around agents. So if you look at the output of agent.ai agents right now, they are the simplest possible manifestation of a UI, right? That says, oh, we have inputs of, like, four different types. Like, we've got a dropdown, we've got multi-select, all the things. It's like back in HTML, the original HTML 1.0 days, right? 
Like, it's the smallest possible set of primitives for a UI. And it just says, okay, we need to collect some information from the user, then we go do steps and things and generate some output, HTML or markdown being the two primary examples. So the thing I've been asking myself is what happens if I keep going down that path. People ask me, I get requests all the time: oh, can you make the UI do this? I need to be able to do that, right? And if I keep pulling on that, okay, well, now I've built an entire UI builder thing. Where does this end? And so I think the right answer, and this is what I'm going to be back coding once I get done here, is around injecting code generation, UI generation, into the agent.ai flow. As a builder, you describe the thing that you want, much like you would in a vibe coding world. But instead of generating the entire app, it generates the UI that exists at some point in that deterministic flow. It says, oh, here's the thing I'm trying to do; go generate the UI for me. And I can go through some iterations: generate the code, tweak it, go through this kind of prompt cycle, like we do with vibe coding now. At some point I'm going to be happy with it, and I'm going to hit save, and that becomes the action in that particular step. It's a caching of the generated code, so I don't incur inference-time costs again; it's just the actual code at that point.

Alessio [00:48:29]: Yeah, I invested in a company called E2B, which does code sandboxes. And they power the LM Arena web arena. So basically, just like you do LLM text-to-text comparisons, they do the same for UI generation, if you're asking a model how to do it. But yeah, I think that's kind of where...

Dharmesh [00:48:45]: That's the thing I'm really fascinated by.
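The "hit save and cache the generated code" pattern amounts to memoizing an expensive generation call behind a key such as a hash of the prompt. Everything below is a hypothetical stand-in (the fake `generate_ui` especially takes the place of a real code-generation model):

```python
import hashlib

_cache = {}
calls = 0

def generate_ui(prompt):
    """Stand-in for an expensive LLM code-generation call."""
    global calls
    calls += 1
    return f"<form><!-- UI for: {prompt} --></form>"

def ui_for(prompt):
    """Generate once at save time, then serve saved code with no inference cost."""
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key not in _cache:
        _cache[key] = generate_ui(prompt)   # inference paid exactly once
    return _cache[key]

first = ui_for("collect a domain name and a budget")
second = ui_for("collect a domain name and a budget")
```

The design choice being illustrated: after the builder hits save, every later run of the flow executes plain cached code, so runtime behavior is deterministic and inference cost is zero.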
So the early LLMs were understandably, but laughably, bad at simple arithmetic, right? That's the thing normies, like my wife, would ask us: you call this AI? My son would be like, it's just stupid, it can't even do simple arithmetic. And then we discovered over time, and there's a reason for this, right, the word "language" is in there for a reason in terms of what it's been trained on; it's not meant to do math. But now the fact that it has access to a Python interpreter it can call at runtime solves an entire body of problems it wasn't trained to do. It's basically a form of delegation. And so the thought rattling around in my head is: that's great. It took the arithmetic problem first, and now anything that's solvable through a relatively concrete Python program, it's able to do, a bunch of things it couldn't do before. Can we get to the same place with UI? I don't know what the future of UI looks like in an agentic AI world, but maybe let the LLM handle it, though not in the classic sense. Maybe it generates it on the fly, or maybe we go through some iterations and hit cache or something like that, so it's a little bit more predictable. I don't know, but yeah.

Alessio [00:49:48]: And especially, when is the human supposed to intervene? Especially if you're composing them, most of them should not have a UI, because then they're just webhooking to somewhere else. I just want to touch back; I don't know if you have more comments on this.

swyx [00:50:01]: I was just going to ask, when you said you're going to go back to code, what
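The delegation pattern, where the model emits an arithmetic expression and a real interpreter evaluates it rather than the model "doing math" itself, can be sketched with a small safe evaluator. This is an illustrative tool implementation, not any particular product's; the point is that the tool is deterministic and restricted, unlike a general `eval()`.

```python
import ast
import operator

# Only plain arithmetic is allowed: the value of delegation is a
# trustworthy, deterministic evaluator, not a general exec().
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv,
        ast.USub: operator.neg, ast.Pow: operator.pow}

def calc(expr: str) -> float:
    """Evaluate an arithmetic expression the model delegated to us."""
    def ev(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.operand))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

result = calc("12 * (3 + 4) - 5")
```

Anything outside the whitelisted node types raises, which is why handing the model this tool is safer than handing it an interpreter outright.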
Check out The Link.AI Consulting at https://agentic.construction
Connect with Hugh on Linkedin

Here's a shorter briefing based on the same information.

Executive Summary:
Anthropic's Model Context Protocol (MCP), announced in late November 2024, is an open protocol designed to standardize how AI systems interact with external data sources and tools. It aims to overcome the current fragmented landscape of AI integration, where bespoke solutions are often required for each new connection. MCP establishes a universal framework for communication, simplifying development, enhancing AI agent effectiveness through improved context and tool access, and fostering a vibrant ecosystem of AI capabilities. By utilizing a client-server architecture and defining key primitives for data and action exchange, MCP offers a more dynamic and context-aware approach than traditional REST APIs. The emergence of MCP registries and marketplaces like smithery.ai further signals its potential to transform the future of AI by enabling more interconnected, adaptable, and powerful AI systems.

Key Themes and Important Ideas/Facts:

1. Addressing the Challenges of AI Integration:
The current method of integrating AI models with external resources is often complex and requires custom solutions for each connection. "When building AI applications today, each project frequently requires unique, bespoke solutions for how AI processes are constructed and how they connect with necessary data resources." (Introduction)
This leads to significant development and maintenance burdens.
MCP aims to solve this by providing a universal, open standard for connecting AI systems with data sources and tools. "MCP offers a unified solution to this problem by providing a universal, open standard for connecting AI systems with data sources, effectively replacing these fragmented integrations with a single, consistent protocol."
(Introduction)
The motivation is to overcome the limitations of isolated AI models "trapped behind information silos and legacy systems." (Introduction, citing source 2)
MCP addresses the "MxN problem" by transforming it into an "N plus M" setup, where each model and tool only needs to conform to the standard once. "Without a standardized protocol, this results in a complex web of M multiplied by N individual integrations... MCP's approach transforms this into a much simpler N plus M setup, where each tool and each model only needs to conform to the MCP standard once..." (Introduction, citing source 3)
By open-sourcing MCP, Anthropic intends to foster collaboration and a shared ecosystem.

2. Core Concepts of MCP:
Client-Server Architecture: MCP is built on this established pattern. "At its core, the Model Context Protocol (MCP) is built upon a client-server architecture, a well-established design pattern in computing, to facilitate the connection between AI models and external resources." (Core Concepts)
Host: The AI-powered application or agent environment the user interacts with (e.g., the Claude desktop app or an IDE plugin). "The Host is the AI-powered application or agent environment that the end-user directly interacts with." (Core Concepts) It can connect to multiple MCP servers and manages client permissions.
Client: An intermediary within the Host that manages the connection to a single MCP server, maintaining a one-to-one link. "The Client acts as an intermediary within the Host, responsible for managing the connection to a single MCP server." (Core Concepts) It handles the communication lifecycle and maintains stateful sessions.
Server: An external program that implements MCP and provides capabilities (tools, data, prompts) for a specific domain (e.g., databases, cloud services). "The Server is a program, typically external to the AI model itself, that implements the MCP standard and provides a specific set of capabilities."
(Core Concepts) Anthropic and the community have released servers for Google Drive, Slack, GitHub, Postgres, SQLite, and web browsing.This architecture is likened to a "USB port" for AI. "This client-server architecture, often likened to a 'USB port' for AI applications, provides a standardized way for AI assistants to 'plug into' any data source or service without requiring custom code for each connection." (Core Concepts, citing source 3)3. MCP vs. REST APIs for AI Agents:Limitations of REST APIs: Require significant manual effort, lack standardized context management, often stateless, static API definitions. "Integrating AI agents with external services via REST APIs often requires significant manual effort and lacks a standardized way to manage the evolving context of agent interactions." (MCP vs. REST APIs for AI Agents)Advantages of MCP:Standardized Communication: Based on JSON-RPC, simplifying integration.Dynamic Tool Discovery: AI can query servers to understand available tools. "AI models equipped with an MCP client can query connected servers to understand the tools and resources they offer." (MCP vs. REST APIs for AI Agents)Two-Way Real-Time Interaction: Supports persistent connections for context updates.Superior Approach Scenarios: Complex workflows with multiple tools, real-time data integration, frequently changing toolsets, intelligent assistants, automated coding tools, dynamic data analytics.4. Enhancing AI Agent Effectiveness:Improved Contextual Awareness and Management: MCP allows agents to access and retain relevant context from multiple sources, overcoming context window limitations. "One of the most significant ways in which the Model Context Protocol enhances the effectiveness of AI agents is by enabling improved contextual awareness and management." 
(Enhancing AI Agent Effectiveness)The ability to connect to multiple servers simultaneously supports complex workflows.The "Resources" primitive provides just-in-time, modular context, leading to more efficient processing and accurate responses.Facilitating Seamless Integration: MCP eliminates the need for custom code for each new data source or tool. "By providing a standardized interface, MCP eliminates the need for developers to write custom code for each new data source or tool that an AI agent needs to interact with." (Enhancing AI Agent Effectiveness)Pre-built servers for popular systems (Google Drive, Slack, GitHub, databases) streamline integration.Supporting Advanced Reasoning and Decision-Making: The "Tools" primitive allows agents to invoke functions and access real-time data.The "Sampling" primitive enables complex, multi-step reasoning processes (with recommended human approval).Real-World Examples:Corporate chatbots querying multiple internal systems.AI-powered coding assistants (Sourcegraph Cody, Zed Editor) accessing codebases.Anthropic's Claude Desktop accessing local files. "By integrating MCP, Claude can securely access local files, applications, and services on the user's computer." (Enhancing AI Agent Effectiveness)AI2SQL generating SQL from natural language.Apify allowing AI agents to access Apify Actors for automation.5. Driving Adoption for AI Tool Providers:Standardized Integration: Reduces the complexity and costs of developing and maintaining multiple custom integrations. "By providing a single, open standard for connecting AI models with tools, MCP reduces the need for tool providers to develop and maintain multiple custom integrations tailored to different AI platforms." (Driving Adoption for AI Tool Providers)Increased Interoperability: Tools can work with any MCP-compatible AI model, broadening the potential user base and reducing vendor lock-in. 
"Tools built using the MCP standard can seamlessly work with any AI model that has implemented an MCP client, regardless of the AI provider (e.g., Anthropic, OpenAI) or whether it's an open-source model." (Driving Adoption for AI Tool Providers)Opportunities for Innovation and Specialization: Enables developers to create specialized servers that can be accessed by any MCP client, fostering a division of labor.Benefits for Scalability and Future-Proofing: Ensures integrations remain compatible with future AI models adhering to the standard.6. Real-World Use Cases and Examples of MCP Implementation (Detailed):Coding Assistants: Sourcegraph Cody and Zed Editor.Enterprise Integrations: Block and Apollo. "Companies like Block and Apollo have adopted MCP to securely connect their AI systems with internal data repositories and customer relationship management (CRM) systems." (Real-World Use Cases and Examples of MCP Implementation)Desktop AI Applications: Anthropic's Claude Desktop.Data Querying Tools: AI2SQL.Automation Platforms: Apify.Community-Built Servers: Numerous servers on platforms like Smithery.ai and mcp-get.com for databases, cloud services, etc.7. Future Implications and the Evolving AI Ecosystem:Fostering Interoperability and Standardization: MCP has the potential to become a universal standard for AI integration. "By establishing a universal standard for AI integration, MCP could become the equivalent of HTTP for the web or USB-C for device connectivity in the AI world." (Future Implications and the Evolving AI Ecosystem)Could decouple AI model choice from underlying integrations.Potential Impact on AI R&D and Deployment: May shift focus towards effective utilization of external information over solely increasing model size. Could lead to more modular AI system designs.Addressing Potential Challenges: Requires buy-in from AI providers and tool developers. Security is paramount. Ensuring user trust and human oversight are crucial. 
"Security is another paramount concern. Allowing AI agents to access and interact with external systems, especially sensitive enterprise data, necessitates robust security measures to prevent unauthorized access or data leaks." (Future Implications and the Evolving AI Ecosystem)Conclusion:MCP offers a promising path towards a more interconnected, context-aware, and effective AI ecosystem. Its standardized framework addresses critical integration challenges, enhances AI agent capabilities, and provides new opportunities for tool providers and the broader AI community. While adoption challenges exist, the potential transformative impact of MCP on the future of AI is significant.
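The client-server exchange the briefing describes is plain JSON-RPC 2.0. A minimal sketch of the message shapes, assuming the `tools/list` and `tools/call` method names from the MCP spec; the `query_database` tool, its schema, and the server response are invented for illustration:

```python
import json

# A client asks an MCP server which tools it offers (dynamic tool discovery).
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# A hypothetical server response advertising a single tool.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "query_database",  # illustrative tool name
                "description": "Run a read-only SQL query",
                "inputSchema": {
                    "type": "object",
                    "properties": {"sql": {"type": "string"}},
                    "required": ["sql"],
                },
            }
        ]
    },
}

# The client then invokes the discovered tool by name.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "query_database", "arguments": {"sql": "SELECT 1"}},
}

print(json.dumps(call_request))
```

Because the framing is the same for every server, one MCP client can drive a Postgres server, a Slack server, or a filesystem server without custom glue per integration.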
An airhacks.fm conversation with Alvaro Hernandez (@ahachete) about: discussion about stackgres as a complete database cloud solution for PostgreSQL, kubernetes as an abstraction layer over infrastructure providing a programmable API, Stackgres offering high availability with primary and replica nodes using patroni, integrated connection pooling with PgBouncer, kubernetes operators and Custom Resource Definitions (CRDs) as a powerful way to extend Kubernetes, day two operations automated through CRDs including benchmarks and version upgrades, Stackgres supporting sharding with Citus for horizontal scaling similar to DynamoDB, Change Data Capture capabilities using embedded debezium, failover mechanisms taking typically 30 seconds with DNS updates, synchronous vs asynchronous replication options affecting data loss during failover, Stackgres being implemented in Java using quarkus, ContainerD as a programmable container runtime that can be used without Kubernetes, Stackgres offering multiple interfaces including CRDs, REST API, and a web console, considerations for running databases on Kubernetes vs cloud-managed services, the advantages of containerization for infrastructure, the challenges of multi-leader setups in PostgreSQL requiring conflict resolution, the value of Kubernetes for on-premises deployments vs cloud environments Alvaro Hernandez on twitter: @ahachete
In this episode of Quality Matters, host Andy Reynolds is joined by Ed Yurcisin, Chief Technology Officer at NCQA, to break down the complexities of digital transformation in health care quality. Ed explains how NCQA's push for digital measurement cuts through inefficiencies and inconsistencies in assessing quality. Traditionally, HEDIS® quality measures have existed as large, text-heavy PDFs, leaving room for misinterpretation. By digitalizing these measures into computer code—Clinical Quality Language (CQL)—NCQA removes ambiguity and standardizes interpretation. That makes it easier for health care organizations to implement and use quality measures. This shift reduces administrative burden and helps ensure that quality assessments are more accurate and actionable.The conversation then shifts to FHIR® (Fast Healthcare Interoperability Resources), a standard designed to streamline health care data exchange. Ed explains that while FHIR might sound intimidating, it's built on the basic web technologies that power everyday internet browsing. FHIR brings five essential components to the table—JSON files, Rest APIs, standardized value sets, a common data model and government-mandated data exchange. While the government requires organizations to “pitch” data (make data available), there's no mandate to “catch” data (actually use the data). That means organizations that choose to use the data gain a competitive advantage.The discussion ends by focusing on data quality, an issue that looms large over digital transformation efforts. Ed introduces the Bulk FHIR Quality Coalition, a collaborative initiative aimed at improving the reliability of data exchanged between health care providers and insurers. Using the analogy of water through pipes, Ed explains that current data-sharing efforts help ensure flow, but don't always guarantee that data are “clean” enough to be useful. 
The coalition enhances existing provider–insurer relationships to test and improve large-scale data exchange methods. Ultimately, Ed underscores that digital transformation in health care is only as strong as the quality of the data being exchanged. Standardization, accessibility and interoperability are the foundations of progress, ensuring that technology-driven solutions improve outcomes. Digital HEDIS, FHIR and the Bulk FHIR Quality Coalition are examples of how NCQA is reducing measurement burden to streamline measurement and improve quality. Key Quote: "The digital transformation of health care is necessary to deliver higher quality care. But that is dependent on high-quality data and the ability to exchange this data. It starts with high-quality data–making it accessible, interoperable, exchangeable. That is the foundation for being able to deliver digital health care transformation. Nothing in digital transformation in health care makes sense without high-quality data exchange." -Ed Yurcisin Time Stamps: (1:03) The How and Why of Digital Measurement (04:14) Understanding FHIR (08:32) From Data Exchange to Competitive Advantage (10:42) The Bulk FHIR Quality Coalition Links: Connect with Edward Yurcisin NCQA Digital Hub Bulk FHIR Quality Coalition
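Ed's point that FHIR is just everyday web technology is easy to demonstrate: a FHIR resource is a JSON document with a standardized data model. A minimal sketch of an R4 Patient resource, with invented example values, read the way any FHIR-aware system would read it:

```python
import json

# A minimal FHIR R4 Patient resource, as it might arrive from a FHIR REST
# endpoint such as GET https://example.org/fhir/Patient/123 (URL illustrative).
patient_json = """
{
  "resourceType": "Patient",
  "id": "123",
  "name": [{"family": "Smith", "given": ["Jan"]}],
  "birthDate": "1980-04-02"
}
"""

patient = json.loads(patient_json)

# Because the data model is standardized, every consumer pulls the same
# fields the same way, with no per-sender interpretation.
display_name = f'{patient["name"][0]["given"][0]} {patient["name"][0]["family"]}'
print(display_name)  # Jan Smith
```

That uniformity is what makes "catching" the data, in Ed's phrase, a competitive advantage rather than a parsing project.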
In this episode, we discuss using AWS Lambda Powertools for Python to build serverless REST APIs with AWS Lambda. We cover the benefits of using Powertools for routing, validation, OpenAPI support, and more. Powertools provides an excellent framework for building APIs while maintaining Lambda best practices.In this episode, we mentioned the following resources: AWS Bites 41. How can Middy make writing Lambda functions easier? - https://awsbites.com/41-how-can-middy-make-writing-lambda-functions-easier AWS Bites 120. Lambda Best Practices - https://awsbites.com/120-lambda-best-practices/ REST API - Powertools for AWS Lambda (Python) - https://docs.powertools.aws.dev/lambda/python/latest/core/event_handler/api_gateway/ Hono - https://hono.dev/ Fastify - https://fastify.dev/ Axum - https://github.com/tokio-rs/axum FastAPI - https://fastapi.tiangolo.com/Do you have any AWS questions you would like us to address?Leave a comment here or connect with us on BlueSky or LinkedIn: https://bsky.app/profile/eoin.sh | https://www.linkedin.com/in/eoins/ https://bsky.app/profile/loige.co | https://www.linkedin.com/in/lucianomammino/
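Powertools' event handler (`APIGatewayRestResolver` in the docs linked above) gives a Lambda the route-decorator ergonomics of frameworks like FastAPI or Fastify. As a framework-free illustration of the pattern, not the Powertools API itself, here is a minimal resolver that dispatches an API Gateway-style proxy event to a registered route:

```python
import json

# Simplified stand-in for a route resolver: (method, path) -> handler.
routes = {}

def get(path):
    """Register a handler for GET requests on the given path."""
    def register(fn):
        routes[("GET", path)] = fn
        return fn
    return register

@get("/todos")
def list_todos():
    return {"todos": ["write show notes"]}

def lambda_handler(event, context):
    # API Gateway proxy events carry httpMethod and path fields.
    fn = routes.get((event["httpMethod"], event["path"]))
    if fn is None:
        return {"statusCode": 404, "body": ""}
    return {"statusCode": 200, "body": json.dumps(fn())}

result = lambda_handler({"httpMethod": "GET", "path": "/todos"}, None)
print(result["statusCode"])  # 200
```

Powertools layers validation, OpenAPI generation, and error handling on top of this same dispatch idea while keeping the single-handler Lambda deployment model.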
Trustfall is a library based on a simple question - what happens if we can query absolutely anything? If you could join REST APIs and databases with filesystems and dockerfiles? It's possible in theory because those are all just datasources. Predrag Gruevski is trying to make it easy by building a universal query engine, with pluggable datasources, all in Rust.This week we dive into Trustfall to figure out how it works. How do you model nearly anything as a datasource? How do you make it easy to extend? And what does it take to optimize a query that's going to be spread out over multiple systems and potentially multiple servers? Questions, questions, questions - all about the act of asking our systems questions.
Send us a text

Web Crawler Designs

Can a simple idea like building a web crawler teach you the intricacies of system design? Join me, Ben Kitchell, as we uncover this fascinating intersection. Returning from a brief pause, I'm eager to guide you through the essential building blocks of a web crawler, from queuing seed URLs to parsing new links autonomously. These basic functionalities are your gateway to creating a minimum viable product or acing that system design interview. You'll gain insights into potential extensions like scheduled crawling and page prioritization, ensuring a strong foundation for tackling real-world challenges.

Managing a billion URLs a month is no small feat, and scaling such a system requires meticulous planning. We'll break down the daunting numbers into digestible pieces, exploring how to efficiently store six petabytes of data annually. By examining different database models, you'll learn how to handle URLs, track visit timestamps, and keep data searchable. The focus is on creating a robust system that not only scales but does so in a way that meets evolving demands without compromising on performance.

Navigating the complexities of designing a web crawler means making critical decisions about data storage and system architecture. We'll weigh the benefits of using cloud storage solutions like AWS S3 and Azure Blob Storage against maintaining dedicated servers. Discover the role of REST APIs in seamless user and service interactions, and explore search functionalities using Cassandra, Amazon Athena, or Google's BigQuery. Flexibility and foresight are key as we build systems that adapt to future needs.
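The core loop described here, seed URLs in a queue, fetch each page, parse out links, enqueue the unseen ones, fits in a few lines. A network-free sketch over an invented in-memory "web"; a real crawler would replace the dict lookup with an HTTP fetch plus HTML parsing. (As a sanity check on the episode's scale figures: 6 PB/year over 12 months and a billion URLs per month works out to roughly 500 KB stored per page.)

```python
from collections import deque

# Invented in-memory "web": url -> links found on that page.
WEB = {
    "https://a.example": ["https://b.example", "https://c.example"],
    "https://b.example": ["https://a.example"],
    "https://c.example": ["https://d.example"],
    "https://d.example": [],
}

def crawl(seeds):
    """BFS crawl: dequeue a URL, 'fetch' it, enqueue links not yet seen."""
    queue = deque(seeds)
    visited = set(seeds)
    order = []
    while queue:
        url = queue.popleft()
        order.append(url)
        for link in WEB.get(url, []):  # real crawler: HTTP fetch + parse here
            if link not in visited:
                visited.add(link)
                queue.append(link)
    return order

print(crawl(["https://a.example"]))
```

The `visited` set is exactly the piece that stops working at a billion URLs a month and motivates the episode's discussion of external stores and searchable visit timestamps.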
Thank you for your continued support—let's keep learning and growing on this exciting system design journey together.Support the showDedicated to the memory of Crystal Rose.Email me at LearnSystemDesignPod@gmail.comJoin the free Discord Consider supporting us on PatreonSpecial thanks to Aimless Orbiter for the wonderful music.Please consider giving us a rating on ITunes or wherever you listen to new episodes.
Joël and Stephanie go back to fundamentals as they pick apart some recent conversations they've been having around the office. Together they discuss the advantages of GraphQL over a REST API, how they utilise JSONB over a regular column or table, and the use-cases for and against a frontend framework like React. But what's the theme that ties all these conversations together? — The article mentioned in this episode was Why I'm over GraphQL (https://bessey.dev/blog/2024/05/24/why-im-over-graphql/) Your hosts for this episode have been thoughtbot's own Stephanie Minn and Joël Quenneville (https://www.linkedin.com/in/joel-quenneville-96b18b58/). If you would like to support the show, head over to our GitHub page (https://github.com/sponsors/thoughtbot), or check out our website (https://bikeshed.thoughtbot.com). Got a question or comment about the show? Why not write to our hosts: hosts@bikeshed.fm This has been a thoughtbot (https://thoughtbot.com/) podcast. Stay up to date by following us on social media - LinkedIn (https://www.linkedin.com/company/150727/) - Mastodon (https://thoughtbot.social/@thoughtbot) - Instagram (https://www.instagram.com/thoughtbot/) © 2024 thoughtbot, inc.
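The JSONB question Joël and Stephanie debate comes down to querying fields inside a JSON column instead of promoting them to regular columns. A sketch of that trade-off using SQLite's JSON functions as a stand-in for Postgres JSONB (`json_extract` here plays the role of Postgres's `->>` operator; table and field names are invented, and this assumes a Python build whose bundled SQLite includes the JSON functions):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.execute(
    "INSERT INTO events (payload) VALUES (?)",
    (json.dumps({"kind": "signup", "plan": "pro"}),),
)

# Query a field inside the JSON column, the way ->> would against JSONB.
row = conn.execute(
    "SELECT json_extract(payload, '$.plan') FROM events "
    "WHERE json_extract(payload, '$.kind') = 'signup'"
).fetchone()
print(row[0])  # pro
```

The flexibility is real, but so is the cost: fields buried in the document get no type checking or constraints, which is the usual argument for promoting frequently queried fields to real columns.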
Fredrik talks Postgresql with Svante Richter. Is Postgresql seriously underused, considering everything the database can do? Are we needlessly afraid of letting the database take care of things, and if so, why? What happens when you try to replace your entire backend with Postgresql? And what might you not want to use Postgresql for? Recorded at Øredev 2024, where Svante gave a presentation on the same topic. The episode is sponsored by 46elks, who build easy-to-use APIs for telephony and SMS. How easy? Here's a curl example: curl https://api.46elks.com/a1/sms -u API_USERNAME:API_PASSWORD -d to=+46766861004 -d message="Hej kodsnacklyssnare! Testa att skicka ditt första SMS med Curl." -d from=Kodsnack Send notifications by SMS, call people, set up phone exchanges, and much more. You'll find the API documentation at 46elks.se/docs. Sign up via 46elks.se/kodsnack and get 200 credits to experiment with! A big thank you to Cloudnet, who sponsor our VPS! Do you have comments, questions, or tips? We are @kodsnack, @thieta, @krig, and @bjoreman on Mastodon, have a page on Facebook, and can be emailed at info@kodsnack.se if you want to write at greater length. We read everything that is sent. If you like Kodsnack, we would love for you to review us on iTunes! You can also support the podcast by buying us a coffee (or two!) on Ko-fi, or buying something in our shop. Links Øredev Advent of code Kåre Kåre built his own language for Advent of code Kodsnack's Advent of code repo Svante is best reached in our Slack channel Other episodes with Svante Postgresql Axis Svante's presentation isn't out yet - link to come when it's published Svante's presentation slides Postgrest - automatically creates a REST API CRUD - Create, read, update and delete ORM Mongodb Nosql Timescale Postgis Extensions for Postgresql 46elks - this week's sponsor 46elks.se/kodsnack - sign up and get 200 credits to experiment with!
Webhooks Lua JSON-schema Rabbitmq Dead letter queue Foreign keys Next.js HTMX Support us on Ko-fi Nginx Webdav Haskell Lars Wikman Unison - two-way version of Rsync Ocaml Elm Noredink Neon Supabase Hetzner Elasticsearch Openstreetmap Dynamodb Neo4j Nearest neighbor Storage engines for Postgresql Devoteam Skruv - Svante's micro framework The Krausest performance benchmarks Svante compares Skruv against Titles Push the anxiety aside Used but underused Solving problems you created yourself Which query does it actually become? Database functions in Lua Is this smart? Going way too far Which parts chafed? A whole class of problems The database is someone else's domain Who do you let intrude the most? Avoiding the need for a backend The shortest path between two nodes I'm a bit radical
2024-10-22 Weekly News — Episode 221
Watch the video version on YouTube at https://youtube.com/live/j-e_y4OwuCw?feature=share
Hosts: Gavin Pickin - Senior Developer at Ortus Solutions

Thanks to our Sponsor - Ortus Solutions
The makers of ColdBox, CommandBox, ForgeBox, TestBox and all your favorite box-es out there including BoxLang.
A few ways to say thanks back to Ortus Solutions:
- Buy Tickets to Into the Box 2025 in Washington DC https://t.co/cFLDUJZEyM (April 30, 2025 - May 2, 2025 - Washington, DC)
- Like and subscribe to our videos on YouTube.
- Help ORTUS reach for the Stars - Star and Fork our Repos. Star all of your Github Box Dependencies from CommandBox with https://www.forgebox.io/view/commandbox-github
- Subscribe to our Podcast on your Podcast Apps and leave us a review
- Sign up for a free or paid account on CFCasts, which is releasing new content regularly
- BOXLife store: https://www.ortussolutions.com/about-us/shop
- Buy Ortus's Books: 102 ColdBox HMVC Quick Tips and Tricks on GumRoad (http://gum.co/coldbox-tips). Now on Amazon! In hardcover too!!! https://www.amazon.com/dp/B0CJHB712M. Learn Modern ColdFusion (CFML) in 100+ Minutes - Free online https://modern-cfml.ortusbooks.com/ or buy an EBook or Paper copy https://www.ortussolutions.com/learn/books/coldfusion-in-100-minutes
- Patreon Support: We have 59 patreons: https://www.patreon.com/ortussolutions

News and Announcements

Lucee 6.1.1 (6.1.1.100-RC) Release Candidate
There is a new Lucee 6.1.1.100-RC release candidate available for testing. Give it a try and share your feedback with us.
What's New? This release focuses mainly on bug fixes, along with a few useful enhancements.
https://dev.lucee.org/t/lucee-6-1-1-6-1-1-100-rc-release-candidate/14353

ColdFusion 2023 and 2021 October 15th, 2024 updates
We are pleased to announce that we have released general updates to ColdFusion (2023 release) Update 11 and ColdFusion (2021 release) Update 17.
The updates include bug fixes and enhancements in Administrator, Language, CFSetup, Database, and other areas. They also contain library upgrades, such as netty, ehcache, etc. The updates also include enhancements to whitespace management and client variable support in CFPM.Known issues in the updateThe PDF Services page in ColdFusion Administrator does not load even with the HTMLToPDF package installedhttps://coldfusion.adobe.com/2024/10/released-coldfusion-2023-and-2021-october-15th-2024-updates/CF Summit India AnnouncedWe are excited to announce that the Adobe ColdFusion India Summit 2024 is happening on December 7, 2024, and this year, we're bringing the event to two vibrant cities: Bengaluru and Noida. Whether you're a seasoned developer or just beginning your journey in web development, this free summit offers a unique opportunity to learn, connect, and grow with the best minds in the industry.https://coldfusion.adobe.com/2024/10/get-ready-for-adobe-coldfusion-india-summit-2024/ Announcing Java updates of Oct 2024 for 8, 11, 17, 21, and 23: thoughts and resourcesIt's that time again: there are new JVM updates released today (Oct 15, 2024) for the current long-term support (LTS) releases of Oracle Java, 8, 11, 17, and 21, as well as the new short-term release 23. (The previous short-term release, Java 22, is no longer updated.)TLDR: The new updates are 1.8.0_431 (aka 8u431), 11.0.25, 17.0.13, 21.0.5, and 23.0.1 respectively. Crazy that there are now 5 current Java releases, I realize. 
More below, including more on each of them including what changed as well as bug fixes and the security fixes each version contains (including their CVE scores regarding urgency of concerns), which are offered in Oracle resources I list below.https://www.carehart.org/blog/2024/10/15/java_updates_oct_2024 PayPal's NVP/SOAP API for Website Payments Pro accounts suddenly stopped working sometime early OctoberPayPal's NVP/SOAP API for Website Payments Pro accounts suddenly stopped working sometime around October 4th (possibly Sep 30). Some developers that reported having the issue were using legacy classic ASP and others were using ColdFusion. I believe we've been using the PayPal DoDirectPayment API since it was introduced back in 2002.At some point, PayPal added the following undated disclaimer to their documentation. (According to Microsoft Copilot, "PayPal's NVP (Name-Value Pair) API was marked as "legacy" around October 12th, 2021".)CFPayment (retired) supports WPP & Payflow, but not the new REST API method. Searching online for "ColdFusion (or cfml) paypal rest api" didn't return anything beneficial, so it became apparent that there was a need for a solution... any solution.James Moberg has an updated Paypal Rest API Cfc available here: https://dev.to/gamesover/coldfusion-paypal-rest-api-cfc-339p Secure Your ColdFusion Perpetual License Before Adobe's Subscription-Only SwitchFollowing Adobe's announcement at the Adobe ColdFusion Summit in Las Vegas, ColdFusion will transition to a subscription-only licensing model. This major shift in licensing strategy means developers and organizations have a limited window to secure their final perpetual ColdFusion license.While we don't know the date for the Adobe switch, FusionReactor customers have an exclusive opportunity to secure their last perpetual license and save significantly in the process. 
This final offer has been extended to December 31, 2024, giving organizations more months to make this crucial decision.https://fusion-reactor.com/blog/secure-your-coldfusion-perpetual-license-before-adobes-subscription-only-switch/ Microsoft Copilot is a little Snarky about ColdFusion
GraphQL is an open-source query language for APIs and a runtime for executing those queries. It was developed by Facebook to address the problem of over-fetching or under-fetching data, which is a common issue with traditional REST APIs. Matt Bessey is a Principal Engineer and Software Architect. Earlier this year Matt wrote a blog post The post The End of GraphQL with Matt Bessey appeared first on Software Engineering Daily.
Alba Rivas, a passionate Salesforce Developer Advocate, shares her insights on the power of Salesforce's Models API. Discover how this API enables developers to seamlessly integrate large language models via REST API or Apex, eliminating the need for custom code or prompt templates. Alba highlights the critical role of the Einstein Trust Layer in ensuring data security, compliance, and efficient integration with external models. Gain knowledge on AI implementation, including maintaining audit trails, monitoring data for toxicity, and using human feedback for refining AI systems. We also explore the flexibility and capabilities of the Models API, offering a centralized interface for developers, along with resources like Postman collections, codeLive sessions, and an upcoming Trailhead module to support your journey in mastering generative AI with Salesforce. Show Highlights: Discussion on the Einstein Trust Layer's role in ensuring security, data masking, and compliance for generative AI applications. Comparison between using prompt templates and the Models API. Importance of maintaining an audit trail. Focus on the value of human feedback in refining AI systems and the benefits of a centralized interface for developers. Insights into practical AI implementation and the flexibility offered by the Models API. Links: Postman Collection - https://www.postman.com/salesforce-developers/workspace~34382471-0c97-40e5-a206-f947271665c4/collection/onih7sc/models-apis-beta Einstein Models API Documentation - https://developer.salesforce.com/docs/einstein/genai/guide/models-api.html codeLive: Exploring the New Models API (Beta) - https://www.youtube.com/watch?v=hf95_ghGeNI
In this episode, we sit down with expert Akshata Sawant to discuss MuleSoft Intelligent Document Processing (IDP). Akshata delves into how IDP uses AI to extract and organize data from a variety of unstructured documents, such as purchase orders, invoices, and driving licenses, using AWS Textract for exact information extraction. The discussion highlights IDP's seamless integration with multiple systems via REST API endpoints and its application in automating the processing of daily invoices received via email. Learn about IDP's ability to handle different document structures without needing multiple templates and the crucial role of human oversight when the AI's confidence ratio falls below 80%, ensuring accuracy and reliability. This episode offers a comprehensive look at how IDP can revolutionize document workflows, balancing automation with human intervention, and provides valuable resources for further learning. Show Highlights: Using AI to extract and organize data from unstructured documents Integration of AWS Textract for exact information extraction and seamless integration via REST API endpoints Accessing and using IDP through the Anypoint Platform with ease Managing different document structures without multiple templates and the role of human oversight for accuracy Links: Getting Started with Intelligent Document Processing: Part 1 - https://www.youtube.com/watch?v=Yi160vy7gg4 Getting Started with Intelligent Document Processing: Part 2 - https://www.youtube.com/watch?v=iGvAypuzAvw How to Get Started With Intelligent Document Processing - https://blogs.mulesoft.com/dev-guides/how-to-get-started-with-intelligent-document-processing/
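The human-in-the-loop rule Akshata describes, when the AI's confidence ratio falls below 80% the document goes to manual review, is at heart a one-line routing policy. A sketch (the 80% threshold comes from the episode; the document records are invented):

```python
REVIEW_THRESHOLD = 0.80  # below this, the document is handed to a human

def route(doc):
    """Return 'auto' for confident extractions, 'human_review' otherwise."""
    return "auto" if doc["confidence"] >= REVIEW_THRESHOLD else "human_review"

docs = [
    {"name": "invoice-001.pdf", "confidence": 0.95},
    {"name": "po-scan.pdf", "confidence": 0.62},
]
decisions = [route(d) for d in docs]
print(decisions)  # ['auto', 'human_review']
```

Tuning that single threshold is the lever between throughput (more automation) and reliability (more human eyes), which is the balance the episode keeps returning to.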
Summary
In this episode of the Blue Security Podcast, Andy and Adam discuss Microsoft's Defender for Storage, a cloud-native security solution for Azure Blob Storage, Azure Files, and Azure Data Lake Storage. They highlight the three major impacts on data workloads: malicious file uploads, sensitive data exfiltration, and data corruption. The solution offers activity monitoring, malware scanning, and sensitive data threat detection. They also mention the pricing model, the integration with Microsoft Purview, and the ease of deployment using Azure Policy and Logic Apps.

Takeaways
- Defender for Storage is a cloud-native security solution for Azure Blob Storage, Azure Files, and Azure Data Lake Storage.
- The solution protects against malicious file uploads, sensitive data exfiltration, and data corruption.
- It offers activity monitoring, malware scanning, and sensitive data threat detection.
- Integration with Microsoft Purview allows for seamless inheritance of sensitivity settings.
- Deployment can be done through the Azure portal, Azure Policy, or infrastructure as code using the REST API.
- Logic Apps can be used to automate responses and streamline security operations.
- A pre-purchase plan is available for Defender for Cloud workloads, offering programmatic discounts and predictable billing.
---------------------------------------------------- YouTube Video Link: https://youtu.be/_DNCcy4V5Uo ---------------------------------------------------- Documentation: https://learn.microsoft.com/en-us/azure/defender-for-cloud/defender-for-storage-introduction ---------------------------------------------------- Contact Us: Website: https://bluesecuritypod.com Twitter: https://twitter.com/bluesecuritypod LinkedIn: https://www.linkedin.com/company/bluesecpod YouTube: https://www.youtube.com/c/BlueSecurityPodcast ----------------------------------------------------------- Andy Jaw Twitter: https://twitter.com/ajawzero LinkedIn: https://www.linkedin.com/in/andyjaw/ Email: andy@bluesecuritypod.com ---------------------------------------------------- Adam Brewer Twitter: https://twitter.com/ajbrewer LinkedIn: https://www.linkedin.com/in/adamjbrewer/ Email: adam@bluesecuritypod.com
Solfate Podcast - Interviews with blockchain founders/builders on Solana
Full notes for this episode at: https://solfate.com/podcast/61
Follow @SolfatePod on Twitter for updates: https://twitter.com/SolfatePod
PS: We are happy to announce the launch of our Solfate Drip channel where you can collect Solfate Podcast episodes as digital collectibles! Subscribe to our Drip channel today: drip.haus/solfate (we have some fun experiments planned)

Notes from the show

The conversation revolves around the introduction of Actions and Blinks, a technology platform and protocol developed by Dialect and Solana Foundation. Actions and Blinks allow users to share Solana anywhere on the internet by unfurling URLs and providing interactive experiences. The idea originated from the concept of smart messages and transaction requests in Solana Pay. The launch of Actions and Blinks was a collaborative effort between Dialect, Solana Foundation, and various wallets.

The discoverability and accessibility of Blinks are limited to specific browsers with wallet extensions installed. However, there are plans to make Blinks more widely available and user-friendly. The conversation explores the challenges and potential of Blinks and Solana Actions in revolutionizing the internet. The guests discuss the uphill battle of creating an open ecosystem in the current environment of mega companies and intentional blockades.

Chris and Nick highlight the power of Blinks in bringing value transfer and interaction to users without leaving the app. The technical nature of Blinks and Solana Actions is explained, emphasizing the backend REST API of Actions and the frontend UI of Blinks.
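The split between Actions (a backend REST API that constructs transactions) and Blinks (a frontend that renders them) shows up directly in the metadata an Action endpoint serves. A sketch of what a GET response might look like, with field names per the public Solana Actions spec as best understood here and every value invented:

```python
import json

# Metadata a hypothetical Actions GET endpoint returns; a Blink-aware
# client renders this into buttons inside the page where the URL unfurled.
action_metadata = {
    "icon": "https://example.com/donate.png",
    "title": "Donate SOL",
    "description": "Support the show with a one-click donation.",
    "label": "Donate",
    "links": {
        "actions": [
            {"label": "Donate 0.1 SOL", "href": "/api/donate?amount=0.1"},
            {"label": "Donate 1 SOL", "href": "/api/donate?amount=1"},
        ]
    },
}

# The follow-up POST to one of those hrefs would answer with a serialized
# transaction for the user's wallet to sign; here we only confirm the
# metadata round-trips cleanly as JSON, as any client would require.
print(json.loads(json.dumps(action_metadata))["label"])
```

The point Chris makes in the episode is that this keeps all transaction construction server-side, while the Blink layer is purely presentational.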
The guests also mention the future possibilities of mobile integration and the importance of developer feedback and collaboration.

Takeaways
- Actions and Blinks allow users to share Solana anywhere on the internet by unfurling URLs and providing interactive experiences.
- The idea of Actions and Blinks originated from the concept of smart messages and transaction requests in Solana Pay.
- The launch of Actions and Blinks was a collaborative effort between Dialect, the Solana Foundation, and various wallets.
- Currently, the discoverability and accessibility of Blinks are limited to specific browsers with wallet extensions installed, but there are plans to make Blinks more widely available and user-friendly.
- Creating an open ecosystem in the current internet environment is an uphill battle against mega companies and intentional blockades.
- Blinks bring value transfer and interaction to users without leaving the app, enhancing the user experience and stickiness.
- Blinks and Solana Actions have a backend REST API for transaction construction and delivery, and a frontend UI for rendering and user interaction.
- The technical nature of Blinks and Solana Actions allows for easy development and customization, making it accessible for developers.
- Mobile integration and native mobile implementations of Blinks are the next steps in the evolution of the technology.

Find Chris and Dialect online
- Follow Chris on twitter - @aliquotchris
- Follow Dialect on twitter - @SayDialect
- Dialect's website - dialect.to

Follow us around
Nick
- twitter: @nickfrosty
- github: github.com/nickfrosty
- website: https://nick.af
James
- twitter: @jamesrp13
- github: github.com/jamesrp13
Solfate Podcast
- twitter: @SolfatePod
- more podcast episodes: solfate.com/podcast
- collect episodes on Drip: drip.haus/solfate
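To make the Actions/Blinks split concrete: the backend of an Action is essentially a pair of REST handlers, a GET that returns display metadata for the Blink client to render, and a POST that returns a transaction for the user's wallet to sign. The sketch below approximates those response shapes in Python; the field names follow our reading of the public Solana Actions spec, so treat them as illustrative rather than authoritative.

```python
import base64
import json


def action_get_response(icon_url: str, title: str, description: str, label: str) -> dict:
    """Metadata returned for a GET request on an Action URL.

    A Blink-aware client unfurls the URL and renders this payload as an
    interactive card or button.
    """
    return {
        "icon": icon_url,
        "title": title,
        "description": description,
        "label": label,
    }


def action_post_response(serialized_tx: bytes, message: str) -> dict:
    """Response to the follow-up POST: a base64-encoded transaction for
    the user's wallet to sign, plus a human-readable status message."""
    return {
        "transaction": base64.b64encode(serialized_tx).decode("ascii"),
        "message": message,
    }


if __name__ == "__main__":
    card = action_get_response(
        "https://example.com/icon.png",
        "Tip the show",
        "Send a small tip to the podcast",
        "Tip 0.1 SOL",
    )
    print(json.dumps(card, indent=2))
```

The two-endpoint shape is what lets any client that understands the protocol turn a plain URL into an interactive experience.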
News includes Chris McCord's speedrun video on adding a self-hosted llama2-7b to an existing application, Tyler Young's release of parameterized_test v0.2.0, major updates in Oban Pro's new launch week, potential for CRDTs being added to Mnesia DB, Zach Daniel's blog post on Igniter for code generation, a preview of ElixirConf 2024 with exciting speakers and topics, and more! Show Notes online - http://podcast.thinkingelixir.com/213 (http://podcast.thinkingelixir.com/213) Elixir Community News - https://x.com/chris_mccord/status/1815409966611648705 (https://x.com/chris_mccord/status/1815409966611648705?utm_source=thinkingelixir&utm_medium=shownotes) – Chris McCord does a YouTube video speedrun of adding a self-hosted llama2-7b to an existing application. He's running it against Ollama and making REST API calls to it, showing how to run the Ollama server on a private Fly.io IPv6 network using auto-stop and auto-start features. - https://x.com/TylerAYoung/status/1815391743484870980 (https://x.com/TylerAYoung/status/1815391743484870980?utm_source=thinkingelixir&utm_medium=shownotes) – Tyler Young shared a new release of his library parameterized_test, version v0.2.0, which includes support for longer test names, comments in tables, and Obsidian markdown table format. - https://github.com/s3cur3/parameterized_test (https://github.com/s3cur3/parameterized_test?utm_source=thinkingelixir&utm_medium=shownotes) – GitHub repository for Tyler Young's parameterized_test library that makes it easier to create tests using multiple combinations in markdown tables. - https://x.com/Exadra37/status/1815694986345611683 (https://x.com/Exadra37/status/1815694986345611683?utm_source=thinkingelixir&utm_medium=shownotes) – The Mnesia database may receive native support for automated conflict resolution via CRDTs, sponsored by Erlang Solutions and developed by Vincent Lau. 
- https://www.youtube.com/watch?v=tHdPRyMjmW8 (https://www.youtube.com/watch?v=tHdPRyMjmW8?utm_source=thinkingelixir&utm_medium=shownotes) – Vincent Lau spoke at Code BEAM Europe 2023 about his work on adding CRDTs to Mnesia for automated conflict resolution. - https://www.erlang.org/doc/apps/mnesia/mnesia.html (https://www.erlang.org/doc/apps/mnesia/mnesia.html?utm_source=thinkingelixir&utm_medium=shownotes) – Documentation on Mnesia, a distributed key-value DBMS built into Erlang. - https://x.com/sorentwo/status/1791166342034255938 (https://x.com/sorentwo/status/1791166342034255938?utm_source=thinkingelixir&utm_medium=shownotes) – Oban Pro's launch week introduces unified migrations, worker aliases, better support for distributed databases, faster unique job checks, and the @job decorator for small jobs. - https://x.com/sorentwo/status/1807155900609904973 (https://x.com/sorentwo/status/1807155900609904973?utm_source=thinkingelixir&utm_medium=shownotes) – Improvements in Oban Pro include better batch workflows with mermaid visualizations. - https://getoban.pro/articles/pro-1-5-launch-week-day-1 (https://getoban.pro/articles/pro-1-5-launch-week-day-1?utm_source=thinkingelixir&utm_medium=shownotes) – Article on Oban Pro's launch week, detailing new features and improvements. - https://getoban.pro/articles/pro-1-5-launch-week-day-2 (https://getoban.pro/articles/pro-1-5-launch-week-day-2?utm_source=thinkingelixir&utm_medium=shownotes) – Second day of Oban Pro's launch week article series. - https://getoban.pro/articles/pro-1-5-launch-week-day-3 (https://getoban.pro/articles/pro-1-5-launch-week-day-3?utm_source=thinkingelixir&utm_medium=shownotes) – Third day of Oban Pro's launch week article series. 
- https://alembic.com.au/blog/igniter-rethinking-code-generation-with-project-patching (https://alembic.com.au/blog/igniter-rethinking-code-generation-with-project-patching?utm_source=thinkingelixir&utm_medium=shownotes) – Blog post by Zach Daniel about Igniter, a tool for rethinking code generation with project patching, useful for installing libraries into existing Phoenix applications. - https://2024.elixirconf.com/ (https://2024.elixirconf.com/?utm_source=thinkingelixir&utm_medium=shownotes) – ElixirConf 2024 Preview with details on scheduled speakers and topics. Do you have some Elixir news to share? Tell us at @ThinkingElixir (https://twitter.com/ThinkingElixir) or email at show@thinkingelixir.com (mailto:show@thinkingelixir.com) Find us online - Message the show - @ThinkingElixir (https://twitter.com/ThinkingElixir) - Message the show on Fediverse - @ThinkingElixir@genserver.social (https://genserver.social/ThinkingElixir) - Email the show - show@thinkingelixir.com (mailto:show@thinkingelixir.com) - Mark Ericksen - @brainlid (https://twitter.com/brainlid) - Mark Ericksen on Fediverse - @brainlid@genserver.social (https://genserver.social/brainlid) - David Bernheisel - @bernheisel (https://twitter.com/bernheisel) - David Bernheisel on Fediverse - @dbern@genserver.social (https://genserver.social/dbern)
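For readers curious what "making REST API calls to Ollama" looks like in practice, here is a minimal sketch (in Python rather than Elixir, for brevity). It assumes an Ollama server listening on its default localhost:11434 port and uses Ollama's documented /api/generate endpoint:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_request(model: str, prompt: str) -> dict:
    # stream=False asks Ollama to return one JSON body instead of a chunk stream
    return {"model": model, "prompt": prompt, "stream": False}


def generate(model: str, prompt: str) -> str:
    """POST a prompt to a locally running Ollama server and return the text."""
    body = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Usage (requires a running Ollama server with the model already pulled):
#   print(generate("llama2", "Why is the BEAM good at concurrency?"))
```

Because it is plain HTTP, the same call works from Elixir with Req or any other HTTP client, which is what makes the private Fly.io networking setup in the video possible.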
Could integrating generative AI revolutionize your business processes? Join us as we sit down with Charles Watkins, the newest Salesforce Developer Advocate, to unveil the transformative capabilities of Salesforce's Prompt Builder. Charles shares a compelling example from his own support background, illustrating how this innovative tool can streamline the summarization and categorization of support cases. Learn how Prompt Builder simplifies switching between large language models like OpenAI and Microsoft Azure, making advanced AI accessible without complex integrations. We also decode Salesforce's cutting-edge Models API, currently in beta, offering a granular, controlled approach to generative AI. Charles discusses how the API can be harnessed to build chat-based conversational AI apps, embeddings, and more using REST API and Apex interfaces. Discover the security advantages of the Einstein Trust Layer, which ensures data privacy, and hear about the impressive response times of models like GPT-4. This episode is a treasure trove for developers eager to experiment with Salesforce's open-gen AI ecosystem. Don't miss this opportunity to get inspired and elevate your AI game. Show Highlights: The Prompt Builder tool for automating business processes like summarizing and categorizing support cases. Simplified integration with multiple large language models such as OpenAI and Microsoft Azure. The new Models API for creating chat-based AI apps, embeddings, and text generation solutions. Benefits of the Einstein Trust Layer for enhanced security and data privacy. The rapid response times of models like GPT-4 and the lack of a streaming API. 
Links: Applying Fundamentals of Prompt Engineering with Prompt Builder - https://developer.salesforce.com/blogs/2024/07/applying-fundamentals-of-prompt-engineering-with-prompt-builder Using Prompt Builder, Flows, and Apex to Summarize and Classify Cases Faster - https://developer.salesforce.com/blogs/2024/06/using-prompt-builder-flows-and-apex-to-summarize-and-classify-cases-faster Prompt Builder for Developers Playlist - https://www.youtube.com/playlist?list=PLgIMQe2PKPSIarRb6ud6odVxB1gv-hbO8 Einstein 1 Studio Developer Center - https://developer.salesforce.com/developer-centers/einstein-1-studio
Dive into the world of GraphQL APIs on AWS this week! We'll explore the recently launched feature in AppSync: asynchronous Lambda functions for GraphQL resolvers. But first, we'll break down the advantages of GraphQL over REST APIs and the limitations of synchronous calls in GraphQL. Then, we'll uncover the power of async Lambdas: stream data directly to your client for a more responsive experience and unlock innovative use cases, like generative AI-powered chatbots built with Lambdas. Curious how this can transform your applications? Tune in to learn more! With Derek Bingham, Developer Advocate, AWS https://www.linkedin.com/in/derekwbingham/ - Derek's blog about AppSync async Lambda resolvers https://community.aws/content/2hlqAp86YvckSS2DrVvZ1qdArqF/async-lambda-and-appsync?lang=en - AWS AppSync https://docs.aws.amazon.com/appsync/latest/devguide/what-is-appsync.html - AWS Lambda https://docs.aws.amazon.com/lambda/latest/dg/welcome.html - Streaming a response from a Lambda function https://docs.aws.amazon.com/lambda/latest/dg/configuration-response-streaming.html - AWS AppSync sample code https://github.com/aws-samples/aws-appsync-resolver-samples - Michael (App Sync Developer Advocate) YouTube channel https://www.youtube.com/@focusotter/videos
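To make the synchronous-versus-asynchronous distinction concrete outside of AppSync itself, here is a hedged Python sketch of how a Lambda invocation's InvocationType controls blocking behavior: "RequestResponse" waits for the function's result, while "Event" queues the call and returns immediately. The helper only builds the parameters for boto3's invoke call; the function name is a placeholder.

```python
import json


def build_invoke_params(function_name: str, payload: dict, asynchronous: bool) -> dict:
    """Build the keyword arguments for boto3's lambda_client.invoke().

    InvocationType="RequestResponse" blocks until the function returns its
    result; "Event" makes Lambda queue the invocation and reply with
    202 Accepted immediately, with no result payload.
    """
    return {
        "FunctionName": function_name,
        "InvocationType": "Event" if asynchronous else "RequestResponse",
        "Payload": json.dumps(payload).encode("utf-8"),
    }


# Usage (requires boto3 and AWS credentials; the function name is hypothetical):
#   import boto3
#   client = boto3.client("lambda")
#   client.invoke(**build_invoke_params("my-resolver-fn", {"field": "value"}, asynchronous=True))
```

The async resolver feature discussed in the episode moves this fire-and-forget behavior into AppSync's own Lambda integration, freeing the GraphQL response path from waiting on slow work.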
Online identity is a ticking time bomb. Are trustworthy, open-source solutions ready to disarm it? Or will we be stuck with lackluster, proprietary systems?
Sponsored By:
Core Contributor Membership: Take $1 a month off your membership for a lifetime!
Tailscale: Tailscale is programmable networking software that is private and secure by default - get it free on up to 100 devices!
1Password Extended Access Management: 1Password Extended Access Management is a device trust solution for companies with Okta, and it ensures that if a device isn't trusted and secure, it can't log into your cloud apps.
Support LINUX Unplugged
Links:
Editor's note: One of the top reasons we have hundreds of companies and thousands of AI Engineers joining the World's Fair next week is, apart from discussing technology and being present for the big launches planned, to hire and be hired! Listeners loved our previous Elicit episode and were so glad to welcome 2 more members of Elicit back for a guest post (and bonus podcast) on how they think through hiring. Don't miss their AI engineer job description, and template which you can use to create your own hiring plan!

How to Hire AI Engineers

James Brady, Head of Engineering @ Elicit (ex Spring, Square, Trigger.io, IBM)
Adam Wiggins, Internal Journalist @ Elicit (Cofounder Ink & Switch and Heroku)

If you're leading a team that uses AI in your product in some way, you probably need to hire AI engineers. As defined in this article, that's someone with conventional engineering skills in addition to knowledge of language models and prompt engineering, without being a full-fledged Machine Learning expert.

But how do you hire someone with this skillset? At Elicit we've been applying machine learning to reasoning tools since 2018, and our technical team is a mix of ML experts and what we can now call AI engineers. This article will cover our process from job description through interviewing. (You can also flip the perspectives here and use it just as easily for how to get hired as an AI engineer!)

My own journey

Before getting into the brass tacks, I want to share my journey to becoming an AI engineer.

Up until a few years ago, I was happily working my job as an engineering manager of a big team at a late-stage startup. Like many, I was tracking the rapid increase in AI capabilities stemming from the deep learning revolution, but it was the release of GPT-3 in 2020 which was the watershed moment. At the time, we were all blown away by how the model could string together coherent sentences on demand. 
(Oh how far we've come since then!)

I'd been a professional software engineer for nearly 15 years—enough to have experienced one or two technology cycles—but I could see this was something categorically new. I found this simultaneously exciting and somewhat disconcerting. I knew I wanted to dive into this world, but it seemed like the only path was going back to school for a master's degree in Machine Learning. I started talking with my boss about options for taking a sabbatical or doing a part-time distance learning degree.

In 2021, I instead decided to launch a startup focused on productizing new research ideas on ML interpretability. It was through that process that I reached out to Andreas—a leading ML researcher and founder of Elicit—to see if he would be an advisor. Over the next few months, I learned more about Elicit: that they were trying to apply these fascinating technologies to the real-world problems of science, and with a business model that aligned it with safety goals. I realized that I was way more excited about Elicit than I was about my own startup ideas, and wrote about my motivations at the time.

Three years later, it's clear this was a seismic shift in my career on the scale of when I chose to leave my comfy engineering job at IBM to go through the Y Combinator program back in 2008. Working with this new breed of technology has been more intellectually stimulating, challenging, and rewarding than I could have imagined.

Deep ML expertise not required

It's important to note that AI engineers are not ML experts, nor is that their best contribution to a tech team.

In our article Living documents as an AI UX pattern, we wrote:

It's easy to think that AI advancements are all about training and applying new models, and certainly this is a huge part of our work in the ML team at Elicit. 
But those of us working in the UX part of the team believe that we have a big contribution to make in how AI is applied to end-user problems.

We think of LLMs as a new medium to work with, one that we've barely begun to grasp the contours of. New computing mediums like GUIs in the 1980s, web/cloud in the 90s and 2000s, and multitouch smartphones in the 2000s/2010s opened a whole new era of engineering and design practices. So too will LLMs open new frontiers for our work in the coming decade.

To compare to the early era of mobile development: great iOS developers didn't require a detailed understanding of the physics of capacitive touchscreens. But they did need to know the capabilities and limitations of a multi-touch screen, the constrained CPU and storage available, the context in which the user is using it (very different from a webpage or desktop computer), etc.

In the same way, an AI engineer needs to work with LLMs as a medium that is fundamentally different from other compute mediums. That means an interest in the ML side of things, whether through their own self-study, tinkering with prompts and model fine-tuning, or following along in #llm-paper-club. But this understanding is so that they can work with the medium effectively versus, say, spending their days training new models.

Language models as a chaotic medium

So if we're not expecting deep ML expertise from AI engineers, what are we expecting? This brings us to what makes LLMs different.

We'll assume that our ideal candidate is already inspired by, and full of ideas about, all the new capabilities AI can bring to software products. But the flip side is all the things that make this new medium difficult to work with. LLM calls are annoying due to high latency (measured in tens of seconds sometimes, rather than milliseconds), extreme variance in latency, and high error rates even under normal operation. 
Not to mention getting extremely different answers to the same prompt provided to the same model on two subsequent calls!

The net effect is that an AI engineer, even working at the application development level, needs to have a skillset comparable to distributed systems engineering. Handling errors, retries, asynchronous calls, streaming responses, parallelizing and recombining model calls, the halting problem, and fallbacks are just some of the day-in-the-life of an AI engineer. Chaos engineering gets new life in the era of AI.

Skills and qualities in candidates

Let's put together what we don't need (deep ML expertise) with what we do (work with capabilities and limitations of the medium). Thus we start to see what Elicit looks for in AI engineers:

* Conventional software engineering skills. Especially back-end engineering on complex, data-intensive applications.
* Professional, real-world experience with applications at scale.
* Deep, hands-on experience across a few back-end web frameworks.
* Light devops and an understanding of infrastructure best practices.
* Queues, message buses, event-driven and serverless architectures, … there's no single “correct” approach, but having a deep toolbox to draw from is very important.
* A genuine curiosity and enthusiasm for the capabilities of language models.
* One or more serious projects (side projects are fine) of using them in interesting ways on a unique domain.
* …ideally with some level of factored cognition, e.g. breaking the problem down into chunks, making thoughtful decisions about which things to push to the language model and which stay within the realm of conventional heuristics and compute capabilities.
* Personal studying with resources like Elicit's ML reading list. Part of the role is collaborating with the ML engineers and researchers on our team. 
To do so, the candidate needs to “speak their language” somewhat, just as a mobile engineer needs some familiarity with backends in order to collaborate effectively on API creation with backend engineers.

* An understanding of the challenges that come along with working with large models (high latency, variance, etc.) leading to a defensive, fault-first mindset.
* Careful and principled handling of error cases, asynchronous code (and the ability to reason about and debug it), streaming data, caching, logging and analytics for understanding behavior in production.
* This is a similar mindset to one that can be developed working on conventional apps that are complex, data-intensive, or large-scale. The difference is that an AI engineer will need this mindset even when working on relatively small scales!

On net, a great AI engineer will combine two seemingly contrasting perspectives: knowledge of, and a sense of wonder for, the capabilities of modern ML models; but also the understanding that this is a difficult and imperfect foundation, and the willingness to build resilient and performant systems on top of it.

Here's the resulting AI engineer job description for Elicit. And here's a template that you can borrow from for writing your own JD.

Hiring process

Once you know what you're looking for in an AI engineer, the process is not too different from other technical roles. Here's how we do it, broken down into two stages: sourcing and interviewing.

Sourcing

We're primarily looking for people with (1) a familiarity with and interest in ML, and (2) proven experience building complex systems using web technologies. The former is important for culture fit and as an indication that the candidate will be able to do some light prompt engineering as part of their role. 
The latter is important because language model APIs are built on top of web standards and—as noted above—aren't always the easiest tools to work with.

Only a handful of people have built complex ML-first apps, but fortunately the two qualities listed above are relatively independent. Perhaps they've proven (2) through their professional experience and have some side projects which demonstrate (1).

Talking of side projects, evidence of creative and original prototypes is a huge plus as we're evaluating candidates. We've barely scratched the surface of what's possible to build with LLMs—even the current generation of models—so candidates who have been willing to dive into crazy “I wonder if it's possible to…” ideas have a huge advantage.

Interviewing

The hard skills we spend most of our time evaluating during our interview process are on the “building complex systems using web technologies” side of things. We will be checking that the candidate is familiar with asynchronous programming, defensive coding, and distributed systems concepts and tools, and displays an ability to think about scaling and performance. They needn't have 10+ years of experience doing this stuff: even junior candidates can display an aptitude and thirst for learning which gives us confidence they'll be successful tackling the difficult technical challenges we'll put in front of them.

One anti-pattern—something which makes my heart sink when I hear it from candidates—is that they have no familiarity with ML, but claim that they're excited to learn about it. The amount of free and easily-accessible resources available is incredible, so a motivated candidate should have already dived into self-study.

Putting all that together, here's the interview process that we follow for AI engineer candidates:

* 30-minute introductory conversation. Non-technical, explaining the interview process, answering questions, understanding the candidate's career path and goals.
* 60-minute technical interview. 
This is a coding exercise, where we play product manager and the candidate is making changes to a little web app. Here are some examples of topics we might hit upon through that exercise:
* Update API endpoints to include extra metadata. Think about appropriate data types. Stub out frontend code to accept the new data.
* Convert a synchronous REST API to an asynchronous streaming endpoint.
* Cancellation of asynchronous work when a user closes their tab.
* Choose an appropriate data structure to represent the pending, active, and completed ML work which is required to service a user request.

* 60–90 minute non-technical interview. Walk through the candidate's professional experience, identifying high and low points, getting a grasp of what kinds of challenges and environments they thrive in.
* On-site interviews. Half a day in our office in Oakland, meeting as much of the team as possible: more technical and non-technical conversations.

The frontier is wide open

Although Elicit is perhaps further along than other companies on AI engineering, we also acknowledge that this is a brand-new field whose shape and qualities are only just now starting to form. We're looking forward to hearing how other companies do this and being part of the conversation as the role evolves.

We're excited for the AI Engineer World's Fair as another step for this emerging subfield. 
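One of the exercise topics above, a data structure for pending, active, and completed ML work, pairs naturally with the fault-first mindset discussed earlier. A hypothetical sketch in Python (the names and structure are ours, not Elicit's actual exercise):

```python
import enum
import time
from dataclasses import dataclass
from typing import Callable, Optional


class TaskState(enum.Enum):
    PENDING = "pending"
    ACTIVE = "active"
    COMPLETED = "completed"
    FAILED = "failed"


@dataclass
class ModelTask:
    """One unit of language-model work needed to service a user request."""
    prompt: str
    state: TaskState = TaskState.PENDING
    attempts: int = 0
    result: Optional[str] = None


def run_with_retries(
    task: ModelTask,
    call_model: Callable[[str], str],
    max_attempts: int = 3,
    base_delay: float = 0.0,
) -> ModelTask:
    """Fault-first execution: retry with exponential backoff, tracking state."""
    task.state = TaskState.ACTIVE
    while task.attempts < max_attempts:
        task.attempts += 1
        try:
            task.result = call_model(task.prompt)
            task.state = TaskState.COMPLETED
            return task
        except Exception:
            # Model calls fail routinely; back off and try again.
            time.sleep(base_delay * 2 ** (task.attempts - 1))
    task.state = TaskState.FAILED
    return task
```

In a real system, a FAILED task would then trigger one of the fallbacks the article lists, perhaps a cheaper model or a cached answer, rather than surfacing an error to the user.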
And of course, check out the Elicit careers page if you're interested in joining our team.

Podcast version

Timestamps
* [00:00:24] Intros
* [00:05:25] Defining the Hiring Process
* [00:08:42] Defensive AI Engineering as a chaotic medium
* [00:10:26] Tech Choices for Defensive AI Engineering
* [00:14:04] How do you Interview for Defensive AI Engineering
* [00:19:25] Does Model Shadowing Work?
* [00:22:29] Is it too early to standardize Tech stacks?
* [00:32:02] Capabilities: Offensive AI Engineering
* [00:37:24] AI Engineering Required Knowledge
* [00:40:13] ML First Mindset
* [00:45:13] AI Engineers and Creativity
* [00:47:51] Inside of Me There Are Two Wolves
* [00:49:58] Sourcing AI Engineers
* [00:58:45] Parting Thoughts

Transcript

[00:00:00] swyx: Okay, so welcome to the Latent Space Podcast. This is another remote episode that we're recording. This is the first one that we're doing around a guest post. And I'm very honored to have two of the authors of the post with me, James and Adam from Elicit. Welcome, James. Welcome, Adam.[00:00:22] James Brady: Thank you. Great to be here.[00:00:23] Hey there.[00:00:24] Intros[00:00:24] swyx: Okay, so I think I will do this kind of in order. I think James, you're, you're sort of the primary author. So James, you are head of engineering at Elicit. You were also VP Eng at Teespring and Spring as well. And you have a long history in sort of engineering. How did you find your way into something like Elicit where, , it's, you, you are basically traditional sort of VP Eng, VP technology type person moving into a more of an AI role?[00:00:53] James Brady: Yeah, that's right. It definitely was something of a sideways move if not a left turn. 
So the story there was I'd been doing, as you said, VP technology, CTO type stuff for around about 15 years or so, and noticed that there was this crazy explosion of capability and interesting stuff happening within AI and ML and language models, that kind of thing.[00:01:16] I guess this was in 2019 or so, and decided that I needed to get involved. This is a kind of generational shift. And spent maybe a year or so trying to get up to speed on the state of the art, reading papers, reading books, practicing things, that kind of stuff. Was going to found a startup actually in the space of interpretability and transparency, and through that met Andreas, who has obviously been on the, on the podcast before, asked him to be an advisor for my startup, and he countered with, maybe you'd like to come and run the engineering team at Elicit, which it turns out was a much better idea.[00:01:48] And yeah, I kind of quickly changed in that direction. So I think some of the stuff that we're going to be talking about today is how actually a lot of the work when you're building applications with AI and ML looks and smells and feels much more like conventional software engineering with a few key differences rather than really deep ML stuff.[00:02:07] And I think that's one of the reasons why I was able to transfer skills over from one place to the other.[00:02:12] swyx: Yeah, I[00:02:12] James Brady: definitely[00:02:12] swyx: agree with that. I, I do often say that I think AI engineering is about 90 percent software engineering with like the, the 10 percent of like really strong really differentiated AI engineering.[00:02:22] And that might, that obviously that number might change over time. I want to also welcome Adam onto my podcast because you welcomed me onto your podcast two years ago.[00:02:31] Adam Wiggins: Yeah, that was a wonderful episode.[00:02:32] swyx: That was, that was a fun episode. You famously founded Heroku. 
You just wrapped up a few years working on Muse.[00:02:38] And now you've described yourself as a journalist, internal journalist working on Elicit.[00:02:43] Adam Wiggins: Yeah, well I'm kind of a little bit in a wandering phase here and trying to take this time in between ventures to see what's out there in the world and some of my wandering took me to the Elicit team. And found that they were some of the folks who were doing the most interesting, really deep work in terms of taking the capabilities of language models and applying them to what I feel like are really important problems.[00:03:08] So in this case, science and literature search and, and, and that sort of thing. It fits into my general interest in tools and productivity software. I, I think of it as a tool for thought in many ways, but a tool for science, obviously, if we can accelerate that discovery of new medicines and things like that, that's, that's just so powerful.[00:03:24] But to me, it's a. It's kind of also an opportunity to learn at the feet of some real masters in this space, people who have been working on it since it was, before it was cool, if you want to put it that way. 
So for me, the last couple of months have been this crash course, and why I sometimes describe myself as an internal journalist is I'm helping to write some, some posts, including supporting James in this article here we're doing for Latent Space where I'm just bringing my writing skill and that sort of thing to bear on their very deep domain expertise around language models and applying them to the real world and kind of surface that in a way that's, I don't know, accessible, legible, that, that sort of thing.[00:04:03] And so, and the great benefit to me is I get to learn this stuff in a way that I don't think I would, or I haven't, just kind of tinkering with my own side projects.[00:04:12] swyx: I forgot to mention that you also run Ink and Switch, which is one of the leading research labs, in my mind, of the tools for thought productivity space, whatever people mentioned there, or maybe future of programming even, a little bit of that.[00:04:24] As well. I think you guys definitely started the local first wave. I think there was just the first conference that you guys held. I don't know if you were personally involved.[00:04:31] Adam Wiggins: Yeah, I was one of the co organizers along with a few other folks for, yeah, called Local First Conf here in Berlin.[00:04:36] Huge success from my, my point of view. Local first, obviously, a whole other topic we can talk about on another day. I think there actually is a lot more, what would you call it, handshake emoji between kind of language models and the local first data model. And that was part of the topic of the conference here, but yeah, topic for another day.[00:04:55] swyx: Not necessarily. I mean, I, I selected as one of my keynotes, Justine Tunney, working on llamafile at Mozilla, because I think there's a lot of people interested in that stuff. But we can, we can focus on the headline topic. 
And just to not bury the lead, which is we're talking about how to hire AI engineers, this is something that I've been looking for a credible source on for months.[00:05:14] People keep asking me for my opinions. I don't feel qualified to give an opinion, and it's not like I have some kind of defined hiring process that I'm super happy with, even though I've worked with a number of AI engineers.[00:05:25] Defining the Hiring Process[00:05:25] swyx: I'll just leave it open to you, James. How was your process of defining your hiring, hiring roles?[00:05:31] James Brady: Yeah. So I think the first thing to say is that we've effectively been hiring for this kind of a role since before you, before you coined the term and tried to kind of build this understanding of what it was.[00:05:42] So, which is not a bad thing. Like it's, it was a, it was a good thing. A concept, a concept that was coming to the fore and effectively needed a name, which is, which is what you did. So the reason I mentioned that is I think it was something that we kind of backed into, if you will. We didn't sit down and come up with a brand new role from, from scratch of this is a completely novel set of responsibilities and skills that this person would need.[00:06:06] However, it is a kind of particular blend of different skills and attitudes and curiosities and interests, which I think makes sense to kind of bundle together. So in the, in the post, the three things that we say are most important for a highly effective AI engineer are first of all, conventional software engineering skills, which is kind of a given, but definitely worth mentioning.[00:06:30] The second thing is a curiosity and enthusiasm for machine learning and maybe in particular language models. That's certainly true in our case. 
And then the third thing is to do with basically a fault first mindset, being able to build systems that can handle things going wrong in, in, in some sense.[00:06:49] And yeah, the I think the kind of middle point, the curiosity about ML and language models is probably fairly self evident. They're going to be working with, and prompting, and dealing with the responses from these models, so that's clearly relevant. The last point, though, maybe takes the most explaining.[00:07:07] To do with this fault first mindset and the ability to, to build resilient systems. The reason that is, is so important is because compared to normal APIs, where normal, think of something like a Stripe API or a search API or something like this. The latency when you're working with language models is, is wild, like you can get 10x variation.[00:07:32] I mean, I was looking at the stats before, actually, before, before the podcast. We do often, normally, in fact, see a 10x variation in the P90 latency over the course of, Half an hour, an hour when we're prompting these models, which is way higher than if you're working with a, more kind of conventional conventionally backed API.[00:07:49] And the responses that you get, the actual content and the responses are naturally unpredictable as well. They come back with different formats. Maybe you're expecting JSON. It's not quite JSON. You have to handle this stuff. And also the, the semantics of the messages are unpredictable too, which is, which is a good thing.[00:08:08] Like this is one of the things that you're looking for from these language models, but it all adds up to needing to. Build a resilient, reliable, solid feeling system on top of this fundamentally, well, certainly currently fundamentally shaky foundation. 
The models do not behave in the way that you would like them to.[00:08:28] And yeah, the ability to structure the code around them such that it does give the user this warm, reassuring, snappy, solid feeling is really what we're driving for there.[00:08:42] Defensive AI Engineering as a chaotic medium[00:08:42] Adam Wiggins: What really struck me as we dug in on the content for this article was that third point there. The language models are this kind of chaotic medium, this dragon, this wild horse you're riding and trying to guide in the direction that is going to be useful and reliable to users. Because I think so much of software engineering is about making things not only high performance and snappy, but really just making them stable, reliable, predictable, which is literally the opposite of what you get from the language models. And yet the output is so useful, and indeed some of their creativity, if you want to call it that, is precisely their value.[00:09:19] And so you need to work with this medium. And I guess the nuance, or the thing that came out of Elicit's experience that I thought was so interesting, is that quite a lot of working with that is things that come from distributed systems engineering. But you have really the AI engineers, as we're defining them or labeling them on the Elicit team, as people who are really application developers.[00:09:39] You're building things for end users. You're thinking about, okay, I need to populate this interface with some response to user input that's useful to the tasks they're trying to do, but you have this.
This is the thing, this medium that you're working with, where in some ways you need to apply some of this chaos engineering, distributed systems engineering. And typically the people with those engineering skills are not the application-level developers with the product mindset or whatever; they're more deep in the guts of a system.[00:10:07] And so those skills and that knowledge do exist throughout the engineering discipline, but putting them together into one person, that feels like sort of a unique thing. And working with the folks on the Elicit team who have those skills, I'm quite struck by that unique blend.[00:10:23] I haven't really seen that before in my 30-year career in technology.[00:10:26] Tech Choices for Defensive AI Engineering[00:10:26] swyx: Yeah, that's fascinating. I like the reference to chaos engineering. I have some appreciation; I think when you had me on your podcast, I was still working at Temporal, and that was a nice framework. If you live within Temporal's boundaries, you can pretend that all those faults don't exist, and you can code in a sort of very fault-tolerant way.[00:10:47] What are your solutions around this, actually? I think you're emphasizing having the mindset, but maybe naming some technologies would help? Not saying that you have to adopt these technologies, but they're just quick vectors into what you're talking about when you're talking about distributed systems.[00:11:03] Like, that's such a big, chunky phrase. Are we talking Kubernetes? I suspect we're not; we're talking something else now.[00:11:10] James Brady: Yeah, that's right. It's more at the application level rather than at the infrastructure level, at least the way that it works for us.[00:11:17] So there's nothing kind of radically novel here. It is more a careful application of existing concepts.
So the kinds of tools that we reach for to handle these slightly chaotic objects that Adam was just talking about are retries and fallbacks and timeouts and careful error handling. And, yeah, the standard stuff, really.[00:11:39] We also rely heavily on parallelization, because these language models are not innately very snappy, and there's just a lot of I/O going back and forth. All these things I'm talking about, when I was in the earlier stages of my career, these were kind of the difficult parts that the most senior software engineers would be better at.[00:12:01] It is careful error handling, and concurrency, and fallbacks, and distributed systems, and eventual consistency, and all this kind of stuff. And as Adam was saying, the kind of person that is deep in the guts of some kind of distributed system, a really high-scale backend kind of a problem, would probably naturally have these kinds of skills.[00:12:21] But you'll run into them on day one if you're building an ML-powered app, even if it hasn't got massive scale. One thing that I would mention that we do, yeah, maybe two related things, actually. The first is we're big fans of strong typing.
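The "standard stuff" listed above (retries, timeouts, fallbacks) can be sketched in a few lines. This is a minimal illustration under my own assumptions: the callables, retry counts, and backoff schedule are placeholders, not Elicit's actual code.

```python
import time

def call_with_fallback(primary, fallback, max_retries=3, base_delay_s=1.0):
    """Retry a flaky model call with exponential backoff, then fall back.

    `primary` and `fallback` are stand-in callables for provider requests.
    """
    last_error = None
    for attempt in range(max_retries):
        try:
            return primary()
        except (TimeoutError, ConnectionError) as exc:
            last_error = exc
            time.sleep(base_delay_s * 2 ** attempt)  # 1s, 2s, 4s...
    try:
        return fallback()
    except (TimeoutError, ConnectionError):
        raise last_error  # surface the original failure if both paths die
```

The same shape generalizes to timeouts per attempt and to running several candidate calls in parallel and taking the first success.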
We share the types all the way from the backend Python code to the frontend in TypeScript, and we find that, I mean, we'd probably do this anyway, but it really helps you reason about the shapes of the data which are going to be going back and forth. And that's really important when you can't rely upon them: you're going to have to coerce the data that you get back from the ML if you want it to be structured, basically speaking. The second thing, which is related, is we use checked exceptions inside our Python code base, which means that we can use the type system to make sure we are properly handling all of the various things that could be going wrong, all the different exceptions that could be getting raised.[00:13:16] So, checked exceptions are not really particularly popular. Actually, there are not many people that are big fans of them. But for our particular use case, to really make sure that we've not just forgotten to handle this particular type of error, we have found them useful to force us to think about all the different edge cases that can come up.[00:13:32] swyx: Fascinating. Just a quick note on technology: how do you share types from Python to TypeScript? Do you use GraphQL? Do you use something else?[00:13:39] James Brady: We don't use GraphQL. So we've got the types defined in Python; that's the source of truth. And we go from the OpenAPI spec, and there's a tool that you can use to generate TypeScript types from those OpenAPI definitions.[00:13:57] swyx: Okay, excellent. Okay, cool. Sorry for diving into that rabbit hole a little bit.
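Python has no native checked exceptions, so one common way to get the guarantee James describes (the type checker forcing you to handle every failure mode) is to return errors as values in a typed union. This is my own approximation of the idea, not necessarily how Elicit implements it; the class names are invented for illustration.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Ok:
    text: str

@dataclass
class RateLimited:
    retry_after_s: float

@dataclass
class MalformedOutput:
    raw: str

# The union is the "checked" part: a function returning CompletionResult
# advertises every way it can fail, right in its signature.
CompletionResult = Union[Ok, RateLimited, MalformedOutput]

def render(result: CompletionResult) -> str:
    # A static type checker (mypy/pyright) can flag any variant left unhandled.
    if isinstance(result, Ok):
        return result.text
    if isinstance(result, RateLimited):
        return f"Model busy, retrying in {result.retry_after_s:g}s"
    return f"Could not parse model output: {result.raw[:40]}"
```

The same effect can also be had with `typing.assert_never` in the final branch, which turns a forgotten case into a type error rather than a runtime surprise.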
I always like to spell out technologies for people to dig their teeth into.[00:14:04] How do you Interview for Defensive AI Engineering[00:14:04] swyx: One thing I'll mention quickly is that a lot of the stuff that you mentioned is typically not part of the normal interview loop.[00:14:10] It's actually really hard to interview for, because this is the stuff that you polish out as you go into production. The coding interviews are typically about the happy path. How do we do that? How do you look for a defensive, fault-first mindset?[00:14:24] Because you can write defensive code all day long and not add functionality to your application.[00:14:29] James Brady: Yeah, it's a great question, and I think that's exactly true. Normally the interview is about the happy path, and then there's maybe a box-checking exercise at the end where the candidate says, of course, in reality I would handle the edge cases, or something like this. And that unfortunately isn't quite good enough when the happy path is very, very narrow and there's lots of weirdness on either side. So basically speaking, it's just a case of foregrounding those kinds of concerns through the interview process.[00:14:58] There's no magic to it. We talk about this in the post that we're going to be putting up on Latent Space. There are two main technical exercises that we do through our interview process for this role. The first is more coding-focused, and the second is more system-designy:[00:15:16] whiteboarding a potential solution. And without giving too much away, in the coding exercise you do need to think about edge cases. You do need to think about errors. The exercise consists of adding features and fixing bugs inside the code base.
And in both of those two cases, because of the way that we set the application and the interview up, it does demand that you think about something other than the happy path.[00:15:41] But your framing is the right prompt: how do we get the candidate thinking outside of the kind of normal sweet spot, the smoothly paved path? In terms of the system design interview, it's a little easier to prompt this fault-first mindset, because it's very easy in that situation just to say: let's imagine that this node dies. How does the app still work?[00:16:03] Let's imagine that this network is going super slow. Let's imagine that, I don't know, you run out of capacity in this database that you've sketched out here. How do you handle that? That sort of stuff. So in both cases, they're not firmly anchored to and built specifically around language models and the ways language models can go wrong, but we do exercise the same muscles of thinking defensively and, yeah, foregrounding the edge cases, basically.[00:16:32] Adam Wiggins: James, earlier there you mentioned retries. And this is something that I think I've seen some interesting debates internally about. First of all, retries can be costly, right? In general, this medium, in addition to having this incredibly high variance in response rate and being non-deterministic, is actually quite expensive.[00:16:50] And so in many cases, doing a retry when you get a failure does make sense, but that has an impact on cost. And so there is some sense in which, at least, I've seen the AI engineers on our team worry about that.
They worry about, okay, how do we give the best user experience, but balance that against what the infrastructure is going to cost our company? Which I think is, again, an interesting mix: it's a little bit the distributed systems mindset, but it's also a product perspective. You're thinking about the end user experience, but also the bottom line for the business. You're bringing together a lot of qualities there. And there's also the fallback case, which is kind of a related or adjacent one. I think there was also a discussion on that internally. I think it maybe was search; there was something recently where one of the frontline search providers was having some slowness and outages, and essentially then we had a fallback. But essentially that meant that for a while people, especially new users that come in and don't know the difference, were getting worse results for their search.[00:17:52] And so then you have this debate about, okay, there's what is correct to do from an engineering perspective, but then there's also what actually is the best result for the user. Is giving them a kind of worse answer to their search better, or is it better to give them an error and be like, yeah, sorry, it's not working right at the moment, try again later? Both are obviously non-optimal, but this is the kind of thing I think that you run into, or the kind of thing we need to grapple with, a lot more than you would with other kinds of mediums.[00:18:24] James Brady: Yeah, that's a really good example.
I think it brings to the fore the two different things that you could be optimizing for: uptime and a response at all costs on one end of the spectrum, and then effectively fragility, but if you do get a response, it's the best response we can come up with, at the other end of the spectrum.[00:18:43] And where you want to land there depends. Well, it certainly depends on the app, it obviously depends on the user, and I think it depends on the feature within the app as well. So in the search case that you mentioned there, in retrospect, we probably didn't want to have the fallback. And we've actually just recently, on Monday, changed that to show an error message rather than giving people a kind of degraded experience. In other situations, we could use, for example, a large language model from provider B rather than provider A and get something which is within a few percentage points of the performance, and that's just a really different situation.[00:19:21] So yeah, like any interesting question, the answer is: it depends.[00:19:25] Does Model Shadowing Work?[00:19:25] swyx: I do hear a lot of people suggesting, let's call this model shadowing, as a defensive technique, which is: if OpenAI happens to be down, which happens more often than people think, then you fall back to Anthropic or something.[00:19:38] How realistic is that, right? Don't you have to develop completely different prompts for different models, and won't the performance of your application suffer for whatever reason? Like, it may behave differently, or it's not maintained in the same way.
I think that people raise this idea of fallbacks to other models, but I don't see it practiced very much.[00:20:02] James Brady: Yeah, you definitely need to have a different prompt if you want to stay within a few percentage points of degradation, like I said before, and that certainly comes at a cost. Fallbacks and backups and things like this, it's really easy for them to go stale and kind of flake out on you, because they're off the beaten track. In our particular case inside of Elicit, we do have fallbacks for a number of crucial functions where it's going to be very obvious if something has gone wrong, but we don't have fallbacks in all cases.[00:20:40] It really varies on a task-by-task basis throughout the app. So I can't give you a single, simple rule of thumb for, in this case do this, and in the other, do that. But yeah, it's a little bit easier now that the APIs between the Anthropic models and OpenAI's are more similar than they used to be.[00:20:59] So we don't have two totally separate code paths with different wire protocols, so to speak, which makes things easier. But you're right: you do need to have different prompts if you want similar performance across the providers.[00:21:12] Adam Wiggins: I'll also note, just observing again as a relative newcomer here, I was surprised, impressed, not sure what the word is for it, at the blend of different backends that the team is using.[00:21:24] The product presents as kind of one single interface, but there's actually several dozen main paths.
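The point about needing a different prompt per provider can be sketched as a fallback chain where each provider carries its own tuned prompt. Provider names, prompts, and the `complete(provider, prompt)` callable here are illustrative assumptions, not Elicit's configuration.

```python
# Each provider gets its own prompt, since one prompt rarely transfers
# cleanly between model families.
PROVIDER_PROMPTS = [
    ("provider_a", "Summarize this abstract in one sentence:\n{text}"),
    ("provider_b", "You are a concise scientific editor. Reply with exactly "
                   "one sentence summarizing:\n{text}"),
]

def summarize(text, complete):
    """Try each provider in order; `complete(provider, prompt)` raises on failure."""
    errors = []
    for provider, template in PROVIDER_PROMPTS:
        try:
            return complete(provider, template.format(text=text))
        except (TimeoutError, ConnectionError) as exc:
            errors.append((provider, exc))  # keep trying the next provider
    raise RuntimeError(f"all providers failed: {errors}")
```

The maintenance cost James mentions lives in `PROVIDER_PROMPTS`: the secondary prompt goes stale unless it is exercised and evaluated regularly.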
There's, for example, the search, versus a data extraction of a certain type, versus chat with papers, and so on. And for each one of these, the team has worked very hard to pick the right model for the job and craft the prompt there, but also is constantly testing new ones.[00:21:48] So when a new one comes out, from the big providers or in some cases our own models that are running on essentially our own infrastructure, and sometimes that's more about cost or performance, the point is that switching very fluidly between them, and very quickly, because this field is moving so fast and there's new ones to choose from all the time, is part of the day-to-day, I would say.[00:22:11] So it isn't like there's a main one that's been kind of the same for a year, and there's a fallback, but it's got cobwebs on it. It's more like which model and which prompt is changing weekly. And so I think it's quite reasonable to have a fallback that you can expect might work.[00:22:29] Is it too early to standardize Tech stacks?[00:22:29] swyx: I'm curious, because you guys have had experience working at both Elicit, which is a smaller operation, and larger companies. A lot of companies are looking at this with a certain amount of trepidation, as it's very chaotic. When you have one engineering team that knows everyone else's names, and they meet constantly in Slack and know what's going on,[00:22:50] it's easier to sync on technology choices. When you have a hundred teams, all shipping AI products and all making their own independent tech choices, it can be very hard to control. One solution I'm hearing from the Salesforces of the world and Walmarts of the world is that they are creating their own AI gateway, right?[00:23:05] An internal AI gateway. This is the one model hub that controls all the things and has our standards. Is that a feasible thing?
Is that something that you would want? Is that something you have and you're working towards? What are your thoughts on this stuff, like centralization of control, or an AI platform internally?[00:23:22] James Brady: Certainly for larger organizations, and organizations that are running into HIPAA compliance or other legislative constraints like that, it could make a lot of sense. Yeah. I think the TLDR for something like Elicit is: we are small enough, as you indicated, and need to have full control over all the levers available, to switch between different models and different prompts and whatnot, as Adam was just saying, that that kind of thing wouldn't work for us.[00:23:52] But yeah, I've spoken with and advised a couple of companies that are trying to sell into that kind of a space, or are at a larger stage, and it does seem to make a lot of sense for them. So, for example, if you're looking to sell to a large enterprise and they cannot have any data leaving the EU, then you need to be really careful about someone just accidentally putting in the sort of US East 1 GPT-4 endpoints or something like this.[00:24:22] I'd be interested in understanding better what the specific problem is that they're looking to solve with that, whether it is to do with data security, or centralization of billing, or if they have a kind of suite of prompts or something like this that people can choose from so they don't need to reinvent the wheel again and again. I wouldn't be able to say, without understanding the problems and their proposed solutions, which kinds of situations would be a better or worse fit. But yeah, for Elicit, where really the secret sauce, if there is a secret sauce, is which models we're using, how we're using them, how we're combining them, how we're thinking about the user problem, how we're thinking about all these pieces coming together,[00:25:02] you really need to have all of the
affordances available to you to be able to experiment with things and iterate rapidly. And generally speaking, whenever you put these kinds of layers of abstraction and control and generalization in there, that gets in the way. So for us, it would not work.[00:25:19] Adam Wiggins: Do you feel like there's always a tendency to want to reach for standardization and abstractions pretty early in a new technology cycle?[00:25:26] There's something comforting there, or you feel like you can see them, or whatever. I feel like there's some of that discussion around LangChain right now. But yeah, this is not only so early, but also moving so fast. I think it's tough to ask for that. That's not the space we're in. But the larger an organization, the more your default is to want to reach for that.[00:25:48] It's a sort of comfort.[00:25:51] swyx: Yeah, I find it interesting that you would say that, being a founder of Heroku, where you were one of the first platforms-as-a-service that more or less standardized what that sort of early developer experience should have looked like.[00:26:04] And I think basically people are feeling the differences between calling various model lab APIs and having an actual AI platform where all their development needs are thought of for them. It's very much, and I defined this in my AI engineer post as well, like the model labs just see their job ending at serving models, and that's about it. But actually the responsibility of the AI engineer has to fill in a lot of the gaps beyond that.[00:26:31] Adam Wiggins: Yeah, that's true. I think a huge part of the exercise with Heroku, which was largely inspired by Rails, which itself was one of the first frameworks to standardize around the SQL database. And people had been building apps like that for many, many years. I had built many apps.
I had made my own templates based on that. I think others had done it. And Rails came along at the right moment. We had been doing it long enough that you see the patterns, and then you can say: look, let's extract those into a framework that's going to make it not only easier to build for the experts, but also for people who are relatively new, because the best practices are encoded into that framework.[00:27:07] Model View Controller, to take one example. But then, yeah, once you see that, and once you experience the power of a framework, and again, it's so comforting, and you can develop faster, and it's easier to onboard new people to it because you have these standards and this consistency, then folks want that for something new that's evolving.[00:27:29] Now here I'm thinking, maybe fast forward a little to, for example, when React came on the scene a decade ago or whatever. And then, okay, we need to do state management. What's that? And then there's a new library every six months. Okay, this is the one, this is the gold standard.[00:27:42] And then six months later, that's deprecated. Because of course it's evolving. You need to figure it out. The tacit knowledge and the experience of putting it in practice and seeing what those real needs are, those are critical. And so it is really about finding the right time to say: yes, we can generalize, we can make standards and abstractions, whether it's for a company, whether it's for a library, an open source library, for a whole class of apps. And it's very much a judgment call, slash just a sense of taste or experience, to be able to say, yeah, we're at the right point.[00:28:16] We can standardize this. But at least my sense, and again, I'm so new to this world compared to you both, but my sense is, yeah, it's still the wild west. That's what makes it so exciting, and it feels kind of too early for
too much in the way of standardized abstractions. Not that it's not interesting to try, but you can't necessarily get there in the same way Rails did until you've got that decade of experience of building different classes of apps with that technology.[00:28:45] James Brady: Yeah, it's interesting to think about what is going to stay more static and what is expected to change over the coming five years, let's say. Which, when I think about it through an ML lens, seems like an incredibly long time; if you just said five years, it doesn't seem that long.[00:29:01] I think that kind of speaks to part of the problem here, which is that the things that are moving are moving incredibly quickly. This is my hot take rather than some kind of official, carefully thought-out position, but my hot take would be something like: you'll be able to get to good-quality apps without doing really careful prompt engineering.[00:29:21] I don't think that prompt engineering is going to be a kind of durable, differentiating skill that people will hold. I do think that the way that you set up the ML problem, to kind of ask the right questions, if you see what I mean, rather than the specific phrasing of exactly how you're doing chain of thought or few-shot or something in the prompt, I think the way that you set it up is probably going to remain trickier for longer.[00:29:47] And I think some of the operational challenges that we've been talking about, of wild variations in latency, and handling the... I mean, one way to think about these models is this: the first lesson that you learn when you're a software engineer is that you need to sanitize user input, right?[00:30:05] I think it was the top OWASP security threat for a while. You have to sanitize and validate user input. And we got used to that.
And it kind of feels like this is the shell around the app, and then everything else inside, you're kind of in control of, and you can grasp and you can debug, etc.[00:30:22] And what we've effectively done is, through some kind of weird rearguard action, we've now got these slightly chaotic things. I think of them more as complex adaptive systems, which is related but a bit different; they definitely have some of the same dynamics. We've injected these into the foundations of the app, and you kind of now need to think with this defensive mindset downwards as well as upwards, if you see what I mean.[00:30:46] So I think it will take a while for us to truly wrap our heads around that. And also, these kinds of problems, where you have to handle things being unreliable and slow sometimes and whatever else, even if it doesn't happen very often: there isn't some kind of industry-wide accepted way of handling that at massive scale.[00:31:10] There are definitely patterns and anti-patterns and tools and whatnot, but it's not like this is a solved problem. So I would expect that it's not going to go down easily as a solvable problem at the ML scale either.[00:31:23] swyx: Yeah, excellent. In the terminology of the stuff that I've written in the past, I describe this inversion of architecture as sort of LLM-at-the-core versus code-at-the-core.[00:31:34] We're very used to code at the core. Actually, we can scale that very well. When we build LLM-core apps, we have to realize that the central part of our app that's orchestrating things is actually a prompt, prone to prompt injections and non-determinism and all that good stuff.[00:31:48] I did want to move the conversation a little bit from the sort of defensive side of things to the more offensive or fun side of things, the capabilities side of things, because that is the other part
of the job description that we kind of skimmed over. So I'll repeat what you said earlier.[00:32:02] Capabilities: Offensive AI Engineering[00:32:02] swyx: You want people to have a genuine curiosity and enthusiasm for the capabilities of language models. We're recording this the day after Anthropic just dropped Claude 3.5. And I was wondering, maybe this is a good exercise: how do people have curiosity and enthusiasm for the capabilities of language models when, for example, the research paper for Claude 3.5 is four pages?[00:32:23] James Brady: Maybe that's not a bad thing, actually, in this particular case. So yeah, if you really want to know exactly how the sausage was made, that hasn't been possible for a few years now, in fact, for these new models. But from our perspective, when we're building Elicit, what we primarily care about is: what can these models do?[00:32:41] How do they perform on the tasks that we already have set up and the evaluations we have in mind? And then, on a slightly more expansive note, what kinds of new capabilities do they seem to have that we can elicit, no pun intended, from the models? For example, there are very obvious ones like multimodality: there wasn't that, and then there was that. Or it could be something a bit more subtle, like it seems to be getting better at reasoning, or it seems to be getting better at metacognition, or it seems to be getting better at marking its own work and giving calibrated confidence estimates, things like this.[00:33:19] So yeah, there's plenty to be excited about there. It's just that, rightly or wrongly, there's been this shift over the last few years to not give all the details.
So no, from an application development perspective, every time there's a new model release, there's a flurry of activity in our Slack, and we try to figure out what's going on.[00:33:38] What it can do, what it can't do, run our evaluation frameworks, and yeah, it's always an exciting, happy day.[00:33:44] Adam Wiggins: Yeah, from my perspective, what I'm seeing from the folks on the team is, first of all, just awareness of the new stuff that's coming out, so that's an enthusiasm for the space and following along, and then being able to very quickly, partially that's having Slack to do this, but being able to quickly map that to: okay, what does this do for our specific case?[00:34:07] And the simple version of that is: let's run the evaluation framework, of which Elicit has quite a comprehensive one. I'm actually working on an article on that right now, which I'm very excited about, because it's a very interesting world of things. But basically, you can just try the new model in the evaluations framework.[00:34:27] Run it. It has a whole slew of benchmarks, which include not just accuracy and confidence, but also things like performance, cost, and so on. And all of these things may trade off against each other. Maybe it's actually very slightly worse, but it's way faster and way cheaper, so actually this might be a net win, for example.[00:34:46] Or it's way more accurate, but that comes at being slower and higher cost, and so now you need to think about those trade-offs. And so, to me, coming back to the qualities of an AI engineer, especially when you're trying to hire for them: it is very much an application developer, in the sense of a product mindset. What are our users or our customers trying to do?[00:35:08] What problem do they need solved? Or what does our product solve for them? And how do the capabilities of a particular model potentially solve that better for them than what exists today?
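The trade-off weighing described above (slightly worse accuracy, but way faster and cheaper, can be a net win) can be reduced to a toy scoring sketch. The metric values and weights below are invented for illustration and are not from Elicit's evaluation framework.

```python
# Weight accuracy positively, latency and cost negatively. Real evals
# would use many tasks and statistical significance, not single numbers.
WEIGHTS = {"accuracy": 1.0, "p90_latency_s": -0.05, "cost_usd_per_call": -2.0}

def net_score(metrics):
    return sum(WEIGHTS[k] * metrics[k] for k in WEIGHTS)

eval_results = {
    "current_model": {"accuracy": 0.83, "p90_latency_s": 4.0, "cost_usd_per_call": 0.020},
    "new_model":     {"accuracy": 0.81, "p90_latency_s": 1.0, "cost_usd_per_call": 0.004},
}

# The "new" model is slightly less accurate but much faster and cheaper,
# so under these weights it comes out ahead.
winner = max(eval_results, key=lambda name: net_score(eval_results[name]))
```

The interesting design work is in choosing the weights, which is exactly the product judgment Adam is describing: how much accuracy is a second of latency, or a cent of cost, actually worth to your users?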
And by the way, what exists today is becoming an increasingly gigantic cornucopia of things, right? And so you say, okay, this new model has these capabilities. The simple version of that is: plug it into our existing evaluations, look at the results, and see if it seems better as a straight swap-out. But when you talk about, for example, multimodal capabilities, then you say, okay, wait a minute, maybe there's a new feature, a whole bunch of ways we could be using it. Not just a simple model swap-out, but actually a different thing we could do that we couldn't do before, that would have been too slow, or too inaccurate, or something like that, that now we do have the capability to do.[00:35:58] I think of that as being a great thing. I don't even know if I want to call it a skill; maybe it's even like an attitude or a perspective, which is a desire to both be excited about the new technology, the new models and things as they come along, but also holding in mind: what does our product do?[00:36:16] Who is our user? And how can we connect the capabilities of this technology to how we're helping people in whatever it is our product does?[00:36:25] James Brady: Yeah, I'm just looking at one of our internal Slack channels where we talk about things like new model releases and that kind of thing. And it is notable, looking through these, the kinds of things that people are excited about and not. It's not, I don't know, the context window is much larger, or look at how many parameters it has, or something like this.[00:36:44] It's always framed in terms of: maybe this could be applied to that part of Elicit, or maybe this would open up this new possibility for Elicit.
And, as Adam was saying, yeah, I don't think it's a novel or separate skill; it's the kind of attitude I would like all engineers to have at a company our stage, actually.[00:37:05] And maybe more generally, even, which is not just getting nerd-sniped by some technology number, fancy metric, or something, but how is this actually going to be applicable to the thing which matters in the end? How is this going to help users? How is this going to help move things forward strategically?[00:37:23] That kind of thing.[00:37:24] AI Engineering Required Knowledge[00:37:24] swyx: Yeah, applying it, I think, is the key here. Getting hands-on as well. I would recommend a few resources for people listening along. The first is Elicit's ML reading list, which I found so delightful after talking with Andreas about it.[00:37:38] It looks like that's part of your onboarding. We've actually set up an asynchronous paper club inside my Discord for people following along with that reading list. I love that you separate things out into tier one and two and three, and that gives people a factored-cognition way of looking into the corpus, right?[00:37:55] Like, yes, the corpus of things to know is growing, and the water is slowly rising as far as what the bar for a competent AI engineer is. But I think having some structured thought as to what are the big ones that everyone must know is key. It's something I haven't really defined for people, and I'm glad that there's actually something out there that people can refer to.[00:38:15] Yeah, I wouldn't necessarily make it required for the job interview, maybe, but it'd be interesting to see what would be a red flag. 
If some AI engineer would not know, I don't know what, I don't know where we would stoop to calling something required knowledge, or you're not part of the cool kids' club.[00:38:33] But there increasingly is something like that, right? Like, not knowing what context is, is a black mark, in my opinion, right?[00:38:40] I think it does connect back to what we were saying before, of this genuine curiosity. Well, maybe it's actually that combined with something else which is really important, which is a self-starting, bias-towards-action kind of mindset, which again, everybody needs.[00:38:56] Exactly. Yeah. Everyone needs that. So if you put those two together, if I'm truly curious about this and I'm going to figure out how to make things happen, then you end up with people reading reading lists, reading papers, doing side projects, this kind of thing. So it isn't something that we explicitly included.[00:39:14] We don't have an ML-focused interview for the AI engineer role at all, actually. It doesn't really seem helpful. The skills which we are checking for, as I mentioned before, are this kind of fault-first mindset and conventional software engineering kind of thing. It's 0.1 and 0.3 on the list that we talked about. In terms of checking for ML curiosity, and how familiar they are with these concepts, that's more through talking interviews and culture-fit types of things. We want them to have a take on what Elicit is doing, certainly as they progress through the interview process.[00:39:50] They don't need to be completely up to date on everything we've ever done on day zero. Although that's always nice when it happens. But we want them to really engage with it, ask interesting questions, and be kind of bought into our view on how we want ML to proceed. 
I think that is really important, and that would reveal that they have this interest, this ML curiosity.[00:40:13] ML First Mindset[00:40:13] James Brady: There's a second aspect to that. I don't know if now's the right time to talk about it, which is, I do think that an ML-first approach to building software is something of a different mindset. I could describe that a bit now, if that seems good. Okay. So yeah, I think when I joined Elicit, this was the biggest adjustment that I had to make personally.[00:40:37] So as I said before, I'd been effectively building conventional software stuff for 15 years or so, something like this, well, for longer actually, but professionally for about 15 years, and had a lot of pattern matching built into my brain and kind of muscle memory: if you see this kind of problem, then you do that kind of a thing.[00:40:56] And I had to unlearn quite a lot of that when joining Elicit, because we truly are ML-first and try to use ML to the fullest. And some of what that means is this relinquishing of control, almost. At some point you are calling into this fairly opaque black-box thing and hoping it does the right thing and dealing with the stuff that it sends back to you.[00:41:17] And that's very different from interacting with, again, APIs and databases, that kind of a thing. You can't just keep on debugging. At some point you hit this obscure wall. And I think the second part to this is that the pattern I was used to is that the external parts of the app are where most of the messiness is, not necessarily in terms of code, but in terms of degrees of freedom, almost.[00:41:44] The user can and will do anything at any point, and they'll put all sorts of wonky stuff inside of text inputs, and they'll click buttons you didn't expect them to click, and all this kind of thing. 
But then by the time you're down into your SQL queries, for example, as long as you've done your input validation, things are pretty well defined.[00:42:01] And that, as we said before, is not really the case when you're working with language models. There is this kind of intrinsic uncertainty when you get down to the kernel, down to the core. Even beyond that, all of that stuff is somewhat defensive; these are things to be wary of to some degree.[00:42:18] Though the flip side of that, the really positive part of taking an ML-first mindset when you're building applications, is that once you get comfortable taking your hands off the wheel at a certain point and relinquishing control, letting go, then really unexpected, powerful things can happen if you lean on the capabilities of the model without trying to overly constrain and slice and dice problems to the point where you're not really wringing out the most capability from the model that you might.[00:42:47] So I was trying to think of examples of this earlier, and one that came to mind was from really early on, just after I joined Elicit. We were working on something where we wanted to generate text and include citations embedded within it. So it'd have a claim, and then square brackets, one, in superscript, something like this.[00:43:07] And every fiber in my being was screaming that we should have some way of forcing this to happen, or structured output, such that we could guarantee that this citation was always going to be present later on, and that the indication of a footnote would actually match up with the footnote itself. I kind of went into this symbolic, I-need-full-control mindset, and it was notable that Andreas, who's our CEO, and again, has been on the podcast, was the opposite. 
He was just kind of, give it a couple of examples and it'll probably be fine, and then we can figure it out with a regular expression at the end. And it really did not sit well with me, to be honest.[00:43:46] I was like, but it could say anything. It could literally say anything. And I don't know about just using a regex to handle this. This is a potent feature of the app. But that was my first, starkest introduction to this ML-first mindset, I suppose, which Andreas has been cultivating for much longer than me, much longer than most. Yeah, there might be some surprises in the stuff you get back from the model, but it's about finding the sweet spot, I suppose, where you don't want to give a completely open-ended prompt to the model and expect it to do exactly the right thing.[00:44:25] You can ask it too much, and it gets confused and starts repeating itself, or goes around in loops, or just goes off in a random direction or something like this. But you can also over-constrain the model and not really make the most of its capabilities. And I think that is a mindset adjustment that most people coming into AI engineering afresh would need to make: yeah, giving up control and expecting that there's going to be a little bit of extra pain and defensive stuff on the tail end, but the benefits that you get as a result are really striking.[00:44:58] The ML-first mindset, I think, is something that I struggle with as well, because the errors, when they do happen, are bad. They will hallucinate, and your systems will not catch it sometimes if you don't have a large enough sample set.[00:45:13] AI Engineers and Creativity[00:45:13] swyx: I'll leave it open to you, Adam. What else do you think about when you think about curiosity and exploring capabilities?[00:45:22] Are there reliable ways to get people to push themselves? 
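The "give it a couple of examples and figure it out with a regular expression at the end" approach from the citation story above can be sketched like this. The `[1]`-style inline markers and numbered footnote lines are illustrative assumptions about the output format, not Elicit's actual implementation; the point is the defensive tail-end check that replaces hard structural guarantees:

```python
import re

def extract_citations(text):
    """Pull inline [n] citation markers and numbered footnotes out of
    free-form model output.

    Returns (marker_ids, footnotes) so a later step can check that
    every marker in the body has a corresponding footnote.
    """
    # Inline markers like [1] embedded in the generated prose
    markers = [int(m) for m in re.findall(r"\[(\d+)\]", text)]
    # Footnote lines like "1. Alberts et al. 2014" at the end
    footnotes = {
        int(n): ref.strip()
        for n, ref in re.findall(r"^(\d+)\.\s+(.+)$", text, re.MULTILINE)
    }
    return markers, footnotes

def dangling_markers(text):
    """Defensive check on the tail end: return any marker that has
    no matching footnote, instead of guaranteeing structure up front."""
    markers, footnotes = extract_citations(text)
    return [m for m in markers if m not in footnotes]
```

A non-empty result from `dangling_markers` is the "extra pain on the tail end" James mentions: the few-shot prompt makes well-formed output likely, and this cheap post-hoc pass catches the cases where the model surprises you.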
Because I think a lot of times we have this implicit overconfidence, maybe, where we think we know what a thing is when actually we don't, and we need to keep a more open mind. And I think you do a particularly good job of always having an open mind, and I want to get that out of more engineers that I talk to, but I struggle sometimes.[00:45:45] Adam Wiggins: I suppose being an engineer is, at its heart, this sort of contradiction of, on one hand, yeah,