In 1263 BCE, priests announced the death of the APIS BULL. Sacred to Ptah, the bull dwelled in the temple at Men-nefer (Memphis). Now, in year 30 of Ramesses II, the King's son KHA-EM-WASET would lead the funerary processions. Shortly after, the prince inaugurated the first phase of a now famous monument: the Lesser Vaults of the SERAPEUM began to take shape. The prince also started a project for which he is renowned, the preservation and restoration of old monuments. These acts have earned him the moniker "the first Egyptologist." Logo: Statue of Khaemwaset from Asyut, now in the British Museum (Photo Dominic Perry). Music: Keith Zizza www.keithzizza.net, used with artist's permission. Learn more about your ad choices. Visit megaphone.fm/adchoices
Saurabh Shintre, Founder and CEO of Realm Labs, is on Defender Fridays today to discuss securing AI from within. Saurabh previously led AI security research at Splunk and Symantec. He has been at the forefront of AI security research for nearly a decade, with multiple publications and patents, and regularly speaks on public forums about security and AI. Saurabh holds a PhD from Carnegie Mellon. Learn more at https://www.realmlabs.ai/

Register for Live Sessions
Join us every Friday at 10:30am PT for live, interactive discussions with industry experts. Whether you're a seasoned professional or just curious about the field, these sessions offer an engaging dialogue between our guests, hosts, and you – our audience.
Register here: https://limacharlie.io/defender-fridays

Subscribe to our YouTube channel and hit the notification bell to never miss a live session or catch up on past episodes!

Sponsored by LimaCharlie
This episode is brought to you by LimaCharlie, a cloud-native SecOps platform where AI agents operate security infrastructure directly. Founded in 2018, LimaCharlie provides complete API coverage across detection, response, automation, and telemetry, with a multi-tenant architecture designed for MSSPs and MDR providers managing thousands of unique client environments.

Why LimaCharlie?
Transparency: Complete visibility into every action and decision. No black boxes, no vendor lock-in.
Scalability: Security operations that scale like infrastructure, not like procurement cycles. Move at cloud speed.
Unopinionated Design: Integrate the tools you need, not just those your contracts allow. Build security on your terms.
Agentic SecOps Workspace (ASW): AI agents that operate alongside your team with observable, auditable actions through the same APIs human analysts use.
Security Primitives: Composable building blocks that endure as tools come and go. Build once, evolve continuously.

Try the Agentic SecOps Workspace free: https://limacharlie.io
Learn more: https://docs.limacharlie.io/

Follow LimaCharlie
Sign up for free: https://limacharlie.io/
LinkedIn: /limacharlieio
X: https://x.com/limacharlieio
Community Discourse: https://community.limacharlie.com/

Host: Maxime Lamothe-Brassard - CEO / Co-founder at LimaCharlie
In this week's podcast: Nosema apis and Nosema ceranae, two spore-forming, parasitic microsporidia! They sound like something out of a horror show for our bees, and the effects of a heavy infection can be quite damaging. Listen in as I explain what it is, how you can identify it, and ultimately how to deal with it, so your bees can have a healthy and productive summer.

Hi, I'm Stewart Spinks, welcome to Episode 382 of my podcast, Beekeeping Short and Sweet.

Please support us through the affiliate links below; they cost you nothing and help us continue to produce our content.

References: Glavinic U, Blagojevic J, Ristanic M, Stevanovic J, Lakic N, Mirilovic M, Stanimirovic Z. Use of Thymol in Nosema ceranae Control and Health Improvement of Infected Honey Bees. Insects. 2022 Jun 24;13(7):574. doi: 10.3390/insects13070574. PMID: 35886750; PMCID: PMC9319372.

Hive Five Multi Guard Entrances
Beekeeping Courses at Thorne Beehives in Wragby, Lincolnshire, 2026

Some of my favourite microscopy books:
Pollen Loads of the Honeybee by Dorothy Hodges
Rex Sawyer's Pollen Identification
Pollen Grains and Honeydew by Margaret Adams
The Pollen Landscape by Joss Bartlett
Pollen Microscopy by Norman Chapman

The National Bee Unit Varroa information can be found HERE
Bee Aware Varroa information can be found HERE
Thorne Beehives Bees on a Budget Hive
The Beekeeper's Dictionary website
Ethyl acetate for colony destruction can be found here
Gardening potting tray for effective frame cleaning
Stainless steel stock pots for use as a double boiler. Get one slightly larger than the other to fit inside.
Gas stove for outdoor use to render wax and old comb.

Contact me at The Norfolk Honey Company
VMD Website: Click HERE

Join our beekeeping community in the following ways:
Early Release & Additional Video and Podcast Content - Access Here
Stewart's Beekeeping Basics Facebook Private Group - Click Here
Twitter - @NorfolkHoneyCo - Check Out Our Feed
Instagram - @norfolkhoneyco - View Our Great Photographs
Sign up for my email updates by visiting my website here

Amazon links are affiliate links. I receive a small commission should you choose to purchase.
Support the show
Podcast ONE: March 6, 2026. CoPaw (local AI without the cloud), GPT‑5.4 with a million tokens, the new "budget" MacBook Neo, the Iran‑Israel war amplified by AI disinformation, and everything #MWC2026 left behind. Listen to the new episode of #PodcastONE on One Digital.

Facebook Live One Digital: CoPaw, GPT-5.4, MacBook Neo, and the geopolitical chaos of March 2026

In this episode from Friday, March 6, 2026, broadcast live from São Paulo (Brazil) and Mexico City, Vincent Quezada and Pablo Berruecos analyze an explosive week: local artificial intelligence tools (CoPaw), the launch of GPT‑5.4 with a one-million-token context window, the MacBook Neo (the most affordable Apple laptop in its history), the Iran‑Israel geopolitical conflict amplified by AI disinformation on social media, and Mobile World Congress 2026, which redefined privacy, security, and mobile connectivity. An episode that sums up the state of technology, geopolitics, and digital ethics in 2026.

What is CoPaw? A fully local AI agent with no cloud dependencies

Vincent opens the episode by introducing CoPaw (Co‑Personal Agent Workstation), an artificial intelligence agent that runs entirely on your local machine, without processing data on external servers the way ChatGPT or Gemini do. Its architecture is a direct evolution of the COD agents (Alibaba's multi-agent framework). The critical difference: all information stays on your machine, which guarantees total privacy and offline operation once the project is installed.

"CoPaw is not simply a chat client for local models. It is a task orchestrator that can browse the internet, read PDFs, generate Word documents, send Telegram messages, and run scheduled actions automatically without human intervention." — Vincent Quezada

CoPaw technical requirements: hardware and software

- Minimum RAM: 8 GB (16 GB ideal for multitasking).
- Storage: 10 GB minimum (20 GB recommended for large models).
- Software: Python 3.10, Node.js v18.
- GPU optional but recommended: an NVIDIA card with CUDA speeds up responses from 15‑40 seconds to 3‑8 seconds.
- Compatibility: Windows, macOS, and Linux; the automatic installer handles all dependencies.
- Model engine: Ollama (downloadable from ollama.com), available for Windows, macOS, Ubuntu, and Debian.

Local language models by need and available RAM

Model choice depends on your hardware and your use case. Vincent explains that the number at the end of the name (3B, 7B, 8B, 14B) represents the billions of parameters the model handles; the higher the number, the greater the accuracy, but also the more RAM required.

- Phi 3 Mini (4 GB RAM): short answers, basic machines, introductory use.
- Llama 2 8B (8 GB RAM): medium speed (15‑40 seconds), ideal for general writing, text analysis, and summaries.
- Mistral 7B (8 GB RAM): specialized in creative writing and summarizing long content.
- DeepSeek 8B (8 GB RAM): logical reasoning, code analysis, and debugging.
- Qwen 3 (14B) (16 GB RAM): complex tasks and extensive data analysis; slow without a GPU.

"Don't use a 20-gigabyte model for a simple translation. It's like driving a cargo truck to go to the store. Choose according to your actual task." — Vincent Quezada

Specialized modules that take CoPaw beyond basic chat

CoPaw includes independent modules that activate automatically based on the context of your task. Each requires some specific configuration.

- Browser Reissable: an autonomous web browser that searches for information in real time; requires installing Playwright.
- News Module: automatic news search and summarization; requires a Tavily API key (free for 1,000 searches per month).
- File Reader: reads local files (.txt, .csv, .json) with no extra configuration.
- PDF Module: extracts, analyzes, and summarizes complex PDFs.
- DOCX Module: creates and edits Word documents automatically.
- XLSX Module: manipulates spreadsheets and computes column averages, maximums, and minimums.
- PPTX Module: generates PowerPoint presentations automatically.
- Cron Jobs (automation): schedules tasks to run at specific intervals (daily, weekly, every N hours) without user intervention.
- Email Manager (Himalaya): automatic email management; Vincent recommends it only for advanced users.

Practical use cases by experience level

Beginner:
- "Find today's most important artificial intelligence news."
- "Explain the difference between machine learning and deep learning with practical examples."
- "Draft a formal email requesting a meeting with an important client."

Intermediate:
- "Read the file C:\Usuarios\Documentos\reporte.pdf and generate an executive summary of at most 500 words."
- "Open ventas_2025.xlsx, identify the three months with the highest growth between January and March, and show the percentages."
- "Go to Amazon.com.mx, search for wireless headphones under 1,500 pesos, and list the five best options with price and link."

Advanced:
- "Find today's five most important tech news stories, write a 150-word paragraph for each, and save the result to noticiashoy.docx."
- "Read all the .csv files in C:\datos, combine them into one, and compute the average, maximum, and minimum of every numeric column."
- "Go to LinkedIn, find content-writer job postings published this week in Mexico City, extract titles, companies, and links, and save everything to empleos.xlsx."

Scheduled-task automation: CoPaw's real differentiator

The most powerful feature is the ability to schedule automatic runs without the user being present.
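CoPaw's model engine is Ollama, which the notes say must be reachable at exactly localhost:11434. As a rough illustration of what a request to a local model looks like, here is a minimal Python sketch against Ollama's standard /api/generate endpoint; the model name and prompt are illustrative, and this is not CoPaw's actual code:

```python
import json
import urllib.request

# Default local Ollama endpoint (the same localhost:11434 address CoPaw expects).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> bytes:
    """Build the JSON body that Ollama's /api/generate endpoint expects.

    stream=False asks for one complete JSON reply instead of chunked output.
    """
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask_local_model(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama model and return its reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires Ollama running and the model pulled, e.g. `ollama pull mistral`):
# print(ask_local_model("mistral", "Summarize today's top AI news in one sentence."))
```

Everything stays on the machine: the only network hop is to localhost, which is the privacy argument Vincent makes for CoPaw.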
This turns CoPaw from a simple chat tool into a genuine productivity assistant.

- Daily news digest: "Set up a task that runs every day at 8:00 a.m.: find the top technology and AI news and save the result to noticiasdiarias.txt."
- Cryptocurrency price monitoring: "Create a task every six hours: log the current Bitcoin price with date and time to precio.txt."
- Consolidated weekly report: "Schedule a task every Monday at 9:00 a.m.: read all the .txt files in C:\reportes, generate an executive summary, and save the document as reportesemanal.docx."
- Automatic file cleanup: "Set up a task every Friday at 11:00 p.m.: move all .log files older than 30 days to the archivos_antiguos folder."

These variables (frequency, schedules, heartbeat intervals) are controlled in the config.json file. Vincent stresses the importance of careful testing before automating critical processes.

Does CoPaw require internet? Troubleshooting common errors

CoPaw works entirely offline once installed with its model downloaded. It only needs internet for web searches via Tavily and if you configure external APIs (OpenAI, Anthropic). The most frequent errors Vincent ran into during his tests:

- "Cannot connect to CoPaw server": check that you ran copaw start and that port 8088 is available.
- "copaw command not recognized": the executable's directory is not in the system PATH; set the path manually or use the full script.
- "Ollama not available": the address must be exactly localhost:11434 with no suffixes; check the configuration file.

CoPaw vs. OpenCloud: which is better?

"CoPaw was more useful than OpenCloud in my tests. While OpenCloud is very powerful, CoPaw offers faster installation, a more accessible interface, and clearer documentation. Both are open source under the Apache 2.0 license. CoPaw is completely free; only the Tavily key carries an optional cost (about 10 dollars a month)." — Vincent Quezada

MacBook Neo: the first truly affordable Apple laptop (599 dollars)

Apple launched the MacBook Neo, a historic break in its pricing strategy. For the first time in Macintosh history there is a genuinely affordable Apple laptop: 599 dollars (499 dollars for education). Aimed at students and new users, it represents a radical shift in the democratization of the Apple ecosystem.

MacBook Neo technical specifications

- Processor: A18 Pro chip; six cores (two performance and four efficiency); five-core GPU; six-core Neural Engine for AI tasks.
- AI performance: up to three times faster on artificial intelligence workloads than the competition; full access to Apple Intelligence while keeping data private.
- Liquid Retina display: 13 inches, 2,408 × 1,506 pixels, 510 nits of brightness, support for one billion colors; one of the brightest displays in its price range.
- Battery: 36.5 Wh, up to 16 hours of mixed use; two USB‑C ports with fast charging.
- Design and build: sturdy aluminum chassis weighing just 1.23 kg; available colors: Blush, Indigo, Silver, and Electric.
- Connectivity: Wi‑Fi 6E, Bluetooth 6, 3.5 mm audio jack (rare these days), 1080p FaceTime HD camera, dual microphone, and Dolby Atmos spatial audio.
- Storage: 256 GB base (Vincent questions this spec at this price, since Windows alternatives offer 512 GB for less money).
- Software: macOS preinstalled with full Apple Intelligence integration.
- Availability: shipping starts March 11, 2026.

"The display is truly exceptional. It's one of the best I've seen compared with iPads and traditional monitors. On that front alone the MacBook Neo justifies itself." — Vincent Quezada

Who is the MacBook Neo for?

- Students: they need a powerful, light machine with all-day battery; the education price (499 dollars) is especially attractive.
- New Mac users: those looking for an affordable entry into the Apple ecosystem without spending more than 1,200 dollars.
- Everyday-task professionals: web browsing, document editing, video calls, and basic productivity.
- Sustainability-minded users: it is built with 60% recycled material.

Vincent offers one warning: the 256 GB base storage at 599 dollars is questionable, since at the same price you can find Windows laptops with 512 GB that offer better short-term value. Still, the MacBook Neo's design, display, and battery life compete favorably.

OpenAI's GPT‑5.4: a million tokens, automation, and 33% fewer errors

OpenAI launched GPT‑5.4 on March 5, 2026, just one day before this episode. During the conversation, ChatGPT (joining the dialogue with Vincent) explained the key new features that set it apart in the market: a context window of up to one million tokens, a 33% reduction in errors compared with the previous version, deeper automation tools, and tighter integration with professional workflows. (The full technical details are covered at a calmer pace on the show, but the episode's focus is on the practical and geopolitical impact.)

Iran attacks critical infrastructure: AI disinformation amplifies the geopolitical chaos

Midway through the episode, the conversation turns to the conflict erupting across the globe: Iran launched attacks on US military bases, data centers (including Microsoft Azure facilities in the Persian Gulf), and desalination systems in the Middle East.
Vincent and Pablo frame this escalation within a broader story: the United States, in barely 250 years of existence, has been at peace for only 16 of them; the rest has been constant armed conflict. Iran, over four decades, has built up an immense national defensive capability. When million-dollar missiles are launched to destroy 20,000-dollar drones, the economics of war reveal their inherent irrationality.

"We are watching a surgical operation by a country that has spent decades preparing for a moment like this. It isn't improvised; it's strategic calculation. The problem is that it generates extreme nationalism, not internal revolution." — Vincent Quezada

How many countries are actually involved? The conflict expands beyond Iran and Israel

What initially looked like a bilateral Iran‑Israel conflict has expanded to between 16 and 17 countries. It is not just attacks between nations, but also:

- Attacks on US military bases in multiple Persian Gulf nations.
- Compromised critical civilian infrastructure, such as desalination plants that supply water to millions of people.
- Microsoft Azure data centers that run systems for NATO, US defense, and major financial institutions.
- GPS systems degraded or jammed across the conflict zones.

Pablo stresses that one compromised desalination plant in the Persian Gulf affects millions of civilians. This is not merely a military conflict but a systemic attack on civilian survival.

"The initial strategy I read was that, after killing the leader, there would be internal revolution and regime change. It doesn't work that way. You can't undo 40 years of domination, popular belief, and culture with a bombing. It generated extreme nationalism, exactly the opposite." — Pablo Berruecos

Daily economic cost: more than a billion dollars in active conflict

The daily military-spending figure is almost incomprehensible. According to accounts on X (Twitter) that track military spending in real time, the conflict costs more than a billion dollars a day. Combined with the simultaneous stock-market losses in the United States (Nvidia -1.55%, Google in the red, Apple -1.42%, Visa -0.69%, Amazon -0.48%, Tesla -2.33%), the global economic cost is catastrophic.

Breakdown of the first days of attacks

- Day 1 (Iran's first strike): 500 missiles launched at Israel and US bases.
- Day 2: 200 missiles.
- Day 3: 100 missiles.
- Day 4: 50 missiles.
- Day 5 and onward: 15‑20 missiles, but with intensified use of drones and more sophisticated systems.

As for munitions, to intercept each incoming missile the United States used between 10 and 20 Tomahawk missiles, each costing around 4‑5 million dollars. The math is devastating: defending against 500 missiles cost between 5 and 10 billion dollars in defense alone. Iran, with a smaller military budget, amplifies its impact using low-cost drones that replicate the capability of far more expensive missiles.

Why is Dubai panicking? A crisis of confidence in tax havens

Pablo tells an unsettling anecdote: a Spanish influencer moved to Dubai explicitly to avoid paying taxes. When the bombing began, she asked the Spanish government to rescue her. Social media reacted harshly: "You left to avoid taxes, but you expect our taxes to save you." Beyond the media drama, this reveals a deeper crisis of confidence. Dubai represents extreme opulence (pools on every floor, money to burn). At the same time it is a vulnerable city: built in the middle of the desert with no natural resources, it depends on desalinated water and imported oil. One compromised desalination plant leaves millions of people without drinking water. Embassies cannot evacuate everyone; airport capacity is limited. The Gulf states' gold reserves raise questions: who controls them if there is an invasion? Does that currency lose its credibility?

"Dubai gives you an illusion of safety. Then you discover you're as vulnerable as anywhere else. If you lose access to water, money, and energy, the opulence vanishes within hours." — Pablo Berruecos

Is this a third world war? Vincent and Pablo's complicated answer

The big question: is this World War III? Vincent and Pablo answer no, but it is a multinational conflict without recent precedent.

- Factors pushing toward total war: multiple fronts (technological, energy, cyber), incalculable escalation risk, and nuclear power in unstable equilibrium.
- Limiting factors: China does not want to get involved (if it does, it's planetary "game over"); Russia comments from the sidelines; diplomacy exists but feels like fiction.
- Current reality: it is a war without a formal declaration, without clear limits, and without a visible end. It is a major conflict that could become a world war if someone makes the wrong decision.

Social media censorship: TikTok, Grok, and ChatGPT selectively erase reality

Vincent levels a central accusation: social media platforms are censoring the real conflict while amplifying AI-generated disinformation, forming a dual control mechanism.

Selective censorship. TikTok, Grok, and ChatGPT have censored terms like "Free Palestine," blocked videos of verifiable attacks, and silenced reporting on real bombings. The result is that users never see the true scale of the conflict.

Disinformation amplification. At the same time, fake AI-generated videos replicate massively. One documented example is a video of a missile hitting an aircraft carrier, with lifeboats flying off in a physically impossible way. International media reproduced it as if it were a real event.
"A lot of people left ChatGPT this week not because of technical problems, but because OpenAI said 'yes' to participating in the war when Anthropic said 'no.' Around 1.5 million users migrated over ethical concerns." — Vincent Quezada

Tehran's "Police" park: how AI commits atrocities without intent

One detail distills the tragedy: Tehran has a public park called Police Park. US AI systems classified it as a "police military base" and bombed it. There were no police, only civilians. Public infrastructure with no military value was destroyed.

This illustrates an existential crisis: if AI systems are used to identify targets and those systems make classification errors, who is responsible? The usual legal answer is no one, because "it was a machine." The pattern repeats: hospitals destroyed, schools destroyed, churches destroyed. Every error (intentional or not) translates into more civilian casualties.

What percentage of what you see is real, and what part is AI-generated?

This is the question that haunts Pablo at the end of the segment. On social media, the feed is contaminated: old videos from last year, recent videos manipulated with AI, legitimate real-time analysis, coordinated disinformation campaigns, and selective censorship, all mixed together. Pablo cites a report from a European channel (available via Roku) that analyzed the massive volume of fake videos in circulation. The conclusion is terrifying: you no longer know what to believe.

"Between seeing nothing (because it's censored) and seeing everything fake (because it's AI), you end up paralyzed. The truth stops mattering once you can no longer identify it." — Pablo Berruecos

Real technological impact: Microsoft Azure and the conflict's digital backbone

One detail deserves its own analysis: Iran attacked Microsoft data centers in the Persian Gulf. These are not commercial services like AWS, but Azure infrastructure that supports:

- NATO's operational backbone.
- The US Department of Defense.
- Major Western financial institutions.
- 5G military infrastructure.
- Azure availability zones with FedRAMP High classification, the highest a commercial provider can obtain.

If these data centers were to fall (something not yet officially confirmed), the impact on Western defense and finance would be catastrophic. Pablo stresses that this is not a commercial attack but an attack on the digital connective tissue linking defense architecture with sovereign AI ambitions in the Persian Gulf.

Partial conclusion. The Iran‑US‑Israel conflict is no longer only military; it is digital, economic, and technological. AI-generated disinformation amplifies the chaos while selective censorship paralyzes public understanding. The result is a lawless planet where truth is as scarce as peace.

Mobile World Congress 2026: privacy, security, and satellite connectivity

After the geopolitical analysis, Vincent and Pablo steer the conversation toward Mobile World Congress 2026 in Barcelona, the global mobile industry's most important event. This year marks an inflection point: privacy and security stop being optional features and become competitive pillars. Motorola abandons traditional Android for GrapheneOS; several manufacturers launch Europe-exclusive Linux phones; MediaTek integrates 5G satellite connectivity; Nothing unveils the Phone 4 with its transparent Glyph Matrix design. Pablo and Vincent dissect each launch in technical detail.

Nothing Phone 4: transparent Glyph Matrix design

Nothing launched the Phone 4 with a radical proposition: keep the iconic transparent design and add Glyph Matrix, an array of 137,000 mini-LEDs covering 57% of the device's back that shine 100% brighter than in previous generations. These LEDs render customizable icons (battery, timer, digital clock, Glyph mirror, sun path) that turn the rear camera area into a unique haptic and visual interface.

Nothing Phone 4 technical specifications

- Glyph Lift Matrix design: a metal unibody fused with light refractions, smooth seamless finishes, and a retrofuturistic design inspired by vintage cinema cameras and classic consoles.
- Colors: silver, black, and metallic pink (uncommon in 2026 and distinctive at a glance).
- Main rear camera: large Sony Exmor 700c sensor, 50 megapixels, 3.5x optical zoom.
- Ultra-wide camera: 32-megapixel Sony sensor for wide-context capture.
- Lens Engine 4: supports 4K Ultra HDR photos and video, HDR Flex effects, and integrated Dolby Vision.
- 6.83-inch AMOLED display: 1.5K resolution (2,408 × 1,506 pixels), 450 ppi, 144 Hz refresh rate (ideal for gaming), and 5,000 nits peak brightness.
- Protection: Corning Gorilla Glass 7i with improved drop and scratch resistance.
- Processor: Snapdragon 7 series Gen 4; CPU 27% faster and GPU 30% more powerful than the previous generation; AI capabilities 65% higher.
- Memory and storage: LPDDR5X RAM and UFS 3.1 storage with high read and write speeds.
- Battery: 5,080 mAh, 50 W fast charging, and more than 17 documented hours of mixed use.
- Software: Nothing OS 4.1 based on Android 16, with AI Dashboard for controlling AI features, Essential AI for calendar and daily-life organization, Essential Search (instant cross-platform access), Essential Memory (personalization based on activity), Playground (no-code app creation), and Essential Space (cross-platform cloud sync).
- Price and availability: the official reveal is scheduled for March 18, 2026. Vincent confirms an invitation to the event but has a scheduling conflict; he expects to receive review units.

"Nothing's transparent design isn't just aesthetics; it's philosophy. They show what every other brand hides. It's a statement about privacy and accessibility." — Vincent Quezada

Camera tests with the Honor Magic 8 Lite

Vincent shares his camera tests with the Honor Magic 8 Lite, carried out over a weekend in Chapultepec (Mexico City). His conclusions are clear: the photography is excellent; the video is acceptable but shows stabilization limits at maximum zoom. The Honor's battery lasted from Sunday to Friday with 82% remaining at recording time, something Vincent calls a "marvel" compared with the competition. The fast charging also impresses: from 15% to 80% in under 30 minutes.

MediaTek M90: the first 5G chip with built-in satellite connectivity

MediaTek unveiled the M90, the first 5G mobile chip with satellite connectivity integrated from the factory. This lets devices reach networks like Starlink Mobile even without terrestrial cellular infrastructure. In critical contexts (earthquakes, armed conflicts, remote rural areas), this hybrid 5G‑satellite connectivity is survival infrastructure, not a technological luxury.

Why is satellite connectivity critical?
Vincent comparte evidencia directa: durante simulacros de alerta sísmica y terremotos reales de 2026 en México, solo dos de sus cuatro teléfonos recibieron la alerta de emergencia. Los que tenían Wi‑Fi permanente activo y chips compatibles con conectividad satelital sí captaron la señal; los otros, no. La conclusión es inequívoca: la redundancia de conectividad puede literalmente salvar vidas. Casos de uso estratégicos: comunicaciones militares sin depender de operadores civiles comprometidos, navegación precisa en regiones sin torres celulares, transmisión de datos en vehículos autónomos en autopistas remotas y alertas de emergencia en zonas sísmicas o bajo ataque. Implicación geopolítica: gobiernos y fuerzas de seguridad pueden operar de forma independiente a los monopolios de conectividad nacional y los ciudadanos en zonas de conflicto pueden comunicarse sin censura de proveedores locales. Velocidad: no es la más alta (la latencia es mayor que la del 5G terrestre), pero garantiza conectividad donde no hay alternativas viables. “La conectividad satelital no es un lujo; es infraestructura crítica de supervivencia. Si no recibiste la alerta sísmica porque tu teléfono no tenía redundancia, la tecnología fracasó”. — Vincent Quezada Motorola abandona Android tradicional: apuesta por GrapheneOS Motorola anunció oficialmente el fin de su línea de dispositivos con Android estándar y su migración hacia GrapheneOS, un sistema operativo de código cerrado pero obsesionado con la privacidad. GrapheneOS implementa un aislamiento extremo a nivel granular: una aplicación de mensajería no puede acceder a micrófono, cámara o ubicación a menos que el usuario lo autorice explícitamente en cada sesión. Esta decisión responde a una demanda corporativa creciente de teléfonos resistentes a la vigilancia masiva, a ciberataques y a la exfiltración de datos. 
El mercado objetivo son empresas multinacionales, gobiernos, periodistas en contextos de riesgo y usuarios muy conscientes de la privacidad. Ventajas de GrapheneOS: aislamiento estricto por aplicación, permisos granulares que expiran por sesión, resistencia a puertas traseras corporativas o gubernamentales y actualizaciones de seguridad más rápidas que en Android AOSP. Desventajas: fragmentación de aplicaciones, compatibilidad limitada con Google Play Services, ecosistema menos maduro y curva de aprendizaje más pronunciada para usuarios no técnicos. Precio estimado: no se ha revelado oficialmente, pero se espera un sobreprecio de entre el 15% y el 20% respecto a modelos Android estándar. “Android abierto es poderoso pero vulnerable. GrapheneOS es Android cerrado, paranoico y centrado en la privacidad. La elección depende de si valoras más la conveniencia o el control absoluto de tus datos”. — Pablo Berruecos Teléfonos con Linux: código abierto verificable y seguridad auditada Varios fabricantes presentaron prototipos de teléfonos basados completamente en Linux, con lanzamiento inicial exclusivo en Europa. Linux ofrece transparencia total de código fuente, auditoría comunitaria constante y resistencia natural a puertas traseras corporativas o gubernamentales. Aunque el mercado se limita, de momento, a Europa por las estrictas regulaciones del RGPD, las proyecciones apuntan a una expansión global alrededor de 2027. Ventaja clave: código abierto 100% verificable, auditoría de seguridad comunitaria permanente, ausencia de telemetría corporativa oculta y actualizaciones controladas por el usuario. Desafío principal: enorme fragmentación de aplicaciones, compatibilidad casi nula con Google Play Store, ecosistema de apps menos maduro e interfaces menos pulidas que Android o iOS. 
Target audience: European governments with digital-sovereignty requirements, investigative journalists, political dissidents, and professionals in security-critical sectors (finance, defense, healthcare).

Other notable launches from Mobile World Congress 2026

Smartphones with radical innovation in design and modularity:
Honor Robot Phone: a 200-megapixel camera mounted on a motorized gimbal arm that unfolds from the chassis, enabling professional capture angles impossible on conventional phones (distortion-free self-portraits, cinema-style stabilized video, panoramas without digital seams).
Motorola Razr and Edge (FIFA World Cup 26 Collection): special editions with the official tournament logo, a customized event interface, and themed colors.
Xiaomi 17 Ultra: European debut with flagship specifications; price to be announced but competitive with the Samsung Galaxy S26 Ultra.
Nothing Phone 4A: a more affordable version of the Phone 4 with bold colors (the metallic pink stands out) and a smaller but functional Glyph Matrix.
Unihertz Titan Elite 2: a full physical keyboard (BlackBerry nostalgia) in a modern form factor running Android 16.
Vivo X300 Ultra: a 200-megapixel camera and a global launch outside China, the first time Vivo brings a flagship of this kind to Western markets.
Tecno Atom (magnetic modular): a system of interchangeable magnetic accessories inspired by the old Moto Mods (projectors, additional cameras, extended batteries) without sacrificing everyday portability.
Tecno Power Neon: incorporates real neon lighting using low-voltage inert-gas technology; a retro-futuristic cyberpunk design; the first phone with physical neon since 2003.
Legion Gold Fold (concept): a gaming-focused foldable with a 240 Hz display and integrated ultrasonic triggers.
Laptops and tablets with modular displays and integrated AI

Lenovo ThinkBook IPC module: magnetic interchangeable ports for attaching a second portable display; dynamic workspace extension without cables.
Lenovo Yoga Book Pro D: dual displays with glasses-free 3D visualization, reinforced multitasking productivity, and in-air gesture recognition.
Asus VivoBook Pad XPS: a laptop-style tablet with a larger 15.6-inch OLED display and an improved detachable mechanical keyboard.

Advanced chips and connectivity: preparing for 6G

Qualcomm FastConnect 8800: a Wi-Fi 7 module with built-in AI to automatically optimize bandwidth by content type.
Qualcomm X105 5G: a modem 15% faster, 20% smaller, and 30% more efficient than the X100, designed as a bridge to 5G Advanced (5G-A).
Snapdragon Wear Elite: a chip aimed at wearables and robotics, with low-latency processing (under 10 ms), ideal for smartwatches, AI earbuds, and service robots.

Samsung and the anti-snooping display

Samsung presented display technology that prevents people beside the user from seeing the content. The innovation changes how pixels emit light: an "optical ring" placed around each pixel blurs the image when viewed from side angles. From the front, the image is perfectly clear; from any other angle, it is blurry and illegible.

"This solves the privacy problem on public transit, in shared offices, and at airports. You can finally work with sensitive information without worrying about who is looking over your shoulder." — Pablo Berruecos

Partial conclusion: Mobile World Congress 2026 cemented privacy, security, and satellite connectivity as non-negotiable pillars of mobile telephony.
Nothing Phone 4 democratizes transparent design; MediaTek integrates satellite into 5G chips; Motorola bets on GrapheneOS; Europe leads with Linux phones. The question is no longer "how fast is your phone?" but "how private and resilient is it?"

Humanoid robots and smart earbuds: AI becomes physical

Mobile World Congress 2026 was not only about phones. Artificial intelligence materialized as physical hardware: humanoid robots capable of moonwalking, earbuds that analyze ear-canal geometry to prevent hearing loss, pet devices with gesture-triggered two-way calls, and extended-reality glasses with real-time translation. Vincent and Pablo explore these innovations with a critical eye.

Honor Robot Humanoid: a biped that can dance and serve

Honor presented a fully functional bipedal humanoid robot capable of dancing (including a moonwalk that went viral), keeping its balance on uneven surfaces, and performing basic service tasks. Pablo recalls a particularly talked-about moment: a humanoid robot landing a "low blow" on a boxer during a demonstration, probably due to a calibration error, which spawned instant memes. Motor capabilities: stable walking, low-speed running, climbing stairs, and dancing preprogrammed choreographies. Expected use cases: hotel service, hospital assistance, industrial cleaning, and event entertainment. Current limitations: AI processing speed for complex decisions, battery life of four to six hours in continuous operation, and a cost prohibitive for end consumers (above $50,000).

PetFoam: two-way communication for pets

PetFoam is a device that lets pets "call" their owners through AI-recognized gestures. For example, a dog that scratches a specific sensor can trigger a video call to its owner.
The owner, in turn, can respond by voice while the pet sees the image on a small built-in screen. The central use case is clear: pets in a possible emergency (injured, trapped) can raise an alert without another person's direct intervention.

Google Iris XR: extended-reality glasses with simultaneous translation

Google presented the Iris XR prototype, extended-reality glasses (not full virtual reality) with AI-powered real-time translation built in. Use cases include international travel, multilingual meetings, and accessibility for deaf people (with real-time captions of conversations). There is no commercial launch date yet, and the glasses are only available in controlled demos at MWC.

Smart earbuds that analyze your ear: risks and benefits

Earbuds are evolving from mere passive accessories into advanced bioacoustic devices. MWC 2026 showcased models capable of analyzing the unique geometry of the user's ear canal to dynamically adjust noise cancellation, personalized equalization, and decibel exposure. This creates a unique acoustic profile per ear, minimizing cumulative listening fatigue and the risk of permanent hearing loss.

Technical features of these earbuds: Adaptive noise cancellation detects specific frequencies in the environment (bus engine, wind, crowds, industrial machinery) and attenuates them selectively without isolating the listener completely. Real-time decibel measurement issues visual or haptic alerts if the volume exceeds 85 dB for more than 30 minutes, following the safe limit suggested by the WHO. Ear-shape analysis adjusts the pressure in the ear canal and modifies the bandwidth to the individual's morphology, reducing fatigue during extended use of more than eight hours a day.
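The 85 dB / 30-minute alert rule described above can be sketched as a tiny exposure monitor. This is a minimal illustration of the rule as stated in the episode; the class, method names, and streak-based logic are assumptions for the sketch, not any vendor's actual firmware.

```python
# Minimal sketch of the exposure-alert rule described above: warn when
# the measured level stays above 85 dB for more than 30 consecutive
# minutes. Illustrative only; not a real earbud SDK.
from dataclasses import dataclass

ALERT_DB = 85.0        # safe-limit threshold cited in the episode (WHO)
ALERT_MINUTES = 30.0   # duration after which the alert fires

@dataclass
class ExposureMonitor:
    loud_minutes: float = 0.0  # consecutive minutes above ALERT_DB

    def sample(self, level_db: float, minutes: float) -> bool:
        """Feed one measurement window; return True if an alert should fire."""
        if level_db > ALERT_DB:
            self.loud_minutes += minutes
        else:
            self.loud_minutes = 0.0  # streak broken by quieter audio
        return self.loud_minutes > ALERT_MINUTES

monitor = ExposureMonitor()
monitor.sample(90, 20)          # 20 loud minutes: still below the limit
alert = monitor.sample(88, 15)  # 35 consecutive loud minutes: alert fires
assert alert is True
```

A real implementation would more likely track a weighted noise dose over a rolling window than a simple consecutive streak, but the streak captures the rule exactly as the show notes state it.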
Personalized equalization compensates for each user's natural hearing deficiencies at particular frequencies.

Risks to hearing health: pressure on the Eustachian tube

Vincent warns about a risk rarely mentioned by manufacturers: total noise cancellation creates an airtight seal that builds pressure in the ear canal. That pressure engages the Eustachian tube, which regulates pressure in the middle ear. Prolonged use with an airtight seal can: compromise the ear's natural ability to regulate pressure (similar to what happens on an airplane); create dependence on artificial pressure to "hear correctly"; cause cumulative listening fatigue from excess internal vibration; and increase the risk of middle-ear infections from trapped moisture.

"Total noise cancellation isolates you from the world. Smart cancellation keeps you connected to your surroundings while you enjoy your music. The difference is literally between life and an accident." — Vincent Quezada

A real case in Chapultepec: auditory blindness and a near collision

Pablo recounts a personal experience: he was walking in Chapultepec, in Mexico City, with earbuds set to full active noise cancellation. He did not hear a person shouting at him to avoid a collision. By the time he finally saw her, it was too late and they collided. He reflects that, had he been on a bicycle and unable to hear the bell of the tourist mini-train that announces its passing, he could have had to brake abruptly and caused an accident. His recommendation is clear: never use total noise cancellation in public spaces such as streets, bike lanes, or public transit. Activate it only in controlled, safe environments (office, home, airplane). Always keep cancellation at a medium level that lets you hear critical alerts from your surroundings (horns, sirens, shouted warnings).

"Be careful. If you ride the bus or public transit and you end up sitting behind the engine, the noise becomes unbearable. The filters leave you with just your music and the genuinely important parts of your surroundings. But if you isolate yourself completely, you don't know whether someone is warning you about a real danger." — Pablo Berruecos

Strategic alliances toward 6G: Nokia, NTT, Vodafone, and more

MWC 2026 did not just present devices; it announced strategic alliances that define the road to an AI-native 6G. Nokia, NVIDIA, NTT, NTT Docomo, Vodafone, BT, Elisa, and other operators announced collaborations to adopt AI-RAN technologies (artificial intelligence in radio access networks) that improve network performance and support the exponential growth of mobile AI.

What is 6G and when will it arrive?

Vincent and Pablo clear up a common confusion: 5G Advanced (5G-A) is not a new generation but a refinement of existing 5G with more speed, lower latency, and better energy efficiency. The real generational leap will be 6G, projected for 2030-2032 according to the consensus of the operators present at MWC. Expected 6G characteristics: theoretical speeds 100 times faster than 5G (up to 1 Tbps), latencies under 0.1 ms (versus 1 ms on 5G), hybrid 5G-satellite connectivity as standard, AI orchestration native to the network, and optical photonics to cut energy consumption. Required infrastructure: an estimated investment of 100 billion euros globally, a complete overhaul of cell towers, and integration of quantum computing into network cores. Differentiating use cases: level-5 autonomous vehicles (no human intervention), real-time remote robotic surgery, persistent extended reality (a functional metaverse), and smart cities with millions of synchronized IoT sensors.

"6G won't be better just because it's 6G. It will be better because it will be intelligent, context-aware, and able to self-optimize in real time without human intervention."
— Vincent Quezada

Funding and optical photonics: NTT Group's bet

AWS announced the expansion of its infrastructure in emerging markets (India, Indonesia, Nigeria). Vodafone, the GSMA, and other telecommunications bodies secured funding of up to 100 million euros specifically for developing 6G standards with AI integrated from the design stage. This investment signals a shift: private actors are funding standards that were previously under almost exclusive government control. For its part, NTT Group (Japan) presented its advances in optical photonics and optical wireless networks (IOWN: Innovative Optical and Wireless Network). The goal is to reduce the energy consumption of data centers, which has soared with the intensive use of artificial intelligence. Notable projects include:
Photonic-electronic convergence: improves data-center energy efficiency by up to 60% compared with traditional electronics.
Optical quantum computing: large-scale computation with a smaller physical footprint, higher speed, and lower long-term cost.
AI-based resilient infrastructure: self-healing networks that detect and resolve failures without human intervention.
This is no longer just about launching products, but about redefining how telecommunications, mobility, and technology integrate to sustain the explosion of AI without collapsing power grids worldwide.

General conclusion: toward a more conscious technology

The March 6, 2026 episode captures a hinge moment.
Local AI (CoPaw) enables privacy without sacrificing productivity; GPT-5.4 extends context to levels unthinkable just a year ago; the MacBook Neo democratizes access to macOS; the Iran-Israel conflict shows how AI-generated disinformation paralyzes public understanding while selective censorship hides reality; and Mobile World Congress 2026 enshrines privacy, satellite security, and 6G as pillars of the mobile future. Motorola abandons Android for GrapheneOS. Linux phones arrive in Europe. MediaTek integrates satellite connectivity into 5G chips. Smart earbuds analyze ear geometry. Humanoid robots moonwalk. Nokia and NVIDIA lay the groundwork for 6G. At the same time, geopolitics and disinformation reveal that an AI without ethical constraints becomes a weapon of mass control. The challenge of 2026 is not technological but human: choosing between monitored convenience and conscious privacy. The 6G alliances will determine who controls the planet's digital infrastructure. Censorship on social networks shows that truth is as scarce as peace. And tools like CoPaw offer an alternative: full control of your data without depending on corporations willing to trade their ethics for military contracts. Listen to the full episode on One Digital and join the conversation with the hashtags #PodcastONE, #OneDigital, and #MWC2026. The post Podcast ONE: March 6, 2026 appeared first on OneDigital.
In this episode of The Defiant Podcast, Camila Russo sits down with Jing Wang to discuss how Optimism is evolving and why the debate over what counts as a "real" Ethereum L2 might be missing the point.

Jing argues that the most important question isn't whether a chain is an L1, L2, or sidechain. It's whether the architecture actually serves users and real-world use cases.

"If it looks like an L1, we'll build that. If it looks like an L2, we'll build that."

In the conversation we cover:
Why Optimism now sees itself as a network of blockchains (the Superchain)
The debate around Ethereum L2 decentralization sparked by Vitalik Buterin
Why institutions are already using decentralized rails
Why ZK proofs are the future
And why Jing believes finance inevitably moves on-chain

Nexo is a premier digital assets wealth platform that helps clients build, manage, and preserve their wealth through advanced interest-generating products, crypto-backed credit, advanced trading tools, and 24/7 client care. Get started at https://nexo.com/defiant

Your Web3 product deserves solid payment infrastructure. Global on/off-ramps, custom APIs, and DeFi connectivity trusted by the biggest names in crypto: https://mercuryo.io/
In this talk, Aditya, an experienced AI researcher and engineer, shares his technical evolution, from his roots in embedded systems to building complex, large-scale AI agent architectures. We explore the practical challenges of enterprise AI adoption, the shifting economics of LLMs, and the infrastructure required to deploy reliable multi-agent systems.

You'll learn about:
- The ROI of Fine-Tuning: How to decide between specialized small models and general-purpose APIs based on cost and latency.
- Agent MLOps Stack: The essential roles of guardrails, data lineage, and auditability in AI workflows.
- Reliability in High-Stakes Verticals: Navigating the unique AI deployment challenges in the legal and healthcare sectors.
- Evaluation Frameworks: How to design robust evals for multi-tenancy systems at scale.
- Human-in-the-Loop: Strategies for aligning "LLM as a judge" with human-labeled ground truth to eliminate bias.
- The Future of AGI: What to expect from the next wave of multimodal agents and autonomous systems.

TIMECODES:
00:00 Aditya's path from embedded systems to AI
08:52 Enterprise AI research and adoption gaps
13:13 AI reliability in legal and healthcare
19:16 Specialized models and agent governance
24:58 LLM economics: Fine-tuning vs. API ROI
30:26 Agent MLOps: Guardrails and data lineage
36:55 Iterating on agents with user feedback
43:30 AI evals for multi-tenancy and scale
50:18 Aligning LLM judges with human labels
56:40 Agent infrastructure and deployment risks
1:02:35 Future of AGI and multimodal agents

This talk is designed for Machine Learning Engineers, Data Scientists, and Technical Product Managers who are moving beyond AI prototypes and into production-grade agentic workflows.
It is especially relevant for those working in regulated industries or managing high-volume API budgets.

Connect with Aditya:
- LinkedIn - https://www.linkedin.com/in/aditya-gautam-68233a30/

Connect with DataTalks.Club:
- Join the community - https://datatalks.club/slack.html
- Subscribe to our Google calendar to have all our events in your calendar - https://calendar.google.com/calendar/r?cid=ZjhxaWRqbnEwamhzY3A4ODA5azFlZ2hzNjBAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ
- Check other upcoming events - https://lu.ma/dtc-events
- GitHub: https://github.com/DataTalksClub
- LinkedIn - https://www.linkedin.com/company/datatalks-club/
- Twitter - https://twitter.com/DataTalksClub
- Website - https://datatalks.club/
Woodson Martin, CEO of OutSystems, argues that successful enterprise AI deployments rarely rely on standalone agents. Instead, production systems combine AI agents with data, workflows, APIs, applications, and human oversight. While claims that "95% of agent pilots fail" are common, Martin suggests many of those pilots were simply low-commitment experiments made possible by the low cost of testing AI. Enterprises that succeed typically keep humans in the loop, at least initially, to review recommendations and maintain control over decisions. Current enterprise use cases for agents include document processing, decision support, and personalized outputs. When integrated into broader systems, these applications can deliver measurable productivity gains. For example, Travel Essence built an agentic system that reduced a two-hour customer planning process to three minutes, allowing staff to focus more on sales and helping drive 20% top-line growth. Martin also believes AI will pressure traditional SaaS seat-based pricing and accelerate custom software development. In this environment, governed platforms like OutSystems can help enterprises adopt "vibe coding" while maintaining compliance, security, and lifecycle management.

Learn more from The New Stack about the latest developments around enterprise adoption of vibe coding:
How To Use Vibe Coding Safely in the Enterprise
5 Challenges With Vibe Coding for Enterprises
Vibe Coding: The Shadow IT Problem No One Saw Coming

Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
From WEDI's 2026 Winter Forum, Michael chats with three payer representatives who discuss how access APIs are improving the patient experience by making data easier to access, use, and share across care journeys.

Tom Loomis, Enterprise Architecture - Interoperability, Evernorth
Nancy Bevin, Director, Provider Connectivity, Medica
Ron Wampler, Executive Director, Interoperability, Aetna, a CVS Health Company
Luis and Albert sit down to talk about how they are using artificial intelligence in their day-to-day work as media buyers in 2026. No theory: concrete tools, workflows they already apply with clients, and an honest comparison of which AI is worth your money and which is not. In this episode you'll learn:
The reception to our recent post on Code Reviews has been strong. Catch up!

Amid a maelstrom of discussion on whether or not AI is killing SaaS, one of the top publicly listed SaaS companies in the world has just reported record revenues, clearing well over $1.1B in ARR for the first time with a 28% margin. As we comment on the pod, Aaron Levie is the rare public-company CEO equally at home in both the worlds of Silicon Valley and Wall Street/Main Street, by day helping 70% of the Fortune 500 with their Enterprise Advanced Suite, yet by night often found in the basements of early startups and tweeting viral insights about the future of agents.

Now that Cursor, Cloudflare, Perplexity, Anthropic, and more have made Filesystems and Sandboxes and various forms of "Just Give the Agent a Box" cool (not just cool; it is now one of the single hottest areas in AI infrastructure, growing 100% MoM), we find it a delightfully appropriate time to do the episode with the OG CEO who has been giving humans and computers Boxes since he was a college dropout pitching VCs at a Michael Arrington house party.

Enjoy our special pod, with fan-favorite returning guest/guest cohost Jeff Huber!

Note: We didn't directly discuss the AI vs SaaS debate - Aaron has done many, many, many other podcasts on that, and you should read his definitive essay on it.
Most commentators do not understand SaaS businesses because they have never scaled one themselves and deeply reflected on what the true value proposition of SaaS is.

We also discuss Your Company is a Filesystem.

We also shout out CTO Ben Kus and the AI team, who talked about the technical architecture and will return for AIE WF 2026.

Full Video Episode

Timestamps

* 00:00 Adapting Work for Agents
* 01:29 Why Every Agent Needs a Box
* 04:38 Agent Governance and Identity
* 11:28 Why Coding Agents Took Off First
* 21:42 Context Engineering and Search Limits
* 31:29 Inside Agent Evals
* 33:23 Industries and Datasets
* 35:22 Building the Agent Team
* 38:50 Read Write Agent Workflows
* 41:54 Docs Graphs and Founder Mode
* 55:38 Token FOMO Culture
* 56:31 Production Function Secrets
* 01:01:08 Film Roots to Box
* 01:03:38 AI Future of Movies
* 01:06:47 Media DevRel and Engineering

Transcript

Adapting Work for Agents

Aaron Levie: Like, you don't write code, you talk to an agent and it goes and does it for you, and you maybe, at best, review it. That's probably largely not even what you're doing. What's happening is we are changing our work to make the agents effective. In that model, the agent didn't really adapt to how we work. We basically adapted to how the agent works. All of the economy has to go through that exact same evolution. Right now, it's a huge asset and an advantage for the teams that do it early and that are kind of wired into doing this, 'cause you'll see compounding returns. But that's just gonna take a while for most companies to actually go and get this deployed.

swyx: Welcome to the Latent Space pod. We're back in the Chroma studio with, uh, Chroma CEO Jeff Huber. Welcome, returning guest, now guest host.

Aaron Levie: It's a pleasure. Wow. How'd you get upgraded to, uh, to that?

swyx: Because he's like the perfect guy to be guest host for you.

Aaron Levie: That makes sense, actually. We love context.
We both really love context. We really do. We really do.

swyx: Uh, and we're here with, uh, Aaron Levie. Welcome.

Aaron Levie: Thank you. Good to, uh, good to be [00:01:00] here.

swyx: Uh, yeah. So we've all met offline and chatted a little bit, but it's always nice to get these things in person and in conversation. Yeah. You just started off with so much energy. You're super excited about agents.

Aaron Levie: I love agents.

swyx: Yeah. OpenClaw just got, uh, got bought by OpenAI. No, not bought, but you know, you know what I mean?

Aaron Levie: Some, some, you know, acquihire.

swyx: Executive hire.

Aaron Levie: Executive hire. Okay. Executive hire.

swyx: Hey, that's my term. Okay. Um, what are you pounding the table on on agents? You have so many insightful tweets.

Why Every Agent Needs a Box

Aaron Levie: Well, the thing that we get super excited by, which I think should be relatively obvious, is: we've built a platform to help enterprises manage their corporate files, the permissions of who has access to those files, and the sharing and collaboration of those files. All of those files contain really, really important information for the enterprise. It might have your contracts, it might have your research materials, it might have marketing information, it might have your memos. All that data obviously has [00:02:00] predominantly been used by humans. But there's been one really interesting problem, which is that humans only really work with their files during an active engagement with them, and then they kind of go away and you don't really see them for a long time. And all of a sudden, uh, with the power of AI and AI agents, all of that data becomes extremely relevant as this ongoing source of answers to new questions, of data that will transform into something else that produces value in your organization.
It contains the answer for the new employee that's onboarding, that needs to ramp up on a project. It contains the answer to the right thing to sell a customer when you're having a conversation with them. It contains the roadmap information that's gonna produce the next feature. So all that data that previously we've been just sort of storing and occasionally forgetting about, 'cause we're only working on the new active stuff: all of that information becomes valuable to the enterprise. And it's gonna become extremely valuable to end users, because now they can have agents go find what they're looking for and produce new [00:03:00] value and new data on that information. And it's gonna become incredibly valuable to agents, because agents can roam around and do a bunch of work, and they're gonna need access to that data as well. And sometimes that will be an agent that is working on behalf of you, effectively as you, accessing all of the same information that you have access to and operating as you in the system. And then sometimes there are gonna be agents that are effectively autonomous and kind of run on their own, and you're gonna collaborate and work with them kind of like you would another person. OpenClaw being the most recent, and maybe the first real, sort of updating-everybody's-views-of-this-landscape version of what that could look like, which is: okay, I have an agent. It's on its own system, it's on its own computer, it has access to its own tools. I probably don't give it access to my entire life. I probably communicate with it like I would an assistant or a colleague, and then it has this sandbox environment.
So all of that has massive implications for a platform that manages that [00:04:00] enterprise data. We think it's gonna transform how we work with all of the enterprise content that we work with, and we just have to make sure we're building the right platform to support that.

swyx: The sort of shorthand I put it as is: as people build agents, everybody's just realizing that every agent needs a box.

Aaron Levie: Yes.

swyx: And it's nice to be called Box and just give everyone a box.

Aaron Levie: Hey, if we can make that go viral, I think that terminology...

swyx: That's the tagline. Every agent needs a box.

Aaron Levie: Every agent needs a box. If we can make that the headline of this, I'm fine with this. And that's the billboard I wanna...

swyx: Yeah, exactly. Every agent needs a box.

Aaron Levie: I like it. Can we ship this?

swyx: Okay, let's do it. Yeah.

Aaron Levie: Uh, my work here is done and I got the value I needed outta this podcast. Drinks.

swyx: Yeah.

Agent Governance and Identity

Aaron Levie: But, um, the thing that we kind of think about is, whether you think the number is 10x or a hundred x or whatever the number is, we're gonna have some order of magnitude more agents than people. That's inevitable. It has to happen. So then the question is: what is the infrastructure that's needed to make all those agents effective in the enterprise? Make sure that they are well governed. Make sure they're only doing [00:05:00] safe things with your information. Make sure they're not getting exposed to data that they shouldn't have access to. There are gonna be just incredibly, spectacularly crazy security incidents that will happen with agents, because you'll prompt-inject an agent and sort of find your way through the CRM system and pull out data that you shouldn't have access to.

Jeff Huber: Oh, God.

Aaron Levie: Right?
I mean, that's just gonna happen all over the place, right? So then the thing is: how do you make sure you have the right security, the permissions, the access controls, the data governance? We actually don't yet know exactly, in many cases, how we're gonna regulate some of these agents, right? If you think about an agent in financial services, does it have the exact same financial, sort of, uh, requirements that a human did? Or is the risk fully on the human that was interacting with or created the agent? All open questions. But no matter what, there's gonna need to be a layer that manages the data they have access to, the workflows that they're involved in, pulling up data from multiple systems. This is the new infrastructure opportunity in the era of agents.

swyx: You have a piece on agent identities, [00:06:00] which I think was today, um, which I think a lot of the security people are talking about, right? I always think of this as: well, you need the human "you", and then you need the agent "you".

Aaron Levie: Yes.

swyx: And, well, I don't know if it's that simple, but is Box going to have an opinion on that, or are you just gonna be like, well, we're just the source layer? Let, like, Okta or Auth0 handle that.

Aaron Levie: I think we're gonna have an opinion, and we will work with generally wherever the contours of the market end up. Um, and the reason that we're gonna have an opinion, more than on other topics probably, is because one of the biggest use cases for why your agent might need an identity is for file system access. So thus we have to think about this pretty deeply. And I think, unless you're in our world thinking about this particular problem all day long, it might be, you know, like: why is this such a big deal?
And the reason why it's a really big deal is because sometimes people say, well, just give the agent an account on the system and treat it like every other type of user on the system. The [00:07:00] problem is that I, as Aaron, don't really have any responsibility over anybody else's Box account in our organization. I can't see the Box account of any other employee that I work with. I am not liable for anything that they do. And they have, you know, strict privacy requirements on everything that they work on. Agents don't have those properties. The person who creates the agent probably is gonna, for the foreseeable future, take on a lot of the liability of what that agent does. That agent doesn't deserve any privacy, because it can't fully operate autonomously and it doesn't have any legal responsibility. So thus you can't just be like, oh, well, I'll just create a bunch of accounts and then I'll kind of work with that agent and I'll talk to it occasionally. You need oversight of that. And so then the question is: how do you have a world where you have oversight of the agent sometimes, but what if that agent goes and works with other people? That person over there is collaborating with the agent on something you shouldn't have [00:08:00] access to. So we have all of these new boundaries that we're gonna have to figure out. So far we've been in easy mode. We've hit the easy button with AI, which is: the agent just is you. When you're in Claude Code, and you're in Cursor, and you're in Codex, the agent is you. You're auth'ing into your services. It can do everything you can do. That's the easy mode. The hard mode is: agents are kind of running on their own.
People check in with them occasionally; they're doing things autonomously. How do you give them access to resources in the enterprise without dramatically increasing the security risk, and the risk that you might expose the wrong thing to somebody? These are all the new problems we have to get solved. I like the identity layer and identity vendors as being a solution to that, but we'll need some opinions as well, because so many of the use cases are these collaborative file system use cases: how do I give an agent a subset of my data? Give it its own workspace as well, 'cause it's gonna need to store its own information that would be relevant for it. And how do I have the right oversight into that? [00:09:00]
Jeff Huber: One thing which I think is kind of interesting to think about is how humans work, right? Like, I may not just give you access to the whole file. I might sit next to you and scroll to this one part of the file and just show you that one part.
swyx: Partial file access.
Jeff Huber: I'm just saying, I think RAG does seem to be dead, right? If you wanna say something is dead, probably RAG is dead. And the auth story to me seems incredibly unsolved and unaddressed by the existing crop of AI vendors.
Aaron Levie: Yeah, I mean, you're obviously taking it to a really low-level limit that we probably need to solve for. Yeah. And we built an access control system that was kind of its own little world for a long time. And the idea was this: it's a many-to-many collaboration system where I can give you any part of the file system. And it's a waterfall model. So if I give you access higher up in the system, you get everything below.
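The waterfall model described here, where a grant at any node in the folder tree implicitly covers everything beneath it, can be sketched as a simple parent-pointer walk. This is a minimal illustration, not Box's actual implementation; all the names are made up.

```python
# Hypothetical sketch of a "waterfall" access model: granting a user
# access at any folder implicitly grants everything beneath it.
class Node:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.grants = set()  # users granted directly at this node

    def can_access(self, user):
        # Walk up toward the root: a grant here or anywhere above wins.
        node = self
        while node is not None:
            if user in node.grants:
                return True
            node = node.parent
        return False

root = Node("AllFiles")
deals = Node("Deals", parent=root)
deal_a = Node("DealA", parent=deals)

deals.grants.add("sally")                # Sally sees everything under Deals
assert deal_a.can_access("sally")        # inherited from the Deals folder
assert not deal_a.can_access("agent-1")  # the agent got no grant anywhere
```

The agent-identity question in the conversation is exactly where in this tree an agent's grant should sit, and who is accountable for placing it there.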
And that created immense flexibility, because I can point you to any layer in the tree, but then you're gonna get access to everything below it. And that [00:10:00] mostly works in this world. But you do have to manage this issue, which is: how do I create an agent that has access to some of my stuff and somebody else's stuff as well?
swyx: Mm-hmm.
Aaron Levie: And which parts do I get to look at as the creator of the agent? These are just brand-new problems. With a human there, it was really easy to do. If the three of us were all sharing, there'd be a Venn diagram where we'd have an overlapping set of things we've shared, but then we'd have our own things we shared with each other. In an agent world, somebody needs to take responsibility for what that agent has access to and what it's working on. These are probably some of the most boring problems for 98% of people on the internet, but they will be the problems that are the difference between whether you can actually have autonomous agents in an enterprise context...
swyx: Yeah.
Aaron Levie: ...that are not leaking your data constantly.
swyx: No, I mean, you know, I run a very, very small company for my conference and we already have data sensitivity issues.
Aaron Levie: Yes.
swyx: And some of my team members cannot see what the others see. I can't imagine what it's like to run a Fortune 500, where you have to [00:11:00] worry about this. I'm just kind of curious: you talk to a lot, like 70, 80% of the Fortune 500 are your customers.
Aaron Levie: Yep. 67%, just so we're being very precise.
swyx: So, yeah, I'm rounding up.
I'm projecting, too, for...
Aaron Levie: ...the government.
swyx: I'm projecting to the end of the year.
Aaron Levie: Okay.
swyx: There you go.
Aaron Levie: You do make it sound like we've gotta be on this, like we're taking way too long to get to 80%.
swyx: Well, no. I mean, so, like, how are they approaching it? Right? Because you don't have a final answer yet.
Why Coding Agents Took Off First
Aaron Levie: Well, okay, so this is actually the stark reality that, unfortunately, pours a little water on the party.
swyx: Yes.
Aaron Levie: We all in Silicon Valley have the absolute best conditions possible for AI, ever. And I think we all saw the Dwarkesh, you know, kind of Dario podcast, and this idea of AI coding: why has that taken off, and why are we not yet fully seeing it everywhere else? Well, look, if you just enumerated the list of properties that AI coding has and then compared it to other [00:12:00] knowledge work... let's just go through a few of them. Generally speaking, you bring on a new engineer, they have access to a large swath of the code base. A new engineer comes on and they can just go find the stuff they need to work with. It's a fully text-in, text-out medium. It's just gonna be text at the end of the day, so it's really great from the standpoint of what the agent can work with. Obviously the models are super trained on that dataset. And the labs themselves have a really strong, self-reinforcing, positive flywheel for why they need to do agentic coding deeply.
So then you get better tooling, better services. The actual developers of the AI are daily users of the thing they're working on. Versus, like, there are probably only seven Claude Cowork legal plugin users at Anthropic on any given day, but there are a couple thousand Claude Code users every single day. So just think about which one they're getting more feedback on, all day long. So you just go through this list. Everybody who's a [00:13:00] developer is by definition technical, so they can go install the latest thing. We're all generally online, or at least, you know, the weird ones of us are, and we're all talking to each other, sharing best practices. That's already, like, eight differences versus the rest of the economy. Every other part of the economy has, like, six to seven headwinds relative to that list. You go into a company, you're a banker in financial services: you have access to a tiny little subset of the total data that's gonna be relevant to do your job. And you have to go talk to a bunch of people to get the right data to do your job, because Sally didn't add you to that deal room folder. And the information is actually in a completely different organization that you now have to go run into. You have this endless list of access controls and security, as you talked about. You have a medium which is not just text, right? You have a Zoom call where you're getting all of the requirements from the customer. You have a lot of in-person conversations, you're doing in-person sales, and, like, how do you ever [00:14:00] digitize all of that information?
Um, you know, I think a lot of people got upset with this idea that the code base has all the context. I don't know if you followed some of that conversation that went viral? It's not that simple, the code base doesn't have all the knowledge, but you're a lot better off than you are with other areas of knowledge work. Like, we have documentation practices, you write specifications. Those things don't exist for 80% of the work that happens in the enterprise. That's the divide we have: AI coding has fully reached escape velocity in terms of how powerful this stuff is, and now we're gonna have to find a way to bring that same energy and momentum to all these other areas of knowledge work. The tools aren't there, the data's not set up to be there, the access controls don't make it that easy. The context engineering is an incredibly hard problem because, again, you have access control challenges, you have different data formats, you have end users that are gonna need to be trained through this as opposed to adopting [00:15:00] these tools in their free time. That's where the Fortune 500 is. And so I think we have to be prepared as an industry for a multi-year march to bring agents to the enterprise for these workflows. And probably the thing we've learned most in coding that the rest of the world is not yet ready for, I mean, they'll have to be ready for it, because it's just gonna inevitably happen, is this: think about the practice of coding today versus two years ago. It's probably the most changed workflow in maybe the history of time, in terms of how much it's changed, right?
swyx: Yeah.
Aaron Levie: Like, has any workflow in the entire economy changed that quickly, in terms of the amount of change? At least in knowledge work, there's very rarely been an event where one piece of technology and work practice has so fundamentally changed what you do. Like, you don't write code; you talk to an agent and it goes and [00:16:00] does it for you, and at best you maybe review it. And even that's probably largely not even what you're doing. What's happening is we are changing our work to make the agents effective. In that model, the agent didn't really adapt to how we work. We basically adapted to how the agent works.
swyx: Mm-hmm.
Aaron Levie: All of the economy has to go through that exact same evolution. The rest of the economy is gonna have to update its workflows to make agents effective: to give agents the context they need, to figure out what kind of prompting works, to figure out how you ensure the agent has the right access to information to execute on its work. This is not the panacea people were hoping for, where the agent drops in and just automates your life. You have to basically re-engineer your workflow to get the most out of agents, and that's just gonna take multiple years across the economy. Right now it's a huge asset and advantage for the teams that do it early and are kind of wired into doing this, [00:17:00] 'cause you'll see compounding returns. But that's just gonna take a while for most companies to actually go and get deployed.
swyx: I love pushing back. I think that is what a lot of technology consultants love to hear, right? To be first to embrace the AI, to get to the promised land, you must pay me so much money to adopt the prescribed way of conforming to the agents.
Aaron Levie: Yes.
And I worry that you will be eclipsed by someone else who says, no, come as you are.
Aaron Levie: Yeah.
swyx: And we'll meet you where you are.
Aaron Levie: And what was the thing that went viral a week ago? OpenAI, probably, is hiring FDEs to go into the enterprise. And then Anthropic is embedded at Goldman Sachs. So if the labs are having to do this, if the labs have decided that they need to hire forward-deployed engineers and professional services, then I think that's a pretty clear indication that there's no easy mode of workflow transformation. So, to your point, I think this is actually a market opportunity for new professional services and consulting [00:18:00] firms that are, like, agent-native, and they go into organizations and figure out how to re-engineer your workflows to make them more agent-ready, get your data into the right format, and reconstruct your business processes. So you're not doing most of the work; you're telling agents how to do the work and then you're reviewing it. But I haven't seen the thing that can just drop in and let you not go through those changes.
swyx: I don't know how that kind of sales pitch goes over. You know, you're saying things like, well, in my nice, beautiful walled garden, here's this beautiful Box account that has everything.
Aaron Levie: Yes.
swyx: And I'm like, well, most real life is extremely messy.
Aaron Levie: Sure.
swyx: And, like, poorly named, and there's duplicate, outdated s**t.
Aaron Levie: A hundred percent. No, a hundred percent. So this is, I mean, we agree that getting to the beautiful garden is gonna be tough.
swyx: Yeah.
Aaron Levie: There's also the other end of the spectrum, where it's just a technical impossibility to solve.
The agent truly cannot get enough context to make the right decision in the [00:19:00] incredibly messy land. There's no AGI that will solve that. So we're gonna have to land somewhere in between, which is: we all collectively get better at documentation practices, at having authoritative, relatively up-to-date information and putting it in the right place. Agents will certainly cause us to be much better organized around how we work with our information, simply because the severity of the agent pulling the wrong data will be too high, and the productivity gain you'll miss out on by not doing this will be too high as well; your competition will just do it and they'll have higher velocity. And we see this a lot firsthand. We built a series of agents internally that can have access to your full Box account: you give one a task, and it can go find whatever information you're looking for and work with it. And, you know, thank God for the model progress, because if you gave that task to an agent nine months ago, you'd just get lots of bogus answers. It's gonna say, hey, here are [00:20:00] five documents that all kind of smell like the right thing, but you're putting me on the clock, 'cause my system prompt says, be pretty smart, but also try to respond to the user, so it's gonna respond. And it's like, ah, it got the wrong document. And you do that once or twice as a knowledge worker and you're just...
swyx: Never again.
Aaron Levie: Never again. You're just done with the system.
swyx: Yeah. It doesn't work.
Aaron Levie: It doesn't work.
And so, you know, Opus 4.6 and Gemini 3.1 Pro and whatever the latest GPT-5.3 will be, those things are getting better and better, and they're using better judgment. And with all of these updates to the agentic tool and search systems, we're seeing very real progress, where the agent can almost smell when something's a little bit fishy in what it's getting. We have this process where we have it fan out, do a bunch of searches, pull up a bunch of data, and then it has to do its own ranking of what the right documents are that it should be working with. And the intelligence level of a model six months ago, [00:21:00] it'd just be throwing a dart: I'm gonna grab these seven files and pray that that's the right answer. And something like an Opus, first 4.5 and now 4.6, is like, no, that one doesn't seem right relative to this question, because I'm seeing some signal that contradicts where the document would normally be in the tree and who should have access. It's doing all of that kind of work for you. But it still doesn't work if you just have a total wasteland of data. It's just not possible, partly 'cause a human wouldn't even be able to do it. So basically, if a really, really smart human could not do that task in five or ten minutes, for a search-retrieval-type task, your agent's not gonna be able to do it any better. You see this all day long.
Context Engineering and Search Limits
swyx: So this touches on a thing that I'm just passionate about, which is context engineering. I'm just gonna let you ramble or riff on context engineering.
If there's anything... he did really good work on context rot, which has really taken over as the term that people use and reference.
Aaron Levie: A hundred percent. All we think about is the context rot problem. [00:22:00]
Jeff Huber: Yeah, there are certainly a lot of ranking considerations. Agentic search, I think, is incredibly promising. Um, yeah, I was trying to generate a question, though. I think I have a question right now, swyx.
Aaron Levie: Yeah, no, but, like, I think there was this moment, I don't know, two years ago, before we knew where the gotchas were gonna be in AI, and someone said, well, infinite context windows will just solve all of these problems, 'cause you'll just give the context window all the data. And it's like, okay, maybe in 2035 this is a viable solution. First of all, it would just simply cost too much. We just can't give the model the 5,000 documents that might be relevant and have it read them all. And I've seen enough to start believing in crazy stuff, so I'm willing to just say, sure, in ten years from now...
swyx: Never say never.
Aaron Levie: In ten years from now, we'll have infinite context windows at a thousandth of the price of today. Let's just believe that that's possible. But we're in reality today. So today we have a context engineering [00:23:00] problem, which is: I've got 200,000 tokens that I can work with, or, I don't even know what the latest graph says is the limit before massive degradation.
swyx: 60k.
Aaron Levie: Okay, I have 60,000 tokens that I get to work with where I'm gonna get accurate information. That's not a lot of tokens for a corpus of 10 million documents that a knowledge worker might have across all the teams and all the projects and all the people they work with.
I have 10 million documents, which, you know, maybe times five pages per document or something like that, puts me at 50 million pages of information, and I have 60,000 tokens. Like, holy s**t.
swyx: Yeah.
Aaron Levie: How do I bridge the 50 million pages of information with the couple hundred pages that I get to work with in that token window? This is such an interesting problem, and that's why so much of the work is actually in the search systems and the databases; that layer has to get so locked in. But models are getting better and, importantly, [00:24:00] knowing when they've done a search and found the wrong thing: they go back, they check their work, they find a way to balance appeasing the user versus double-checking. We have this one test case where we ask the agent to go find ten pieces of information.
swyx: Is this the complex work eval?
Aaron Levie: This is actually not in the eval. We have a bunch of internal benchmark scenarios that we run every time we update our agent. In one of them, I ask it to find all of our office addresses, and I give it the list of the ten offices that we have. And there's not one document that has this. Maybe there should be; that would be a great example of the kind of thing where, over time, companies start to ask: what are the canonical key areas of knowledge that we need to have? We don't seem to have this one document that says, here are all of our offices. We have a bunch of documents that have, like, here's the New York office, and so on. So you task this agent, and you say, I need the addresses for these ten offices. Okay. And by the way, if you do this on any [00:25:00] public chat model, the same outcome is gonna happen.
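The arithmetic behind this gap can be made concrete. A quick back-of-envelope sketch, assuming roughly 500 tokens per page (an assumed round number, not a figure from the conversation):

```python
# Rough arithmetic for the context gap described above: a 10M-document
# corpus at ~5 pages per document versus a ~60,000-token usable window.
docs = 10_000_000
pages_per_doc = 5
tokens_per_page = 500  # assumed density

corpus_tokens = docs * pages_per_doc * tokens_per_page
budget = 60_000

print(f"corpus ~ {corpus_tokens:,} tokens")                 # 25,000,000,000
print(f"fraction that fits: {budget / corpus_tokens:.2e}")  # 2.40e-06
```

In other words, under these assumptions the usable window holds a few millionths of the corpus, which is why retrieval and ranking, rather than raw context size, carry the load.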
But for this kind of query, you say, I need these ten addresses. How many times should the agent go and do its search before it decides whether there's just no answer to this question? Often, and especially with the, let's say, lower-tier models, it'll come back and give you six of the ten addresses, and it'll just say, I couldn't find the other four.
swyx: It doesn't know what it doesn't know.
Aaron Levie: It doesn't know what it doesn't know. Yeah. So, when should the model stop? Should it do that task for literally an hour and just keep cranking through? Maybe I actually made up an office location, and it doesn't know that I made it up, and I don't even know that I made it up. Should it read every single file in your entire Box account until it has exhausted every single piece of information?
swyx: Expensive.
Aaron Levie: These are the new problems that we have. So something like a new Opus model is sort of like: okay, I'm gonna try these types of queries. I didn't get exactly what I wanted. I'm gonna try again. At [00:26:00] some point I'm gonna stop searching, 'cause I've determined that no amount of searching is gonna solve this problem; I'm just not able to do it. And that judgment is a really new thing the model needs to have: when should it give up on a task, because it just can't find the thing? That's the real world of knowledge work problems. And this is the stuff that coding agents don't have to deal with.
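The give-up judgment Levie describes, searching until the answer is complete, the budget runs out, or another round clearly won't help, can be sketched as a bounded retry loop. Everything here is illustrative; `search_once` stands in for whatever retrieval call the agent actually makes.

```python
# Sketch of a give-up policy for the "find 10 office addresses" task:
# keep searching until every item is found, a retry budget is exhausted,
# or a full round produces nothing new. Purely illustrative.
def find_all(items, search_once, max_rounds=5):
    found, missing = {}, set(items)
    for _ in range(max_rounds):
        progress = False
        for item in list(missing):
            hit = search_once(item)
            if hit is not None:
                found[item] = hit
                missing.discard(item)
                progress = True
        if not missing:   # complete answer: stop early
            break
        if not progress:  # no new information this round: give up
            break
    return found, sorted(missing)  # be explicit about what wasn't found
```

The design point is the last line: rather than silently returning six of ten addresses, the caller gets an explicit list of what could not be found.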
Because with coding, you're usually creating net-new information coming right out of the model, for the most part. Obviously it has to know about your code base and your specs and your documentation, but when you deploy an agent on all of your data, now you have all of these new problems you're dealing with.
Jeff Huber: Our follow-up research to context rot is actually on agentic search. We've stress-tested frontier models and their ability to search, and they're not actually that good at searching. So you're sort of highlighting this explore/exploit...
swyx: You're just a Debbie Downer. You say everything doesn't work.
Jeff Huber: Somebody has to be.
Aaron Levie: Um, can I just throw out one more thing that's different between coding and the rest [00:27:00] of knowledge work, that I failed to mention? One other key point is that, at the end of the day, whether you believe we're in a slop apocalypse or whatever: if you've built a working solution, that is ultimately what the customer is paying for. Whether I have a lot of slop, a little slop, or whatever, I'm sure there are lots of code bases we could go into at enterprise software companies that are just crazy slop that humans produced over a 20-year period, but the end customer just gets this little interface they can type into, and it does its thing. Knowledge work doesn't have that property.
If I have an AI model go generate a contract, and I generate that contract 20 times, and all 20 times it's just 3% different, that kind of slop introduces all new kinds of risk for my organization that the code version of that slop didn't introduce. So how do you constrain these models to just the part you want [00:28:00] them to work on, and just do the thing you want them to do? And, you know, in engineering, you can't be disbarred as an engineer, but you can be disbarred as a lawyer. You can do the wrong medical thing in healthcare. There's no equivalent to that in engineering.
swyx: Do you want there to be? Because I've considered it for software.
Jeff Huber: In civil engineering there is, right?
Aaron Levie: Not software. Civil engineering, sure, yeah, for sure. But in any of our companies, you'll be forgiven if you took down the site. We'll do a rollback, and you'll be in a meeting, but you have not been disbarred as an engineer. We don't take away your computer science degree.
Jeff Huber: Blameless postmortem.
Aaron Levie: Yeah, exactly. So maybe we collectively as an industry need to figure out what you're liable for, not legally, but in a management sense, with these agents. All sorts of interesting problems that have to come out. But knowledge work, that's the real hostile environment we're operating in.
swyx: Hmm. I do think a lot of last year's, 2025's, story was the rise of coding agents, and I think [00:29:00] 2026's story is definitely knowledge work agents.
Aaron Levie: Yes, a hundred percent.
swyx: Right? And I think OpenClaw and Cowork are just the beginning. The next one's gonna be absolute craziness.
Aaron Levie: It is.
And it's gonna be, I mean, again, this is gonna be this wave where we try to bring over as many of the practices from coding, because that will clearly be the forefront: tell an agent to go do something, it has access to a set of resources, and you're responsible for reviewing it at the end of the process. That, to me, is the template that goes across knowledge work. Cowork is a great example, OpenClaw's a great example, and you can sort of see what Codex could become over time. These are some really interesting platforms that are emerging.
swyx: Okay. Um, we touched on evals a little bit. You had the report that you were gonna bring up, and then I was gonna go into Box's evals, but go ahead, talk about your agentic search thing.
Jeff Huber: Yeah. Mostly, I think, a few of the insights. Number one: frontier models are not good at search. [00:30:00] Humans have this natural explore/exploit trade-off where we kind of understand when to stop doing something. Also, humans are actually pretty good at forgetting, at pruning their own context, whereas agents are not. An agent, in its context history: if it knew something was bad, and you could even see in the trace the reasoning, hey, that probably wasn't a good idea... if it's still in the trace, still in the context, it'll still do it again. So I think pruning is also gonna be really... it's already becoming a thing, right? Letting agents self-prune the context window is gonna be a big deal.
swyx: Yeah. So don't leave the mistake in there. Cut out the mistake, but tell it that it made a mistake in the past, so it doesn't repeat it.
Jeff Huber: Yeah. But cut it out so it doesn't get distracted by it again.
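The pruning move Huber describes, cutting the failed attempt out of the context so it can't act as a few-shot example while leaving a note so the agent knows not to retry it, might look like this. The message format is a generic stand-in, not any particular framework's API.

```python
# Sketch of context pruning: drop a failed attempt's full trace so it
# can't act as a few-shot example, but keep a one-line note so the
# agent still knows not to repeat the approach. Keys are illustrative.
def prune_failures(messages):
    pruned = []
    for msg in messages:
        if msg.get("failed"):
            # Replace the whole failed trace with a compact warning.
            pruned.append({
                "role": "system",
                "content": f"(pruned) Approach '{msg['approach']}' "
                           "was tried and did not work; do not repeat it.",
            })
        else:
            pruned.append(msg)
    return pruned
```

The trade-off is deliberate: the distracting trace is gone, but the lesson survives, so the agent neither imitates the failure nor rediscovers it.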
'Cause really, it will repeat its mistake just because it's in the context.
Aaron Levie: It's in the context, so now it's a few-shot example. It's like, oh, this is a great thing to go try, even if it didn't work.
Jeff Huber: Exactly.
Aaron Levie: It's Groundhog Day inside these models. I'm gonna keep doing the same wrong thing.
Jeff Huber: I feel like, to use a creator analogy, you're trying to fit a manifold in latent space, which is kind of doing program synthesis, which is kind of how we think about what we're doing. Certain facts might be sort of overly pinning it to certain [00:31:00] sectors of latent space.
swyx: We have a bell; our editor rings a bell every time you say that.
Jeff Huber: So you have to remove those, like...
swyx: You should have a gong, like TBPN or something.
Jeff Huber: You remove those links to kind of give it the freedom to do what it needs to do. But yeah, we'll release more soon.
Aaron Levie: That's awesome.
Jeff Huber: That'll be cool.
swyx: We're a cerebral podcast; people listen to us and think really deep thoughts. So, yeah, we try to keep it subtle.
Aaron Levie: Okay, fine.
Inside Agent Evals
swyx: Um, you guys do have evals. You talked about your office thing, but you've also been promoting the APEX agent eval and your complex work eval. Wherever you wanna take this...
Aaron Levie: APEX is obviously Mercor's agent eval. We supported that by opening up some data for them around how we see these [00:32:00] data workspaces in the regular economy.
So, how do lawyers have a workspace? How do investment bankers have a workspace? What kind of data goes into those? And so we partnered with them on their APEX eval. Our own eval is actually relatively straightforward. We have a set of documents in a range of industries. Previously we did this as a one-shot test of purely the model, and then we realized, based on where everything's going, it's just gotta be more agentic. So now it's a bit more of a test of both our harness and the model. And we have a rubric of a set of things it has to get right, and we score it. And you're just seeing these incredible jumps in almost every single model within its own family: you know, Sonnet 4.6 versus Sonnet 4.5.
swyx: Yeah. We have this up on screen.
Aaron Levie: Okay, cool. I forget the total; it was like a 15-point jump, I think, on the overall.
swyx: Yes. And it's completely held out? The models don't know any of it?
Aaron Levie: This is not in any... there's no public data, which has, you know, benefits. This is just a private eval that we [00:33:00] do, and then we happen to show it to the world. So you can't train against it. And I think it's representative of, obviously, reasoning capabilities, what it's doing with test-time compute, thinking levels, all the context rot issues. So many interesting capabilities that are now improving.
swyx: One sector that you have, that's interesting...
Industries and Datasets
swyx: ...people are roughly familiar with healthcare and legal, but you have public sector in there.
Aaron Levie: Yeah.
swyx: What's that?
Like, what is that?
Aaron Levie: Yeah, we actually test against, I dunno, maybe ten industries. We usually end up just cutting it down to a few that we think have interesting gains. Our public sector one has a lot of, like, government-type documents.
swyx: What is that, government-type documents? Government filings, like a tax return?
Aaron Levie: Probably not tax returns. It would be more of what the government would be using as data. So think about research, those types of data sets. And then we have financial services, for things like data rooms and what would be in an investment prospectus.
swyx: That one you can dogfood.
Aaron Levie: Yeah, exactly. Yes. [00:34:00] So we run the models, now in more of an agent mode, but still with kind of limited capacity, and just try to see, on a like-for-like basis, what the improvements are. And again, we just continue to be blown away by how good these models are getting.
swyx: Yeah, I mean, I think every serious AI company needs something like that: this is the work we do, here's our company eval. And if you don't have it, well, you're not a serious AI company.
Aaron Levie: There's two dimensions, right? There's: how are the models improving, and so which models should you recommend a customer use, which should you adopt? But then, every single day we're making changes to our agents, and you need to know...
swyx: If you regressed.
Aaron Levie: If you regressed, yeah. You know, I've been fully convinced that the whole agent observability and eval space is gonna be a massive space. Super excited for what Braintrust is doing, excited for LangSmith, all the things. And right now it's literally like the AI companies are the customers of these tools.
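Rubric scoring of the kind described here, a checklist of things each answer has to get right, averaged into an overall number, can be sketched in a few lines. The checks below are invented examples, not Box's actual rubric.

```python
# Sketch of rubric-based agent scoring: each task carries a checklist
# of predicates the answer must satisfy; the score is the fraction
# passed, averaged across tasks and scaled to 0-100.
def score_task(answer, rubric):
    passed = sum(1 for check in rubric if check(answer))
    return passed / len(rubric)

def score_eval(results):
    # results: list of (answer, rubric) pairs; returns a 0-100 score
    return 100 * sum(score_task(a, r) for a, r in results) / len(results)

rubric = [
    lambda a: "New York" in a,  # mentions the right office (made-up check)
    lambda a: "10th Ave" in a,  # includes the street (made-up check)
]
print(score_eval([("HQ: 900 10th Ave, New York", rubric)]))  # -> 100.0
```

Running the same rubric over every harness or model change is what makes the "did we regress?" question from the conversation answerable with a number.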
Every enterprise will have this. You'll just [00:35:00] have to have an eval of all of your work: an eval of your RFP generation, an eval of your sales material creation, an eval of your invoice processing. And as you buy or use new agentic systems, you're going to need to know the quality of your pipeline.
swyx: Yeah.
Aaron Levie: So agent evals are a huge, huge market.
swyx: Yeah.
Building the Agent Team
swyx: I'm going to shout out your team a bit. Your CTO, Ben, did a great talk with us last year.
Aaron Levie: Awesome.
swyx: And he's coming back for World's Fair.
Aaron Levie: Oh, cool. Yep.
swyx: Talk about your team; brag a little bit. I think people take these eval numbers and pretty charts for granted, but there are lots of really smart people at work behind all this.
Aaron Levie: The biggest shout-out: we have a couple of folks, Dya and Sidarth, who run this; they're a tag-team duo on our evals. Ben, our CTO, is heavily involved; Yasha, our head of AI; a bunch of folks. And evals are one part of the story. The full agent team [00:36:00] is core to this whole effort. There are probably a few dozen people who are the epicenter, and then there are layers of concentric circles around them: a search team that supports them, an infrastructure team that supports them. And it's starting to ripple through the entire company.
But there's that core agent team, and it's a pretty close-knit group.
swyx: The search team is separate from the infra team?
Aaron Levie: We have to build every layer of the stack ourselves except pure public cloud. I don't even know what our public numbers are, but you can just think of it as: a lot of data is stored in Box. So you have every layer of the stack, how you manage the data, the file system, the metadata system, the search system, all of those components. And all of those teams now have to understand that there's a new customer, which is the agent. They've been building for two types of customers in the past: users and [00:37:00] applications. And now you've got this new agent user, and it sometimes comes with different properties. Like, hey, maybe sometimes we should do an embedding-based search versus your typical semantic search. You have to build the capabilities to support all of this. And we're testing stuff, throwing things away when something doesn't work or isn't relevant. It's total chaos. But all of those teams are supporting the agent team, which is coming up with its requirements of what we need.
swyx: Yeah. We just came from a fireside chat you did, where you talked about how you're running this like an internal startup within the broader company. The broader company is like 3,000 people, but there's this core team that's, well, here's the innovation center.
Aaron Levie: Yeah. I want to be careful; I don't call it the innovation center.
Only because I think everybody has to do innovation. But there is a part of the company that is sort of do-or-die for the agent wave.
swyx: Yeah.
Aaron Levie: And it happens to be more of my focus simply because it's existential that [00:38:00] we get it right.
swyx: Yeah.
Aaron Levie: All of the supporting systems are necessary. All of the surrounding, adjacent capabilities are necessary. The only reason we get to be a platform where you'd run an agent is that we have a security feature or a compliance feature or a governance feature that some team is working on. But that's not going to be the make-or-break of whether we get agents right; that already exists, and we need to keep innovating there. I don't know what the precise number is, but it's not a thousand people and it's not ten people. There's a group of people who are the startup within the company, the make-or-break on everything related to AI agents leveraging our platform and letting you work with your data. That's where I spend a lot of my time, with Ben and Yosh and Diego and Teri, people across that team.
swyx: Yeah. Amazing.
Read Write Agent Workflows
Jeff Huber: How do you think about this: you've talked a lot about read workflows over your Box data, right? Gen-search questions, queries, et cetera. But what about write, or authoring, workflows?
Aaron Levie: Yes. I've [00:39:00] probably already revealed too much, now that I think about it.
Jeff Huber: Whatever you can.
Aaron Levie: Okay. It's just us. Yeah. Okay.
Of course, of course. So I'll keep it a little conceptual, because I've already said things that aren't even GA. But we've danced around it publicly, so, yeah. Hopefully nobody watches this episode.
swyx: It's tidbits for the highly engaged to go figure out your line of thinking. They can connect the dots.
Aaron Levie: Yeah. So, as the place where you have your enterprise content, there's a use case where I want an agent to read that data and answer questions for me. And then there's a use case where I want the agent to create something: use the file system to create something, or store off data it's working on, or have various files it's writing to about the work it's doing. So we do see it as fully read-write. The harder problem has so far been the read side, because again you have that [00:40:00] ten-million-to-one ratio problem, whereas writes mostly come from the model, and we just put them in the file system and use them. So that's a somewhat easier technical problem. The one part that's not necessarily technically hard, just not yet perfected in the state of the ecosystem, is building a beautiful PowerPoint presentation. It's still a hard problem for these models. These formats just weren't built for this.
swyx: They're working on it.
Aaron Levie: They're working on it. Everybody's working on it.
swyx: Every launch is like, well, we do PowerPoint now.
Aaron Levie: Yeah, and it's getting a lot better each time.
But then you'll ask it to update one slide, and all of a sudden the fonts will be just a little bit different on two of the slides, or it moved some shape over to the left a little. In code, these are the kinds of things you could really care about if you care how beautiful the code is, but the end user doesn't notice those problems. In file creation, the end user instantly sees it. You're [00:41:00] like, ah, paragraph three, you literally just changed the font on me; it's a totally different font midway through the document. Those are the kinds of things you run into a lot on the content creation side. So we are going to have native agents that do all of those things, powered by the leading models and labs. But the thing that I think is probably a much bigger idea over time is any agent on any system using Box as a file system for its work. In that scenario, we don't necessarily care what it puts in the file system. It could put its memory files, its specification documents, whatever its markdown files are, or it could generate PDFs. It's a workspace that is sandboxed off for its work; people can collaborate in it, and it can share with other people. So we've been thinking a lot about the right way to deliver that at scale.
Docs Graphs and Founder Mode
swyx: I wanted to get into the AI transformation and AI operations side. [00:42:00] One of the tweets that you wanted to talk about... this is just me going through your tweets, by the way.
Aaron Levie: Oh, okay.
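The sandboxed agent workspace Aaron describes above, where any agent writes its memory files, specs, and markdown into a slice of the file system it cannot escape, could look roughly like this. The class, directory layout, and traversal check are hypothetical illustrations, not a Box API.

```python
# Sketch of "the file system as an agent workspace": each agent gets a
# sandboxed directory it can read and write, and paths that resolve
# outside the sandbox are rejected. Names here are hypothetical.
import tempfile
from pathlib import Path

class AgentWorkspace:
    def __init__(self, root: Path):
        self.root = root.resolve()
        self.root.mkdir(parents=True, exist_ok=True)

    def _resolve(self, relative: str) -> Path:
        path = (self.root / relative).resolve()
        if not path.is_relative_to(self.root):  # Python 3.9+
            raise PermissionError(f"{relative!r} escapes the workspace")
        return path

    def write(self, relative: str, text: str) -> None:
        path = self._resolve(relative)
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(text)

    def read(self, relative: str) -> str:
        return self._resolve(relative).read_text()

ws = AgentWorkspace(Path(tempfile.mkdtemp()) / "agent-demo")
ws.write("memory/notes.md", "# Working notes\n")
print(ws.read("memory/notes.md"))
```

The interesting property is the one Aaron points at: the platform does not care what the agent stores, only that reads and writes stay inside a permissioned boundary that humans can also be granted access to.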
swyx: I mean, you read them one by one. You're the easiest guest to prep for, because you already have, like, this is what I'm interested in.
Aaron Levie: Are we going to get to, like, February or January? Where are we in the timelines? How far back are we going?
swyx: Can you describe Box as a set of skills? That's one of the extremes: if you just turn everything into a markdown file, then your agent can run your company. You just have to find the right sequence of words to do it.
Aaron Levie: Yes. Sorry, is that the question?
swyx: I think the question is: what if we documented everything, exactly the way you said? Let's get all the Fortune 500s prepared for agents; everything's golden and nicely filed away. What's missing? What's left? You've run your company for a decade.
Aaron Levie: Yeah. I think the challenge is that that information changes a week later, because something happened in the market for that [00:43:00] customer, or for us as a company, and now it has to get updated. So these systems are living and breathing, and they have to experience reality and updates to reality, which right now is probably going to be humans giving them the updates. And there's this piece about context graphs that went very viral.
swyx: Yeah.
Aaron Levie: I thought it was super provocative. I agreed with many parts of it, and I disagree with a few parts.
It's not going to be as easy as: if we just had the agent traces, then we could finally do that work. There's so much other stuff happening that we haven't been able to capture and digitize. And to be clear, I think they actually represented that in the piece. But there's just a lot of work; you can't have only skills files for your company, because a lot of other stuff happens and changes over time.
swyx: Yeah. Most companies are practically apprenticeships.
Jeff Huber: Every new employee who joins the team, [00:44:00] you spend one to three months ramping them up.
Aaron Levie: Yes.
Jeff Huber: All that tacit knowledge is not written down. But it would have to be if you wanted to give it to an agent. So that seems to me to be the issue.
Aaron Levie: One thing I think you're going to see is a premium on companies that can document this. There will be a huge premium on that. Can you shorten that three-month ramp cycle to a two-week ramp cycle? That's an instant productivity gain. Can you dramatically reduce rework in the organization, because you've documented where all the stuff is and where the answers are? Can you make your average employee as good as your 90th-percentile employee, because you've captured the knowledge that's in the heads of those top employees and made it available? So you can see some very clear productivity benefits.
If you had a company culture of making sure your information was captured, digitized, put in an agent-ready format, and made available to agents to work with. But then you have this reality that at a 10,000-person [00:45:00] company, mapping that to the access structure of the company is just a hard problem. Not every piece of information that's digitized can be shared with everybody, so you have to organize it in a way that actually works. There was a pretty good piece called "Your Company Is a File System." Did you see that one?
swyx: Nope.
Aaron Levie: Yes, you saw it. I'd actually be curious about your thoughts on it. We agree with it, because that's how we see the world.
swyx: Okay. We have it up on screen.
Aaron Levie: Okay. It's basically about how we already organize in this kind of permission-structure way, and these are the natural ways agents can now work with data. It's an interesting metaphor, but I do think companies will have to start thinking about how they digitize more of that data. What was your take?
Jeff Huber: Yeah, I mean, a company is probably like an ACID-compliant file system.
Aaron Levie: Uh, yeah.
Jeff Huber: Which I'm guessing Box is, right? And you have a great piece on that.
swyx: Yeah. [00:46:00] Well, my direction is a little different. I want to rewind to the graph word you used; that's a magic trigger word for us. I always ask: what's your take on knowledge graphs? Especially with every database person, I just want to see what they think.
There have been knowledge graph hype cycles, and you've seen it all.
Aaron Levie: Hmm. I'm actually not the expert in knowledge graphs, so you might need to do the research.
swyx: You don't need to be an expert. It's just: how seriously do people take it? Is there a lot of potential there?
Aaron Levie: Well, can I first ask: is this a loaded question, in the sense of, are you super pro, super anti, somewhere in the middle?
swyx: I see pros and cons. But I think your opinion should be independent of mine.
Aaron Levie: Totally. I just want to see what I'm stepping into.
swyx: It's a huge trigger word for a lot of people in our audience, and they're trying to figure out why. Why is this such a hot item for them? Because a lot of people get graph religion. They're like, everything's a graph; of course you have to represent it as a graph. How do you solve your knowledge [00:47:00] changing over time? Well, it's a graph.
Aaron Levie: Yeah.
swyx: There's that line of work, and then there are a lot of people who say you don't need it. And both are right.
Aaron Levie: Yeah. And the people who say you don't need it, what are they arguing for?
swyx: Markdown files. Simplicity.
Aaron Levie: Sure, sure.
swyx: It's structure versus less structure. That's all it is.
Aaron Levie: I think the tricky thing is, again, when this gets met with real humans: they're just going to their computer, working with some people on Slack or Teams, sharing some data through a collaborative file system, Google Docs or Box or whatever.
I certainly like the vision of most knowledge-graph, futuristic ways of thinking about it. It's just that it's 2026 and we haven't seen it play out yet. I mean, I don't even know how old you guys are, but to show my age: I remember, seventeen years ago, everybody thought enterprises would just run on [00:48:00] wikis.
swyx: Yeah.
Aaron Levie: And Confluence, which actually took off for engineering, unquestionably. But the idea was that everything would be in the wiki. And based on our general style of what we were building, we were just like: I don't know, people just want a workspace; they're going to collaborate with other people.
swyx: Exactly. Yeah. So you were anti-knowledge graph.
Aaron Levie: Not anti. Not anti. I think your search system and a knowledge graph are probably just two different systems. But I'm not in any religious war. I don't want to be in anybody's YouTube comments on this. There's no fight for me.
swyx: We love YouTube comments. We get into the comments.
Aaron Levie: Okay. But it's mostly a virtue of what we built, and we just continued down that path. That's what we pursued. But this is not a...
swyx: It's not existential for you. Great.
Aaron Levie: We're happy to plug into somebody else's graph. We're happy to feed data into it. We're happy for [00:49:00] agents to talk to multiple systems. It's not our fight.
swyx: Yeah.
Aaron Levie: But I need your answer. Yeah.
Graphs are a very effective nerd snipe.
swyx: See, this is one opinion, and then I've...
Jeff Huber: I think the actual graph structure is emergent in the mind of the agent, the same way it is in the mind of the human. And that's a more powerful graph, because it actually evolved over time.
swyx: So: don't tell me how to graph, I'll figure it out myself. Exactly. Okay. All right.
Jeff Huber: And what's yours?
swyx: I like the wiki approach. Uh, my, I'm actually
In this episode, Lex chats with Yoshi Yokokawa, CEO of Alpaca — a brokerage infrastructure company that provides API-based trading and custody services to fintechs and developers globally. The conversation begins with their shared experience at Lehman Brothers during the 2008 financial crisis, where Yoshi worked in fixed income securitization and learned that even when market participants sense a bubble, they keep dancing because timing the exit is impossible. After Lehman's collapse, Yoshi pursued entrepreneurship, building a computer vision AI company acquired by Kyocera before founding Alpaca in 2017. Initially inspired by Robinhood, Yoshi pivoted after experiencing firsthand the friction of accessing brokerage infrastructure—realizing the deeper opportunity was building API-first brokerage rails for developers. Today Alpaca powers 9 million accounts through 300+ partners across 45 countries, recently raising $150 million at a unicorn valuation. The discussion explores how Alpaca follows Robinhood's product roadmap to anticipate partner demand, the challenges of adding crypto, and Yoshi's thesis that finance is undergoing a generational shift from digital to on-chain operations. Lex shares examples of legacy infrastructure dysfunction—from faxing PDFs to TD Ameritrade in 2012 to the Synapse collapse caused by manual CSV uploads—illustrating why Alpaca built its own custody and ledger systems as a path to competing in the $350 trillion global securities custody market. NOTABLE DISCUSSION POINTS: Alpaca's biggest breakthrough was not a better investing app idea, but recognizing that the real bottleneck was brokerage infrastructure. Yokokawa and team initially explored B2C product concepts, but pivoted once they experienced firsthand how painful broker-dealer setup, custody, and clearing integrations were. For readers building fintech, this is a huge lesson: the highest-value opportunity is often the “invisible” infrastructure pain, not the user-facing feature set. 
They found product-market fit by starting with a narrow wedge (API for automated traders) and only then expanding into a broader platform (Broker API for fintech apps). Alpaca did not begin by serving large fintechs; it first attracted power users who urgently needed programmable execution, then used inbound demand ("can I build my own Robinhood?") as proof to build account opening, reporting, and full brokerage APIs. This is a valuable go-to-market pattern for infrastructure startups: win with a sharp use case, then expand into the system of record.

Yokokawa's core strategic edge is full-stack control of licenses, memberships, and ledger technology rather than relying on legacy vendors. He explicitly ties this to lessons from historical fintech fragility (manual workflows, broken reconciliations, middleware failures) and argues that owning the custody/clearing layer is what makes Alpaca defensible long term. For readers, this is the key takeaway on moat-building in financial services: if you don't control the ledger and operational core, your product may scale faster at first but remains structurally fragile.

TOPICS
Alpaca, Lehman Brothers, Barclays, Nomura, Neuberger Berman, BlackRock, Robinhood, Interactive Brokers, TD Ameritrade, BNY Mellon, Brokerage infrastructure, API, trading, tokenization, embedded finance, fintech, crypto, web3

ABOUT THE FINTECH BLUEPRINT
Live from the GNE Mainstage, in this Executives at the Edge episode, GSMA's Henry Calvert explores how standardized network APIs, quality on demand, and fixed-mobile orchestration are turning connectivity into a monetizable platform. As AI scales, application-led connectivity and cross-industry collaboration become essential to delivering real enterprise value. The post APIs, Agents, and Monetization: Enter the B2B2Agent Era appeared first on Mplify.
In this episode of Between Product and Partnerships, Biljana Pecelj joins Cristina Flaschen to explain how smaller teams successfully ship integrations with larger platform partners. She makes the case that leveraging usage data and performance metrics is the key to proving your integration's value, giving you the necessary influence to move up a major partner's priority list.

Biljana shares lessons from her experience managing integrations at Hootsuite during major platform shifts, including the rise of Instagram Business APIs and the emergence of new features like Stories that didn't always come with immediate API support. She also details the process of aligning internal stakeholders to ensure integration features actually ship despite shifting external APIs.

The conversation also covers the operational side of integrations, including why observability needs to be built early, how teams detect silent failures before customers do, and how to structure internal alignment when integration work touches engineering, legal, partnerships, and revenue.

Who we sat down with
Biljana Pecelj is a Principal Product Manager at Ledgy with deep experience building integrations inside platform-heavy environments. She has worked extensively on partnership-driven product initiatives where execution speed depends on navigating both technical constraints and external partner relationships.

Biljana brings expertise in:
Building integrations in environments where APIs and features evolve asynchronously
Designing for observability and proactive monitoring
Navigating asymmetric partner relationships
Aligning roadmap priorities across product, partnerships, legal, and engineering
Managing tradeoffs between beta opportunities and engineering capacity

Key Topics
Why integration product work is relationship work: technical execution matters, but alignment with partners determines whether integrations actually ship and scale.
Building in ecosystems you don't control: APIs change.
Features launch without endpoints. Roadmaps shift. Successful teams anticipate uncertainty rather than assume stability.
The importance of observability from day one: silent failures are common in integrations. Without monitoring, teams often learn about outages from customers instead of systems.
Roadmap tradeoffs when beta opportunities arise: new partner features can require immediate shifts in engineering priorities. Negotiation and resource reallocation become core product skills.
M&A and integration complexity: brand consolidation rarely means backend integration. Teams often inherit layered systems that remain technically independent long after acquisition.

Episode Highlights
01:55 – How integration product management differs from core product work
04:40 – Navigating power imbalances with large platform partners
07:15 – Using data to strengthen partner conversations
10:30 – Building observability when resources are limited
13:45 – Handling silent integration failures
17:50 – Managing beta features and roadmap shifts
21:30 – Aligning cross-functional teams around integration priorities
24:45 – Why relationships accelerate integration execution
28:10 – Lessons learned from building inside platform ecosystems

--
For more insights on partnerships, ecosystems, and integrations, visit www.pandium.com
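The "silent failure" problem described in this episode, learning about an outage from customers rather than systems, often comes down to alerting on the absence of expected events rather than on errors. A minimal sketch, with integration names and intervals made up for illustration:

```python
# Detect silent integration failures by watching for missing heartbeats:
# if a partner sync hasn't reported success within its expected interval,
# flag it even though no error was ever thrown. Names are hypothetical.
from datetime import datetime, timedelta

def stale_integrations(last_success: dict[str, datetime],
                       expected_interval: dict[str, timedelta],
                       now: datetime) -> list[str]:
    """Integrations whose last successful sync is older than expected."""
    return sorted(
        name for name, ts in last_success.items()
        if now - ts > expected_interval[name]
    )

now = datetime(2024, 6, 1, 12, 0)
last_success = {
    "instagram_posts": datetime(2024, 6, 1, 11, 55),
    "stories_metrics": datetime(2024, 6, 1, 6, 0),
}
expected_interval = {
    "instagram_posts": timedelta(minutes=15),
    "stories_metrics": timedelta(hours=1),
}
print(stale_integrations(last_success, expected_interval, now))
# ['stories_metrics']
```

The design point is that the monitor tracks successes, not failures: a partner API that quietly starts returning empty payloads produces no error to alert on, but it does stop producing heartbeats.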
Today, we are continuing our series, entitled Developer Chats, hearing from the large-scale system builders themselves.

In this episode, we are talking with Oleksandr Piekhota, Principal Software Engineer at Teaching Strategies. Oleksandr helps show us at what point of scale platform approaches are required, when to run experiments and when to stop, and, perhaps more importantly, engineering ownership beyond the code.

Questions
You've moved from hands-on engineering into principal and technical leadership roles, working on architecture and platforms. At what point did you realize your work was no longer about individual features, but about the system as a whole?
Across several projects, growth didn't break functionality; it exposed architectural limits. Can you recall a moment when it became clear that shipping more features wouldn't solve the problem, and a platform approach was required?
You've designed and supported APIs end-to-end, from architecture to real customers. How do you distinguish between an API that simply works and one that can truly support business scale?
Internal systems like invoicing and HR workflows began as automation, but evolved into real products. What tells you that an internal tool is worth developing seriously rather than treating as a temporary workaround?
In R&D, you explored CI/CD automation, serverless, and infrastructure experiments, not all of which reached production. How do you decide when an experiment should continue, and when it's no longer worth the engineering cost?
You've hired teams, set standards, and shaped long-term technical direction. At what point does an engineer stop being a contributor and start owning business-level outcomes?
You contributed to open-source tools that later became part of your company's infrastructure. Why do you see open-source contributions as part of serious engineering work rather than a side activity?
Looking across your projects, how do you now recognize a truly mature engineering system?
Is it code quality, process, or how teams respond when things go wrong?
If we look five to seven years into the future, which architectural assumptions we treat as "standard" today are most likely to turn out to be naive or limiting?

Sponsors
Incogni

Links
https://www.linkedin.com/in/oleksandr-piekhota-b675ba53/
https://teachingstrategies.com/

Support this podcast at: https://redcircle.com/codestory/donations
Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy
Software Engineering Radio - The Podcast for Professional Software Developers
Marc Brooker, VP and Distinguished Engineer at AWS, joins host Kanchan Shringi to explore specification-driven development as a scalable alternative to prompt-by-prompt "vibe coding" in AI-assisted software engineering. Marc explains how accelerating code generation shifts the bottleneck to requirements, design, testing, and validation, making explicit specifications the central artifact for maintaining quality and velocity over time. He describes how specifications can guide both code generation and automated testing, including property-based testing, enabling teams to catch regressions earlier and reason about behavior without relying on line-by-line code review. The conversation examines how spec-driven development fits into modern SDLC practices; how AI agents can support design, code review, documentation, and testing; and why managing context is now one of the hardest problems in agentic development. Marc shares examples from AWS, including building drivers and cloud services using this approach, and discusses the role of modularity, APIs, and strong typing in making both humans and AI more effective. The episode concludes with guidance on rollout, evaluation metrics, cultural readiness, and why AI-driven development shifts the engineer's role toward problem definition, system design, and long-term maintainability rather than raw code production. Brought to you by IEEE Computer Society and IEEE Software magazine.
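The spec-to-test link Marc describes can be made concrete: a specification clause such as "decoding an encoded value returns the original value" becomes an executable property checked against many generated inputs. The sketch below hand-rolls the input generator for self-containment; real teams would use a property-based testing library such as Hypothesis, and the codec here is a toy stand-in, not an AWS example.

```python
# Property-based test derived from a one-line spec clause:
# decode(encode(x)) == x for every list of integers x.
# The codec and the generator are illustrative stand-ins.
import random

def encode(values: list[int]) -> str:
    return ",".join(str(v) for v in values)

def decode(text: str) -> list[int]:
    return [int(part) for part in text.split(",")] if text else []

def check_round_trip(trials: int = 200, seed: int = 0) -> None:
    """Check the round-trip property on many randomly generated inputs."""
    rng = random.Random(seed)
    for _ in range(trials):
        values = [rng.randint(-10**6, 10**6)
                  for _ in range(rng.randint(0, 20))]
        assert decode(encode(values)) == values, values

check_round_trip()
print("round-trip property held on 200 generated inputs")
```

The point of this style is that regressions in generated code surface as property violations with a concrete counterexample, so teams can reason about behavior against the spec rather than reviewing every line.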
Get 90 days of Fellow free at Fellow.ai/coo

In this episode, Michael Koenig speaks with Greg Keller, co-founder and CTO of JumpCloud, about identity access management and why it's becoming one of the most important operational systems in the age of AI. Greg explains how traditional identity systems were designed for office-based companies running Microsoft infrastructure and why that model broke as companies moved to SaaS, cloud infrastructure, and remote work. The discussion then turns to the next big shift: the rise of AI agents and synthetic identities inside organizations. As companies deploy more AI tools, the number of machine identities may soon outnumber human employees. Managing what those systems can access will become a critical security and operational challenge.

Topics Covered
What a CTO actually does: Greg explains the different types of CTO roles and how technology leaders help companies anticipate where the market is headed.
Identity Access Management explained simply: IAM answers three core questions inside every company: Who are you? What can you access? How is that access managed?
Why the old IT model broke: traditional identity systems were built for on-premise offices and Microsoft infrastructure. Modern companies now operate across SaaS applications, cloud infrastructure, remote work environments, and multiple operating systems.
How JumpCloud approaches identity: JumpCloud was built to manage identity across devices, applications, and infrastructure regardless of platform.
Where Okta fits in the ecosystem: Okta helped modernize browser-based authentication through Single Sign-On, while JumpCloud focuses on broader identity infrastructure.

AI, Security, and Synthetic Identities
Why COOs should push AI adoption: Greg argues AI adoption is no longer optional. Companies must encourage teams to improve productivity and efficiency using AI.
The rise of synthetic identities AI agents, bots, APIs, and service accounts are becoming new actors inside companies that require identity governance. Bots may soon outnumber employees Organizations will soon manage more machine identities than human ones. AI as a potential insider threat AI systems can become security risks if they are granted excessive permissions or misinterpret policies. The API key governance problem Many AI integrations rely on API keys, which are often poorly managed and can create hidden security risks. Key Takeaway As companies adopt AI, identity access management becomes the control layer that determines what both humans and machines are allowed to do inside the organization. The companies that manage identity well will move faster and operate more securely. Links: Michael on LinkedIn: https://linkedin.com/in/michael-koenig514 Greg on LinkedIn: https://www.linkedin.com/in/gregorykeller/ JumpCloud: https://jumpcloud.com/ Between Two COO's: https://betweentwocoos.com Episode Link: https://betweentwocoos.com/ai-agents-identity-access-greg-keller
In this episode, we debrief the second annual Heatpunk Summit from the legendary Hashtub in Denver. We recap how builders from HVAC, hydronics, and home mining came together to advance hashrate heating—complete with live hardware demos, workshops, and a brutally constructive critique of our boiler setup from a pro hydronics engineer. We dig into galvanic corrosion gotchas, smarter system design, and why practical, hands-on education is the real unlock for bringing Bitcoin miners back into homes and businesses as useful heaters. We also break down the big development with Canaan's openness to support the home-mining and heat reuse market, what a “willing partner” ASIC manufacturer could mean for decentralization, and how small improvements—docs, APIs, and integrations—can catalyze a whole ecosystem. From workshop highlights (Home Assistant control, hydronics integration, open-source mining OS, and regulatory/insurance insights) to the industry's AI pivots and the investability of open source, this is a high-signal builder's recap with clear next steps and renewed momentum for hashrate heating.
In this engaging episode of MSP Business School, host Brian Doyle sits down with Shane Naugher, a pioneering figure in the world of AI and automation for MSPs. The discussion takes a deep dive into the real-world application of AI, focusing on how it can be utilized to streamline operations and deliver tangible ROI for businesses. Whether you're curious about how AI fits into your MSP strategy or eager to learn about automation opportunities, this episode delivers practical insights into what Shane calls the "mature business model" of MSPs. As the conversation unfolds, Shane shares his dual expertise as the CEO of DaZZee IT Services and founder of Innovative Automations, offering a rare glimpse into the intersection of AI, automation, and managed services. The episode explores the challenges of integrating AI into everyday business operations, shedding light on how AI-enabled automations can transform traditional processes, particularly in professional services and industries reliant on legacy systems. Shane shares valuable experiences and success stories, highlighting key automation opportunities and the significance of partnering with trusted AI advisors to navigate the rapidly evolving tech landscape. Key Takeaways: Practical AI Application: Understanding the difference between shiny AI tools and meaningful automation that drives business outcomes. Industry-Specific Automation: How different sectors, particularly professional services, can benefit from AI to achieve significant ROI. The Role of APIs: Leveraging open APIs and traditional RPA platforms for connecting disparate business applications and optimizing workflows. Partnership Model: The importance of MSPs partnering with AI and automation specialists to provide comprehensive client solutions. Strategic AI Conversations: Encouraging MSPs to lead AI integration discussions with clients to maintain a competitive edge. 
Guest Name: Shane Naugher LinkedIn page: https://www.linkedin.com/in/shanenaugher/ Company: Innovative Automations / DaZZee IT Website: https://innovativeautomations.ai/ / https://dazzee.com/ Show Website: https://mspbusinessschool.com/ Host Brian Doyle: https://www.linkedin.com/in/briandoylevciotoolbox/ Sponsor vCIOToolbox: https://vciotoolbox.com
What if innovation is not about moving faster, but moving with purpose? In this episode of Innovators Inside, Ian Bergman sits down with Dr. Hisham Alasad, head of innovation enablement at Qatar Airways, to unpack a human-first view of innovation shaped by fintech, academia, and a bold move to Qatar. They break down what open banking really changes, why banks fight it, and how open finance could unlock better, cheaper products for consumers. Then they go deeper: why innovation requires overcoming fear, why closed systems stall progress, and what a “Responsible Innovation” framework could look like that is ethical, inclusive, scalable, and beneficial beyond the balance sheet. They close with a big vision: using AI to help create opportunity and peace in the Middle East. Topics & Timestamps
Wes and Scott talk about building v_framer, Scott's custom multi-source video recording app, and why Electron beat Tauri and native APIs for the job. They dig into MKV vs WebM, crash-proof recording, licensing with Stripe and Keygen, auto-updates, and the real challenges of shipping a polished desktop app. Show Notes 00:00 Welcome to Syntax! March MadCSS 02:28 Why screen recording apps are so frustrating 07:14 The requirements behind Scott's app, v_framer 09:47 Tauri, WKWebView, and blurry screen recording headaches 13:00 Why switching to Electron was a game changer 14:02 Electrobun and the hybrid desktop experiment 16:29 Browser-based capture vs native APIs 18:50 Brought to you by Sentry.io 22:32 Notarization, certificates, and shipping a Mac app 24:52 One-time purchases, trials, and selling desktop software 26:37 Self-hosting Keygen for license keys 30:27 A scrappy Google Sheets-powered waitlist 31:56 Keyboard shortcuts, FPS locks, and app customization 34:50 CI/CD and painless auto-updates with Electron Hit us up on Socials! Syntax: X Instagram Tiktok LinkedIn Threads Wes: X Instagram Tiktok LinkedIn Threads Scott: X Instagram Tiktok LinkedIn Threads Randy: X Instagram YouTube Threads
Show Description: We talk with Frederik Braun from Mozilla about the Sanitizer API, how it works with HTML tags and web components, what it does with malformed HTML, and where CSP fits in alongside the Sanitizer API. Guest: Frederik Braun, security engineer and manager working on the Mozilla Firefox web browser. Links: Frederik Braun: Why the Sanitizer API is just setHTML() • freddyb (Frederik B) Sponsors: Bluehost. Do you ever feel like pre-configured hosting is slowing you down? That is where VPS hosting starts to make a lot more sense. With Bluehost VPS, you are not stuck inside someone else's environment. You get full control of the server. You can spin up Docker, deploy containerized apps, run workflows, and connect your CRM, databases, and APIs without weird restrictions. No shared bottlenecks. No artificial limits. If you want to actually own your stack, your data, your performance, your roadmap, VPS is the move.
In this episode of the Wharton FinTech Podcast, Bobby Ma sits down with Kyle Mack, CEO & Co-Founder of Middesk, a Series B company. Kyle shares his experience building Middesk, the leading business identity platform modernizing business verification, risk evaluation, and compliance. Its fast, frictionless APIs support KYB, credit assessment, and tax registration use cases, with data updated in days, not months. More than 500 customers trust Middesk to verify, underwrite, and grow with confidence. The company has raised over $70 million in funding and is backed by top-tier investors including Accel, Sequoia, and Insight Partners. We discuss: - Kyle's journey building Middesk starting from developing proprietary data pipelines to creating a leading business identity platform - The value proposition of KYB and how it is fundamentally more complex than KYC - How Middesk serves and plugs into its customers' decisioning workflows - The future of business identity as it evolves with AI and other technology trends
What if public health agencies could access better, faster, and more complete data without giving up control? In this episode, we sit down with Dr. Jen Layden, senior vice president of population and innovation at ASTHO, to explore the new Public Health Data Consortium and what it means for the future of public health decision-making. Dr. Layden explains how this unique public–private partnership is designed to improve data access, quality, and analytics while keeping governance firmly in the hands of state and territorial health agencies. She discusses why mortality data is a critical starting point, how emerging technologies like APIs and advanced analytics can help close long-standing data gaps, and what new insights could come from linking public health data with sources like pharmacy, claims, and real-world data. Leadership Power Hour: Your Launchpad for Impact | ASTHO
I am thrilled to welcome Marenza Altieri-Douglas, an executive in sales and technology. She's trained in structured enterprise environments and startups, and is steeped in opening new markets and building commercial enterprises. That's not going to be our focus today; instead we talk about how she is an incredible storyteller, rooted in concepts like disruption and cultivation. Her personal story is key to the narrative, and I was thrilled she is joining us to share that story and how she ties it all together, leading and operating in the current business climate. Marenza Altieri-Douglas' career sits at the intersection of technology evangelism and disciplined execution. Trained in structured, enterprise environments and refined in startups and scale-ups, she specializes in defining strategic direction, opening new markets, and building compelling commercial propositions for enterprise and C-suite customers across Fortune 500 and Global 5000 organizations. She has worked across and alongside technologies including Conversational and Generative AI, APIs, DevOps, open-source platforms, cloud and containerized architectures, enterprise mobility, security, communications, media and broadcast, telecoms, and digital platforms. AI is a natural evolution of this journey, alongside a strong strategic interest in GPU-enabled infrastructure and quantum technologies. Marenza is known for building high-trust relationships, spotting and growing talent, and connecting product, engineering, and commercial teams around clear outcomes. A natural storyteller and facilitator, she enjoys shaping narratives that help organizations and customers understand why a technology matters, not just what it does. (4:50) We delve into Marenza's formative years that put her on her current path. She shares her personal and professional story. (17:18) When did Marenza realize that “disruption” and challenging things became a part of her brand? 
(22:38) What does Marenza feel are some of the important qualities that people should embody? (28:20) Marenza shares how she focuses on the future and the next generation. (39:16) We reflect on what Marenza would like her impact to be over the next couple of years.Connect with Marenza Altieri-Douglashttps://www.linkedin.com/in/marenza/ Subscribe: Warriors At Work PodcastsWebsite: https://jeaniecoomber.comFacebook: https://www.facebook.com/groups/986666321719033/Instagram: https://www.instagram.com/jeanie_coomber/Twitter: https://twitter.com/jeanie_coomberLinkedIn: https://www.linkedin.com/in/jeanie-coomber-90973b4/YouTube: https://www.youtube.com/channel/UCbMZ2HyNNyPoeCSqKClBC_w
Host: Annik Sobing Guest: Kenneth G. Peters Published: February 2026 Length: ~20 minutes Presented by: Global Training Center GTM Software Prep: Don't Install Until You've Done These 3 Things First In this Simply Trade Roundup, Annik talks with Kenneth G. Peters, President at MIC US and Director of Commercial Operations in North America, about Global Trade Management (GTM) software—specifically, what trade teams must do before implementation to avoid creating “digital chaos.” Ken shares real talk from his ATCC presentation on data cleanup, process mapping, and testing, plus why “cleaning your data like you're hosting the in-laws” is now his signature advice. Shoutout to Alison for the killer slides. What You'll Learn in This Episode Ken's new grandpa status (the little guy is 7 months old—congrats!) and why it's the “next step in life” that keeps him energized for trade tech. The #1 mistake companies make with GTM software Data cleanup first: Don't dump junk into GTM. Scrub inactive vendors, obsolete parts, invalid HS codes (like 111111 or all zeros). Clean it like you're hosting the in-laws—no mess allowed. Why: GTM amplifies what you give it. Bad data in = faster mistakes out. Avoid the “Big Bang” implementation trap Don't try to do everything at once (denied party screening + classification + FTA rules + solicitation). Start small: Classification (builds the foundation—parts, HS codes, values). Denied party screening (uses your vendor/part data). FTA analysis (relies on classification/HS from step 1). Why: Master data dependencies mean you build once and reuse everywhere. Processes over pixels GTM won't fix broken workflows. Map your processes before going live. If your current setup is emailing Excel files between systems, you're not automating—you're digitizing chaos. True automation: ERP ↔ GTM via SFTP, APIs, XML—no human hands on keyboards. Reduces errors, speeds everything up. 
Who owns what after go‑live MIC US (GTM provider): Manages the software backend—reg updates, HS databases, platform maintenance. Your team: Owns the process (classification, entry creation, decision‑making). Someone still reviews outputs for accuracy. No “managed services” from MIC—GTM is a tool, not a full‑service outsource. Testing: where most implementations fail Allocate real time and resources to testing—don't rush it. Test end‑to‑end: data flow, workflows, edge cases. Why: Skipped or rushed testing = live problems that cost more to fix later. “If your systems are emailing Excel files to each other, you're not automating” Ken's golden rule: Hands‑off data flow (ERP → GTM) eliminates errors. Excel handoffs = manual errors waiting to happen. Key Takeaways Clean data first: Active parts, valid HS, no ghosts—GTM makes good data shine and bad data explode. Start small, build smart: Classification → screening → FTA, not “big bang everything.” Fix processes before pixels: GTM won't save broken workflows; it speeds them up. Testing = non‑negotiable: Rushed testing = expensive live fixes. GTM is a force multiplier—if your foundation is solid. Credits Host: Annik Sobing Guest: Kenneth G. Peters, President, MIC US Producer: Annik Sobing Listen & Subscribe Simply Trade main page: https://simplytrade.podbean.com Apple Podcasts: https://podcasts.apple.com/us/podcast/simply-trade/id1640329690 Spotify: https://open.spotify.com/show/09m199JO6fuNumbcrHTkGq Amazon Music: https://music.amazon.com/podcasts/8de7d7fa-38e0-41b2-bad3-b8a3c5dc4cda/simply-trade Connect with Simply Trade Podcast page: https://www.globaltrainingcenter.com/simply-trade-podcast LinkedIn: https://www.linkedin.com/showcase/simply-trade-podcast YouTube: https://www.youtube.com/@SimplyTradePod Join the Trade Geeks Community Trade Geeks (by Global Training Center): https://globaltrainingcenter.com/trade-geeks/
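Ken's "clean data first" rule above is easy to mechanize: before loading a parts master into GTM, reject codes that are obviously junk, such as non-numeric values, wrong lengths, all zeros, or a single repeated digit like "111111". A minimal Python sketch (the field names and the 6/8/10-digit length rule are illustrative assumptions, not MIC's actual validation logic):

```python
def is_valid_hs_code(code):
    """Reject obviously junk HS codes: non-digits, unexpected length,
    or a single repeated digit (covers '000000', '111111', ...)."""
    if not code.isdigit() or len(code) not in (6, 8, 10):
        return False
    if len(set(code)) == 1:
        return False
    return True

def scrub_parts(parts):
    """Split a parts master into clean rows and rows needing review.

    `parts` is a list of dicts with an 'hs_code' key (hypothetical schema).
    """
    clean, review = [], []
    for part in parts:
        (clean if is_valid_hs_code(part["hs_code"]) else review).append(part)
    return clean, review
```

A scrub step like this, run before the initial GTM load, keeps the "bad data in = faster mistakes out" failure mode from ever reaching the system; anything in the review pile goes to a human, not to GTM.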
In episode 200 of BIMrras we step into dangerous territory. What if, instead of using a CDE, we program one? What if we stop treating it as a commercial brand and start understanding it for what it really is: a system of rules for managing information? We talk about what it means to build an Open Source CDE, about translating ISO 19650 into code, about no longer confusing software with methodology, and about how uncomfortable it is to discover that we don't understand the system we use every day as well as we thought. Because automating chaos doesn't bring it to order. It accelerates it. An episode about digital sovereignty, processes, and responsibility. And about an idea that may sting a little: if you don't understand your CDE, you aren't managing information. You are renting convenience. Welcome to episode 200 of BIMrras! Episode contents: 00:00:00 Introduction and celebration of episode 200 00:06:00 Origin of the Open Source CDE project 00:10:30 Adapting the software to the way we work 00:28:00 The CDE as infrastructure versus a closed platform 00:38:30 Version control and problems with IFC files 00:45:00 Data management versus file management in BIM 00:57:00 Interoperability, APIs, and automation 01:02:00 Security and backups 01:07:00 Open Source and data sovereignty 01:13:00 Responsibilities and traceability in the project
John V, AI risk, safety, and security at the Institute for Security and Technology (IST), joins Defender Fridays today. John's work spans AI red teaming, adversarial machine learning, AI evals and validation, and AI risk assessment, including policy work at the intersection of AGI and nuclear strategic stability. Learn more at https://securityandtechnology.org/ Register for Live Sessions: Join us every Friday at 10:30am PT for live, interactive discussions with industry experts. Whether you're a seasoned professional or just curious about the field, these sessions offer an engaging dialogue between our guests, hosts, and you – our audience. Register here: https://limacharlie.io/defender-fridays Subscribe to our YouTube channel and hit the notification bell to never miss a live session or catch up on past episodes! Sponsored by LimaCharlie: This episode is brought to you by LimaCharlie, a cloud-native SecOps platform where AI agents operate security infrastructure directly. Founded in 2018, LimaCharlie provides complete API coverage across detection, response, automation, and telemetry, with multi-tenant architecture designed for MSSPs and MDR providers managing thousands of unique client environments. Why LimaCharlie? Transparency: Complete visibility into every action and decision. No black boxes, no vendor lock-in. Scalability: Security operations that scale like infrastructure, not like procurement cycles. Move at cloud speed. Unopinionated Design: Integrate the tools you need, not just those contracts allow. Build security on your terms. Agentic SecOps Workspace (ASW): AI agents that operate alongside your team with observable, auditable actions through the same APIs human analysts use. Security Primitives: Composable building blocks that endure as tools come and go. 
Build once, evolve continuously. Try the Agentic SecOps Workspace free: https://limacharlie.io Learn more: https://docs.limacharlie.io Follow LimaCharlie: Sign up for free: https://limacharlie.io LinkedIn: /limacharlieio X: https://x.com/limacharlieio Community Discourse: https://community.limacharlie.com/ Host: Maxime Lamothe-Brassard - CEO / Co-founder at LimaCharlie
In this episode of The Ross Simmonds Show, Ross breaks down the so-called “SaaSpocalypse” after $1 trillion in SaaS market cap vanished in a single week. While headlines scream that “AI will replace SaaS,” Ross argues the reality is far more nuanced. He introduces a three-part framework, Exposed, Embedded, Evolved, and outlines the strategic shifts founders and marketers must make to survive and compound in the age of AI agents. Key Takeaways and Insights: 1. The $1 Trillion Wake-Up Call - SaaS stocks were crushed in early 2026, triggering fear across markets. - AI agents, LLM advancements, and disappointing earnings accelerated the correction. - The dominant narrative says AI will replace SaaS, but the situation is more complex. - Market fear is loud. Structural change is quieter, but very real. 2. AI Agents, Vibe Coding & the Death of Per-Seat Pricing? - AI agents interacting directly with APIs challenge traditional SaaS interfaces. - “Vibe coding” demonstrates how quickly software can now be replicated. - Per-seat pricing models are under pressure as automation scales output. - The interface is shifting from dashboards to conversations. 3. The Data Reality Most People Ignore - Global SaaS spending is projected to grow from $318B (2025) to $500B+ (2028). - Enterprise contracts and deep dependencies don't disappear overnight. - Pricing models may change. Market leaders may change. - Software demand isn't vanishing, it's evolving. 4. The Extinction Stack: Exposed, Embedded, Evolved - SaaS companies fall into three survival tiers. - Not all SaaS companies face equal risk. - Your future depends on depth of integration and data moat. - Operators must identify where they sit, now. 5. Type 1: The Exposed - Horizontal point solutions with weak moats and low switching costs. - Easily replicated with AI tools in days or weeks. - Rely on habit rather than proprietary advantage. - Most vulnerable to margin compression and churn. 
6. Type 2: The Embedded - Deeply integrated systems of record inside enterprises. - Painful and complex to replace due to migration risk. - The risk isn't extinction, it's interface disruption. - Must become AI-first before agents abstract them away. 7. Type 3: The Evolved - AI-native or aggressively AI-integrated platforms. - Built on proprietary data, regulatory moats, and deep user memory. - AI increases the value of their data advantage. - Positioned not just to survive, but accelerate. 8. Distribution Is the New Defensive Moat - AI can replicate features. It cannot replicate trust. - Brand equity, audience relationships, and distribution compound. - As product development gets cheaper, distribution becomes the advantage. - This is the moment to double down on quality and amplification. 9. From Time-Based to Outcome-Based Thinking - Per-seat and time-based pricing models face structural pressure. - The future favors outcome-driven pricing and accountability. - Buyers will demand measurable impact, not access. - Service businesses must shift from hours sold to results delivered. 10. Intentional AI vs Fear-Based AI - Two types of teams are emerging: intentional adopters and reactive adopters. - AI without process creates noise, not leverage. - 10,000 mediocre AI assets won't move the needle. - 10 strategic, AI-enabled assets can change a business trajectory.
In this episode of Scene from Above, Julia Wagemann speaks with Matthias Mohr, independent software developer and one of the key contributors to the STAC (SpatioTemporal Asset Catalog) and STAC API specifications. STAC has become foundational to how Earth observation data is discovered and accessed across cloud platforms. But its origins lie in a fragmented landscape of portals, inconsistent metadata, and incompatible APIs. Matthias shares how STAC emerged from practical needs within the community and how it evolved into a widely adopted standard for geospatial data discovery. Together, Julia and Matthias unpack: Why STAC was created and what problem it solved The difference between static STAC catalogues and STAC APIs How organisations struggle when adopting STAC internally The role of extensions and interoperability Where cloud-native geospatial infrastructure may head next A thoughtful conversation for anyone working with large-scale Earth observation data, from analysts querying data, to engineers publishing catalogues, to decision-makers shaping data infrastructure. Host: Julia Wagemann Guest: Matthias Mohr
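The static-catalogue-versus-API distinction Julia and Matthias discuss rests on one shared building block: the STAC Item, a GeoJSON Feature carrying discovery metadata that both a pile of static JSON files and a queryable STAC API serve in the same shape. A minimal sketch in plain Python (the item's values are illustrative, and the structural check is a simplification of the full spec, not a real validator):

```python
# A minimal STAC Item: a GeoJSON Feature with STAC discovery fields.
# All values here are made up for illustration.
item = {
    "type": "Feature",
    "stac_version": "1.0.0",
    "id": "example-scene-20240101",
    "geometry": {"type": "Point", "coordinates": [7.6, 51.9]},
    "properties": {"datetime": "2024-01-01T10:30:00Z"},
    "assets": {
        "data": {
            "href": "https://example.com/scene.tif",
            "type": "image/tiff; application=geotiff",
        }
    },
    "links": [],  # in a static catalogue these point to parent/root JSON files
}

def looks_like_stac_item(obj):
    """Cheap structural check for the fields that both static
    catalogues and STAC APIs rely on for discovery."""
    return (
        obj.get("type") == "Feature"
        and "id" in obj
        and "stac_version" in obj
        and "datetime" in obj.get("properties", {})
        and isinstance(obj.get("assets"), dict)
    )
```

Because the Item shape is identical in both deployment modes, a client can start with static files and graduate to a searchable API without rewriting how it reads metadata, which is much of why the standard spread across cloud platforms.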
Vitalik outlines Ethereum's Post-Quantum roadmap. Ethereum researchers introduce the leanSig signature scheme. Alchemy releases crypto APIs for agents. And Brevis reduces RTP costs on its ZKVM. Read more: https://ethdaily.io/892 Borrow against ETH at the lowest fixed rates in DeFi. Liquity V2 lets you use ETH as collateral to mint BOLD, the Ethereum native dollar. Learn more at liquity.org Disclaimer: Content is for informational purposes only, not endorsement or investment advice. The accuracy of information is not guaranteed.
As has become tradition, the guys are delighted to welcome back special guest Analyst Dean Bubley, to look ahead to the main themes of this year's big telecoms trade show. They start with the obvious – AI – but strive to inject focus and substance into this ubiquitous and often hyperbolic topic by discussing agents, automation, and APIs. They then move on to the matter of sovereignty and how viable it is for countries to become more self-sufficient in a time of ultra globalisation. The final big theme is satellite telecoms, with direct-to-device likely to be a hot topic at the show. They conclude by examining recent conjecture on the effect AI will have on the world and its workers.
Stablecoin yield doesn't have to mean complexity, counterparty mystery, or a leap of faith. We sit down with Jeff Handler, co‑founder and CCO of OpenTrade, to unpack how enterprise‑grade infrastructure turns on‑chain dollars into real returns, why tokenization only matters when it solves a user's problem, and how crypto‑native strategies like delta-neutral Solana staking can deliver yield without riding the market's mood swings. Jeff walks us through his journey from early Bitcoin wallets to USDC's formative years, then into building a platform that looks more like SaaS than a protocol. We dig into the operations hiding behind clean APIs: bank‑grade asset management, reporting, and legal structures that meet treasury standards. If you've wondered how fintechs, exchanges, and neobanks can keep funds on chain while accessing money market exposure or hedged staking strategies, this is the blueprint. We also get practical about adoption. Trust is earned through credible investors and counterparties, but it's cemented with enforceable contracts, account controls, and bankruptcy‑aware structures. For product teams, the takeaway is clear: avoid vanity metrics, pursue product‑market fit, and accept that real usage trails real utility. On regulation, Jeff advocates a proven path—operate responsibly under existing laws, engage policymakers, and keep shipping rather than waiting for a perfect rulebook. To close, we explore how embedded yield becomes a retention and growth engine. With configurable terms, rates, and minimums, teams can shape offerings to reduce churn or boost balances while keeping a “stablecoins in, stablecoins out” experience. If you're building in fintech or web3 and need a clear, compliant, and scalable way to deliver yield, this conversation will sharpen your roadmap. Enjoy the episode, then subscribe, share with a teammate, and leave a quick review so others can find it too. This episode was recorded through a Descript call on January 30, 2026. 
Read the blog article and show notes here: https://webdrie.net/stablecoin-yield-without-the-headache
Episode 300 is a milestone we never imagined when we hit record for the first time in 2018 at 470 Claims, which was acquired by Alacrity Solutions. Seven years later, Rob and Lee sit down to reflect on how FNO: InsureTech began, how it evolved, and what has surprised us most along the way. In this special episode, we talk through the genesis of the podcast and how a simple idea turned into hundreds of conversations across insurance and insuretech. We reflect on the consistency, curiosity, and commitment it took to keep showing up week after week, and how the journey shaped us as hosts. Key Highlights • [3:19] The genesis of FNO: InsureTech and how the podcast started in 2018 • [9:03] The unexpected relationships and networking that grew from the show • [25:42] How insuretech conversations evolved from APIs to AI • [29:36] The industry shifts between 2018 and 2025 that quietly changed startup thinking • [31:09] Reflections on seven years of recording and the plans ahead We are grateful to everyone who has listened, shared, or joined us as a guest, and to our sponsor Alacrity Solutions for supporting us all these years. Cheers to the next 100!
In this episode of Valley of Depth, we dive into Aalyria's newly announced $100 million raise at a $1.3 billion valuation with cofounder and CTO Brian Barritt and unpack why investors are betting big on the future of networks that don't sit still. Aalyria is building two core technologies born inside Google: Spacetime, a software orchestration layer designed to manage networks in motion, and Tightbeam, a laser communications system delivering fiber-like speeds through the atmosphere. Together, they aim to solve one of the hardest infrastructure challenges in aerospace and defense: how to coordinate satellites, aircraft, drones, ships, and ground systems into a seamless “network of networks.” The conversation spans laser physics, diffraction challenges in space-to-ground links, feeder link bottlenecks in mega-constellations, and why routing data across moving infrastructure is fundamentally different than routing across fixed networks. We cover: Why Aalyria's $100M raise signals a shift from R&D to deployment What “network in motion” really means and why it's so hard How laser communications can reach 100 gigabits per second through atmosphere The technical challenge of Earth-to-space vs. 
space-to-Earth optical links Why interoperability has been a 40-year ambition inside the DoD How open APIs could become the connective tissue for JADC2 and beyond What resilience and roaming look like in hybrid satellite architectures Why optical ground stations require orchestration software to scale • Chapters • 00:00 - Intro 00:59 – The history of Aalyria 02:47 – Aalyria's Spacetime 06:09 – Building the connective software stack that links all of Aalyria's technology together 07:12 – The non-geostationary network problem 11:12 – The rebirth of Loon Technology 14:50 – How Tightbeam ties in to Aalyria 17:21 – 100 Gb/s through the atmosphere 19:42 – Brian's mandate as CTO when Aalyria forms 20:37 – State of Tightbeam at formation of Aalyria 22:17 – Why can't other companies do what Spacetime does yet? 26:05 – The significance of having different architectures with different source codes talk to each other without modification 28:21 – How Aalyria integrates a new customer's network 31:05 – What is a long distance for Tightbeam and customer reaction to demos 32:48 – Who has Aalyria surprised the most with their demos? 34:28 – What has prevented the government from making a network of networks? 39:14 – Why wouldn't a space version of the Tightbeam terminal work? 42:01 – How Aalyria is thinking about customers adopting Tightbeam 45:15 – Aalyria in the defense industry 47:05 – Aalyria's commercial aspects 48:30 – Aalyria's latest investment round 51:39 – Next milestones 53:00 – What keeps Brian up at night? 54:00 – Long-term vision for Aalyria 56:16 – What does Brian do for fun? 
• Show notes • Aalyria's website — https://www.aalyria.com/ Mo's socials — https://x.com/itsmoislam Payload's socials — https://twitter.com/payloadspace / https://www.linkedin.com/company/payloadspace Ignition's socials — https://twitter.com/ignitionnuclear / https://www.linkedin.com/company/ignition-nuclear/ Tectonic's socials — https://twitter.com/tectonicdefense / https://www.linkedin.com/company/tectonicdefense/ Valley of Depth archive — Listen: https://pod.payloadspace.com/ • About us • Valley of Depth is a podcast about the technologies that matter — and the people building them. Brought to you by Arkaea Media, the team behind Payload (space), Ignition (nuclear energy), and Tectonic (defense tech), this show goes beyond headlines and hype. We talk to founders, investors, government officials, and military leaders shaping the future of national security and deep tech. From breakthrough science to strategic policy, we dive into the high-stakes decisions behind the world's hardest technologies. Payload: www.payloadspace.com Tectonic: www.tectonicdefense.com Ignition: www.ignition-news.com
In this episode of the Ardan Labs Podcast, Ale Kennedy talks with Jens Neuse, CEO and co-founder of WunderGraph, about his unconventional path into technology and entrepreneurship. After a life-altering accident ended his carpentry career, Jens taught himself to code during recovery and eventually built WunderGraph to solve modern API challenges. Jens shares the evolution of WunderGraph from an early-stage startup to a successful open-source platform, including pivotal moments like securing eBay as a customer. The conversation highlights the importance of resilience, community-driven development, and balancing startup life with family, offering insight into what it takes to build meaningful technology through adversity and persistence. 00:00 Introduction and Current Life 07:19 Dropping Out and Carpentry Career 10:52 Life-Altering Accident and Recovery 18:01 Learning to Walk and Finding Direction 27:46 Discovering Coding and Technology 31:17 Starting the Startup Journey 33:07 Discovering the Power of APIs 40:50 Building a Team and Leadership Growth 48:17 Founding WunderGraph 59:07 Pivoting to Open Source 01:05:32 eBay Breakthrough and Validation 01:10:08 Balancing Family and Startup Life Connect with Jens: LinkedIn: https://www.linkedin.com/in/jens-neuse Mentioned in this Episode: WunderGraph: https://wundergraph.com Want more from Ardan Labs? You can learn Go, Kubernetes, Docker & more through our video training, live events, or through our blog! Online Courses: https://ardanlabs.com/education/ Live Events: https://www.ardanlabs.com/live-training-events/ Blog: https://www.ardanlabs.com/blog Github: https://github.com/ardanlabs
How real-time security transforms ERP systems in a cloud-driven world, spotting threats instantly, leveraging AI for proactive defense, and closing common blind spots before breaches escalate. Curious about staying ahead of cyber risks? ===== Mohammed Moidheen, SAP security architect at Infosys, unpacks why real-time monitoring is vital amid 2,200 daily cyber attacks costing trillions annually. He highlights blind spots like unmonitored access vulnerabilities, ignored audit logs, unsecured APIs, privileged accounts, insider threats, and poor event correlation in S/4HANA Cloud setups. AI evolves detection with predictive intelligence, automated responses, natural language queries, and cross-system pattern spotting, shifting from reactive to proactive security. Real-world cases show systems halting unusual data downloads and insider data exfiltration in minutes. Advice includes aligning with governance, prioritizing crown jewels, setting baselines, training teams, and correlating data. Infosys aids via assessments and foundational builds. Listen now and rethink what ERP can do for your organization! Download Episode Transcript Useful Links: SAP Cloud ERP, Infosys.com Follow Us on Social Media! SAP S/4HANA Cloud ERP: LinkedIn ===== Guest: Mohammed Khan Moidheen, SAP Security Architect at Infosys Consulting Mohammed Khan Moidheen is a Senior SAP Security Architect with over 12 years of experience securing and operating large-scale SAP landscapes across global enterprises. 
His expertise spans SAP S/4HANA security, ERP platform services, DevSecOps enablement, and designing audit-ready security architectures aligned with frameworks such as ISO 27001, NIST, and GDPR. Mohammed is CISSP and CISA certified and excels at translating complex security requirements into actionable strategies that are practical, strategically aligned, and strengthen organisational resilience. Host 1: Richard Howells, SAP Richard Howells has been working in the Supply Chain Management and Manufacturing space for over 30 years. He is responsible for driving the thought leadership and awareness of SAP's ERP, Finance, and Supply Chain solutions and is an active writer, podcaster, and thought leader on the topics of supply chain, Industry 4.0, digitization, and sustainability. Follow Richard Howells on LinkedIn and X Host 2: Oyku Ilgar, SAP Oyku Ilgar is a marketer and thought leader specializing in SAP's digital supply chain and ERP solutions since 2017. As a marketer, blogger, and podcaster, she creates engaging content that highlights innovative SAP technologies and explores key topics including business trends, AI, Industry 4.0, and sustainability. She holds dual bachelor's degrees in Finance & Accounting and English Translation, along with a master's degree in Business Administration and Foreign Trade, specializing in marketing. With her background in digital transformation, Oyku communicates technology trends and industry insights to help professionals navigate the evolving business landscape. Oyku's LinkedIn and SAP Community ===== Key Topics: real-time security, ERP monitoring, cloud threats, SAP S/4HANA, access management, audit logs, AI threat detection, insider threats, privileged accounts, predictive intelligence
Dylan and Max sit down with Aaron, Software Architect at Airplane Manager, to talk business aviation ops tech and where AI is headed. If you're running lean (two pilots, one tail, no dispatcher), this is the roadmap for reducing busywork without losing operational control. They dig into integrations, offline trip tools, and why "apps" might just become background APIs. Listen in and subscribe for more pilot-to-pilot ops talk. Check out the software Dylan and Max both use to run their departments: Airplane Manager Show Notes 0:00 Intro 2:01 Airplane Manager Overview 11:07 App, AI, and Security 21:08 Flight Operations Efficiency 36:18 Evolving Best Practices with Tech 49:39 Final Thoughts Our Sponsors Tim Pope, CFP® — Tim is both a CERTIFIED FINANCIAL PLANNER™ and a pilot. His practice specializes in aviation professionals and aviation 401k plans, helping clients pursue their financial goals by defining them, optimizing resources, and monitoring progress. Click here to learn more. Also check out The Pilot's Portfolio Podcast. Advanced Aircrew Academy — Enables flight operations to fulfill their training needs in the most efficient and affordable way—anywhere, at any time. They provide high-quality training for professional pilots, flight attendants, flight coordinators, maintenance, and line service teams, all delivered via a world-class online system. Click here to learn more. Raven Careers — Helping your career take flight. Raven Careers supports professional pilots with resume prep, interview strategy, and long-term career planning. Whether you're a CFI eyeing your first regional, a captain debating your upgrade path, or a legacy hopeful refining your application, their one-on-one coaching and insider knowledge give you a real advantage. Click here to learn more. The AirComp Calculator™ is business aviation's only online compensation analysis system. 
It can provide precise compensation ranges for 14 business aviation positions in six aircraft classes at over 50 locations throughout the United States in seconds. Click here to learn more. Vaerus Jet Sales — Vaerus means right, true, and real. Buy or sell an aircraft the right way, with a true partner to make your dream of flight real. Connect with Brooks at Vaerus Jet Sales or learn more about their DC-3 Referral Program. Harvey Watt — Offers the only true Loss of Medical License Insurance available to individuals and small groups. Because Harvey Watt manages most airlines' plans, they can assist you in identifying the right coverage to supplement your airline's plan. Many buy coverage to supplement the loss of retirement benefits while grounded. Click here to learn more. VSL ACE Guide — Your all-in-one pilot training resource. Includes the most up-to-date Airman Certification Standards (ACS) and Practical Test Standards (PTS) for Private, Instrument, Commercial, ATP, CFI, and CFII. 21.Five listeners get a discount on the guide—click here to learn more. ProPilotWorld.com — The premier information and networking resource for professional pilots. Click here to learn more. Feedback & Contact Have feedback, suggestions, or a great aviation story to share? Email us at info@21fivepodcast.com. Check out our Instagram feed @21FivePodcast for more great content (and our collection of aviation license plates). The statements made in this show are our own opinions and do not reflect, nor were they under any direction of any of our employers.
The host of episode 108 of Venture Everywhere is Harm-Julian Schumacher, co-founder and CEO of OneLot, a financing platform for used car dealers in the Philippines. He talks with Reto Bolliger, co-founder and CEO of Chaiz, an online marketplace for extended vehicle warranties. Reto shares how climbing Kilimanjaro led him to build a travel company, and how an investor in that business introduced him to the surprisingly profitable world of extended car warranties. He discusses how Chaiz challenges the industry consensus that warranties “must be sold” through aggressive tactics, instead building trust through transparency and offering consumers prices up to 40% cheaper than dealerships. In this episode, you will hear: Building the first online marketplace to compare and buy extended car warranties. Offering dealership products at 40% lower prices through digital channels. Replacing aggressive sales tactics with transparency and education. Leveraging AI for customer support and AI search optimization. Embedding warranty APIs for cross-selling through partner platforms. Learn more about Reto Bolliger | Chaiz LinkedIn: https://www.linkedin.com/in/reto-bolliger Website: https://www.chaiz.com Learn more about Harm-Julian Schumacher | OneLot LinkedIn: https://www.linkedin.com/in/harm-julian-schumacher Website: https://www.onelot.ph
Today, host Sandy Vance sits down with Jeff McCool, the AVP of Healthcare Conversational AI at Amelia. Join a discussion with SoundHound AI, the leader in conversational intelligence, to learn how AI agents are helping healthcare companies overcome challenges like improving patient care and streamlining operations. Hear how the SoundHound Amelia Platform lets you build AI agents that understand, reason, and act so you can create the most seamless conversational experience. In this episode, they talk about: The types of healthcare organizations Amelia partners with How Amelia's platform approach supports health systems in multiple ways beyond a single tool Working with clients to establish guardrails for safe and effective AI adoption How conversational AI is expected to evolve in the coming years Real-world implementation success stories and lessons learned What differentiates SoundHound AI's agents and the broader ecosystem created through partnerships Advice for healthcare leaders at provider and payer organizations navigating next steps with AI A Little About Jeff: Jeff McCool works at the intersection of healthcare and AI, helping organizations use conversational technology to solve real operational challenges. He is AVP of Healthcare Conversational AI at Amelia, where he partners with health systems to deploy AI-powered virtual agents that improve patient and employee experiences while reducing friction in everyday workflows. His focus is on practical AI adoption, what works in production, how teams implement it, and how to scale responsibly. Previously, Jeff held leadership roles at Ciox and Datavant Health, leading digital growth initiatives centered on interoperability, APIs, and healthcare data exchange. His background combines healthcare operations, technology, and go-to-market strategy. Jeff holds an MBA from UNC Kenan-Flagler Business School and a degree in Banking and Finance from the University of Georgia's Terry College of Business.
The voices telling you it won't work usually belong to people who never tried. Nobody gives you permission to take a chance. You just do it. Chris built a $50K MRR business without a formal education, a tech background, or a plan. While working as an actor, he was paid $400 by a car dealership to be in a commercial, and he thought, "If I can pretend to do this, what happens if I just actually do it?" From there it was teaching himself APIs, webhook integrations, and pushing through enough failures to make most people quit. He's now responsible for 40% of some dealerships' bottom lines, working remotely from Ottawa, heading to Costa Rica. We talked about why people don't take that first step. Chris's take is it's mostly the room you're in. When you move somewhere nobody knows you, the risk calculus changes. The voices telling you you're going to look stupid usually belong to people who never left. We also got into social media: the throttled notification drip sequences designed to keep you coming back, the rage-bait economy, the positive reinforcement loop that rewards the most outrageous behavior. His advice was simple: put your phone down and tackle your life goals head-on. Chris also hosts Bad Hombres TV on YouTube.
Michael Truell, CEO of Cursor, sits down with Patrick Collison, CEO of Stripe and an investor in Anysphere, to talk about Collison's history with Smalltalk and Lisp, the MongoDB and Ruby decisions Stripe still lives with 15 years later, why he'd spend even more time on API design if he could do it over, and whether AI is actually showing up in economic productivity data. This episode originally aired on Cursor's podcast. Resources: Follow Patrick Collison on X: https://twitter.com/patrickc Follow Michael Truell on X: https://twitter.com/mntruell Follow Cursor: https://www.youtube.com/@cursor_ai Stay Updated:Find a16z on YouTube: YouTubeFind a16z on XFind a16z on LinkedInListen to the a16z Show on SpotifyListen to the a16z Show on Apple PodcastsFollow our host: https://twitter.com/eriktorenberg Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Have you ever wondered why "compliance" still gets treated like a slow, spreadsheet-heavy chore, even though the rest of the business is moving at machine speed? In this episode of Tech Talks Daily, I sit down with Matt Hillary, Chief Information Security Officer at Drata, to talk about what actually changes when AI and automation land in the middle of governance, risk, and compliance. Matt brings a rare viewpoint because he lives this day-to-day as "customer zero," running Drata internally while also leading IT, security, GRC, and enterprise apps. We get practical fast. Matt shares how AI-assisted questionnaire workflows can turn a 120-question security assessment from a late-afternoon time sink into something you can complete with confidence in minutes, then still make it upstairs in time for dinner. He also explains how automation flips the audit dynamic by moving from random sampling to continuous, full-population checks, using APIs to validate evidence at scale, without hounding control owners unless something is actually wrong. We also talk about what security leadership really looks like when the stakes rise. Matt reflects on lessons from his time at AWS, why curiosity and adaptability matter when the "canvas" keeps changing, and how customer focus becomes the foundation of trust. That theme runs through the whole conversation, including the idea that the CISO role is steadily turning into a chief trust officer role, where integrity, transparency, and credibility under pressure matter as much as tooling. And because burnout is never far away in security, we dig into the human side too. Matt unpacks how automation can reduce cognitive load, but also warns about swapping one kind of pressure for another, especially when teams get trapped producing endless dashboards and vanity metrics instead of focusing on the few measures that actually reduce risk. 
To wrap things up, Matt leaves a song for the playlist, Illenium's "You're Alive," plus a book recommendation, "Lessons from the Front Lines, Insights from a Cybersecurity Career" by Asaf Karen, which he says stands out for how it treats the human side of security leadership. If you're thinking about modernizing compliance in 2026 without losing the human element, his parting principle is simple and powerful: be intentional, keep asking why, and spend your limited time on what truly matters. So where do you land on this shift toward continuous trust, do you see it becoming the default expectation for buyers and auditors, and what should leaders do now to make sure automation reduces pressure instead of quietly adding more? Share your thoughts with me, I'd love to hear how you're approaching it.
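The shift Matt describes, from random sampling to continuous, full-population evidence checks, can be sketched in a few lines. This is purely illustrative and not Drata's implementation: the `check_evidence` rule, the record fields, and the sample data are all hypothetical stand-ins for evidence an API integration would fetch, but the shape is the point — validate everything, surface only the failures, and leave control owners alone unless something is actually wrong.

```python
# Sketch of continuous full-population auditing vs. random sampling.
# The evidence records and the compliance rule below are hypothetical
# placeholders for data an automation platform would pull over APIs.

def check_evidence(record: dict) -> bool:
    """A stand-in compliance rule: evidence must be recent and approved."""
    return record["age_days"] <= 90 and record["approved"]

def continuous_audit(population: list[dict]) -> list[str]:
    """Check every record in the population; return only the IDs that fail."""
    return [r["id"] for r in population if not check_evidence(r)]

evidence = [
    {"id": "ctrl-001", "age_days": 12,  "approved": True},
    {"id": "ctrl-002", "age_days": 140, "approved": True},   # stale evidence
    {"id": "ctrl-003", "age_days": 30,  "approved": False},  # never approved
]

# Only the exceptions reach a human, instead of a random sample of everything.
print(continuous_audit(evidence))  # ['ctrl-002', 'ctrl-003']
```

The design choice mirrors the audit dynamic discussed in the episode: because the check runs over the whole population on every cycle, "no alert" becomes positive evidence of compliance rather than the luck of the sample.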
This week on Defender Fridays, Farshad Abasi, Founder and CEO of Forward Security and Eureka DevSecOps, discusses how AI can help us set a new standard in app and cloud security. Farshad brings over 27 years of industry experience to the forefront of cybersecurity innovation. His professional journey includes key technical roles at Intel and Motorola, evolving into senior security positions as the Principal Security Architect for HSBC Global, and Head of IT Security for the Canadian division. Farshad's commitment to the field extends to his role as an instructor at BCIT, where he imparts his wealth of knowledge to the next generation of cybersecurity experts. His diverse experience, which spans startups to large enterprises, informs his approach to delivering adaptive and reliable solutions.Engaged actively in the cybersecurity community through roles in BSides Vancouver/MARS, OWASP Vancouver/AppSec PNW, and as a CISSP designate, Farshad's vision and leadership continue to drive the industry forward. Under his guidance, Forward Security is setting new standards in application and cloud security. Learn more at https://www.eurekadevsecops.com/ and https://forwardsecurity.com/Register for Live SessionsJoin us every Friday at 10:30am PT for live, interactive discussions with industry experts. Whether you're a seasoned professional or just curious about the field, these sessions offer an engaging dialogue between our guests, hosts, and you – our audience.Register here: https://limacharlie.io/defender-fridaysSubscribe to our YouTube channel and hit the notification bell to never miss a live session or catch up on past episodes!Sponsored by LimaCharlieThis episode is brought to you by LimaCharlie, a cloud-native SecOps platform where AI agents operate security infrastructure directly. 
Founded in 2018, LimaCharlie provides complete API coverage across detection, response, automation, and telemetry, with multi-tenant architecture designed for MSSPs and MDR providers managing thousands of unique client environments.Why LimaCharlie?Transparency: Complete visibility into every action and decision. No black boxes, no vendor lock-in.Scalability: Security operations that scale like infrastructure, not like procurement cycles. Move at cloud speed.Unopinionated Design: Integrate the tools you need, not just those contracts allow. Build security on your terms.Agentic SecOps Workspace (ASW): AI agents that operate alongside your team with observable, auditable actions through the same APIs human analysts use.Security Primitives: Composable building blocks that endure as tools come and go. Build once, evolve continuously.Try the Agentic SecOps Workspace free: https://limacharlie.ioLearn more: https://docs.limacharlie.ioFollow LimaCharlieSign up for free: https://limacharlie.ioLinkedIn: / limacharlieio X: https://x.com/limacharlieioCommunity Discourse: https://community.limacharlie.com/Host: Maxime Lamothe-Brassard - CEO / Co-founder at LimaCharlie
There's a lethal trifecta of AI risks: access to private data, exposure to untrusted content, and external communication. In this conversation, Risky Business host Patrick Gray chats with Josh Devon, the co-founder of Sondera, about how to best address these risks. There is no magic solution to this problem. AI models mix code and data, are non-deterministic, and are crawling around all over your enterprise data and APIs as you read this. But in this sponsored interview, Josh outlines how we can start to wrap our hands around the problem. This episode is also available on Youtube. Show notes
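The trifecta framing can be made concrete: each capability is tolerable on its own, and the danger appears only when all three meet in a single agent context. A minimal sketch of that gating rule, with capability names invented for illustration (this is not from Sondera or the episode):

```python
# The "lethal trifecta" for AI agents: private-data access, exposure to
# untrusted content, and an external communication channel. Any two can
# be acceptable; all three together enable prompt-injected exfiltration.

PRIVATE_DATA = "private_data"
UNTRUSTED_CONTENT = "untrusted_content"
EXTERNAL_COMMS = "external_comms"

TRIFECTA = {PRIVATE_DATA, UNTRUSTED_CONTENT, EXTERNAL_COMMS}

def is_lethal(capabilities: set[str]) -> bool:
    """True when a single agent context holds all three risky capabilities."""
    return TRIFECTA <= capabilities  # subset test: trifecta fully present?

# An agent that reads internal docs and browses the web, but has no
# outbound channel, stays below the threshold:
print(is_lethal({PRIVATE_DATA, UNTRUSTED_CONTENT}))                   # False

# Grant it the ability to send data out and the combination turns risky:
print(is_lethal({PRIVATE_DATA, UNTRUSTED_CONTENT, EXTERNAL_COMMS}))   # True
```

As the episode stresses, there is no magic fix: real deployments cannot always drop a capability outright, so the practical move is to keep the three from coexisting in one unsupervised context.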
Open banking in the United States has been on a long and winding road, and the journey is far from over. In this episode, I sit down with Steve Boms, Executive Director of FDATA North America, the trade association representing the fintech companies at the heart of the open banking ecosystem. Steve has been one of the most active voices in shaping U.S. open banking policy for over a decade, and he brings a uniquely informed perspective to where things stand today. We dig into the current state of the 1033 rule and what amendments are likely coming, FDATA's firm stance that banks should not be permitted to charge fees for consumer-directed data access, and the growing complexity created by a patchwork of state-level regulations on data privacy, AI, and fintech products. We close with a fascinating discussion on how agentic AI, with its need for clear consent frameworks, robust APIs, and defined liability rules, could become the next major catalyst that finally forces meaningful open banking progress in this country. In this podcast you will learn: The origin story of FDATA in the UK and how it came to the US. How Steve has been involved with the CFPB and Section 1033 since 2015. Over the past 10+ years, how FDATA has been engaged in open banking policy. How open banking and open finance have evolved in the UK. Who their members are and what FDATA does for them. Where we are today when it comes to the 1033 rule. The FDATA view on banks charging fees for access to their data. Why this is not really a bank-versus-fintech fight. Why it may be many years before we have a final rule for open banking. Why data access negotiations have been put on pause for now. What else Steve is working on beyond open banking. Why he is increasingly concerned about the Balkanization of financial services regulation (see his recent Open Banker column). How they coordinate with the other fintech trade associations. How they think about the standardization of APIs and other data standards. Why Steve is optimistic 
about the future of open banking in the U.S. Why AI agents could be a catalyzing force for clear open banking rules. Connect with Fintech One-on-One: Tweet me @PeterRenton Connect with me on LinkedIn Find previous Fintech One-on-One episodes
We're here for a CHIPS Act megapod, in person with Mike Schmidt and Todd Fisher, the director and founding CIO of the CHIPS Program Office, respectively. We discuss… The mechanisms behind the success of the CHIPS Act, What CHIPS can teach us about other industrial policy challenges, like active pharmaceutical ingredients (APIs) and rare earths, What it takes to build a successful industrial policy implementation team, How the fear of “another Solyndra” is holding back US industrial policy, Chris Miller's recent interest in revitalizing America's chemical industry. This post is a collaboration with the Factory Settings Substack: https://www.factorysettings.org/. Subscribe for more insights from former CHIPS Program Office leaders! Suno song link: https://suno.com/s/wwVYK10LfrAD5zK2 Learn more about your ad choices. Visit megaphone.fm/adchoices
Scott and Wes unpack WebMCP, a new standard that lets AI interact with websites through structured tools instead of slow, bot-style clicking. They demo it, debate imperative vs declarative APIs, and share their hottest take: this might be the web's real AI moment. Show Notes 00:00 Welcome to Syntax! 00:16 Introduction to WebMCP 01:07 Understanding WebMCP Functionality. 03:06 Interacting with AI through WebMCP. 06:49 WebMCP browser integration. 08:25 Brought to you by Sentry.io. 08:49 Benefits of WebMCP. 11:51 Token efficiency. 13:02 My biggest questions. 14:13 My take on this tech. Hit us up on Socials! Syntax: X Instagram Tiktok LinkedIn Threads Wes: X Instagram Tiktok LinkedIn Threads Scott: X Instagram Tiktok LinkedIn Threads Randy: X Instagram YouTube Threads
In this episode, I'm joined by Bill Briggs, CTO at Deloitte, for a straight-talking conversation about why so many organizations get stuck in what he calls "pilot purgatory," and what it takes to move from impressive demos to measurable outcomes. Bill has spent nearly three decades helping leaders translate the "what" of new technology into the "so what," and the "now what," and he brings that lens to everything from GenAI to agentic systems, core modernization, and the messy reality of technical debt. We start with a moment of real-world context, Bill calling in from San Francisco with Super Bowl week chaos nearby, and the funny way Waymo selfies quickly turn into "oh, another Waymo" once the novelty fades. That same pattern shows up in enterprise tech, where shiny tools can grab attention fast, while the harder work, data foundations, APIs, governance, and process redesign, gets pushed to the side. Bill breaks down why layering AI on top of old workflows can backfire, including the idea that you can "weaponize inefficiency" and end up paying for it twice, once in complexity and again in compute costs. From there, we get into his "innovation flywheel" view, where progress depends on getting AI into the hands of everyday teams, building trust beyond the C-suite, and embedding guardrails into engineering pipelines so safety and discipline do not rely on wishful thinking. We also dig into technical debt with a framing I suspect will stick with a lot of listeners. Bill explains three types, malfeasance, misfeasance, and non-feasance, and why most debt comes from understandable trade-offs, not bad intent. It leads into a practical discussion on how to prioritize modernization without falling for simplistic "cloud good, mainframe bad" narratives. 
We finish with a myth-busting riff on infrastructure choices, a quick look at what he sees coming next in physical AI and robotics, and a human ending that somehow lands on Beach Boys songs and pinball machines, because tech leadership is still leadership, and leaders are still people. So after hearing Bill's take, where do you think your organization is right now: measurable outcomes, success theater, or somewhere in between? What would you change first? Please share your thoughts. Useful Links Connect With Bill Briggs Deloitte Tech Trends 2026 report Deloitte The State of AI in the Enterprise report
Justin Moon leads the open-source AI initiative at the Human Rights Foundation. Justin on Nostr: https://primal.net/justinmoon Human Rights Foundation: https://hrf.org/program/ai-for-individual-rights/ Easy OpenClaw Deployment: https://clawi.ai/ EPISODE: 191 BLOCK: 936962 PRICE: 1473 sats per dollar (00:01:35) Justin Moon and early show memories (00:03:52) OpenClaw (00:04:16) Agents change how we use computers (00:07:07) OpenClaw's light-bulb moment (00:09:25) Agents as UX glue for Freedom Tech (00:10:00) HRF AI work, self-hosting breakthrough, and running your own stack (00:12:50) AI simplifies hard Bitcoin UX: coin control, backups, photos (00:14:22) OpenClaw + OpenAI: does it matter? (00:16:01) AI leverage for builders: open protocols win (00:19:22) Positive feedback loop: agents and open protocols (00:20:14) Costs vs privacy: local models, token spend, and KYC walls (00:23:15) Local hardware economics and historical parallels (00:27:20) Will capability gaps narrow? Mobile and on-device futures (00:29:56) Cutting-edge vs private setups; data lock-in and training moats (00:31:53) Competition, regulation risks, and hidden capabilities (00:34:05) China's open models: incentives, biases, and global adoption (00:38:56) American and European open models; Big Tech dynamics (00:40:56) Apple, hardware positioning, and agent UX form factors (00:42:48) Google's advantage: data, integration, and vertical stack (00:44:32) Acceleration ahead: productivity leaps and societal shifts (00:45:21) Jobs, layoffs, and disruptive labor realignment (00:47:55) From global commons to gated neighborhoods: bots and slop (00:50:21) Nostr as local internet: webs of trust and bot filters (00:51:57) Cancel culture contagion and shrinking public square (00:54:59) Demographic decentralization and small-town resilience (00:55:00) Lean platforms: X/Twitter staffing as canary (00:56:59) Universal high income: incentives and realism (00:58:48) Prepare your household: seize tools, avoid flat feet (01:01:01) Marmot DMs over Nostr: agents 
need open messaging (01:03:11) Building Pika: encrypted chat and voice over Marmot (01:07:00) Generative UI and real-time media over Nostr (01:10:07) APIs, bans, and why open protocols become the convenient path (01:14:02) Future gates: Bitcoin paywalls, webs of trust, or dystopian KYC (01:17:19) Getting started: try OpenClaw safely and learn by play (01:22:14) Agents, Cashu, and Lightning UX: bots as channel managers (01:25:10) Federations run by machines? Enclaves and AI guardians (01:27:50) Maple, Vora, and bringing self-sovereign AI to mainstream (01:29:00) Security kudos and caveats; Coinbase and cold storage (01:30:02) Justin's education plan and upcoming streams More info on the show: https://citadeldispatch.com Learn more about me: https://odell.xyz
Masks, costumes, confetti, and expensive floats for world politics. As in the Saturnalia and Lupercalia, as in the festivals of the Apis bull, from Mardi Gras in New Orleans to Venice with its masks, samba schools, comparsas, diabladas, the beat of candombe drums, murgas, and everything we can celebrate in this sad reality of discrimination, persecution, violence, and ambition. ECDQEMSD podcast episode 6241, Carnaval Descarnado. Hosts: El Pirata and El Sr. Lagartija. https://canaltrans.com News of the World: Navalny died poisoned - The Pentagon used A.I. to capture Maduro - Ju-ae, Kim Jong-un's daughter - A popularity beyond measure - Arranging the agenda - Therian out of control - The fat of the capitals - The councilwoman's little monkey. Disintegrated Stories: The sensual machine - Too much pressure - In matters of love - Returning the product - In cold Punta Arenas - What he didn't want to hear - She can't dance either? - Kempes in '78 - Error in the registry - Extra-long cigarettes - Original name - Carnival in full swing - The thieves - Breaded almonds - The almond tree - Impossible loves, and more... En Caso De Que El Mundo Se Desintegre - Podcast has no advertising, sponsors, or organizations contributing to keep it on the air. Only the cooperative system of those who contribute through subscriptions makes it possible for all this to remain a reality. Thank you, Golden Dragons!! NO AI: ECDQEMSD Podcast does not use any artificial intelligence directly in its production. Design, scripting, music, editing, and voices are entirely our own human work.