Podcast appearances and mentions of Andrew Ng

American artificial intelligence researcher

  • 204 PODCASTS
  • 272 EPISODES
  • 39m AVG DURATION
  • 1 EPISODE EVERY OTHER WEEK
  • Apr 8, 2025 LATEST



Latest podcast episodes about Andrew Ng

Design of AI: The AI podcast for product teams
AI's Predictive Powers will Change how we Live & Work

Apr 8, 2025 · 49:40


As much as image generation is fun, the real power of GenAI is prediction. The technology operates much like people you might meet:
* Some people have studied a single topic for a decade. They're experts in that topic and can easily infer, correct, and complete tasks, but they're unreliable for everything else.
* Some people are generally knowledgeable and understand many topics well. They aren't experts, but they can reliably assist you in many ways, and they'll also be wrong sometimes.

Frontier models from OpenAI, Anthropic, and others are highly knowledgeable in almost every topic. That's the result of being trained on all accessible information online, data they've licensed, plus data they've allegedly stolen. AI products built on these frontier models are immediately useful for almost any task. But if you build a point solution explicitly trained on proprietary data in a narrow domain, it can reach an expert level.

That was the focus of our conversation with Tyler Hochman, the Founder and CEO of FORE Enterprise. We discussed unlocking AI's predictive power by focusing on expensive, recurring problems, and how any business or founder can leverage specialized data sets to train AI models that deliver powerful prediction capabilities.

Listen on Spotify | Listen on Apple Podcasts | Watch on YouTube

He's built AI-powered software to predict when employees may leave their jobs, offer fashion advice, and help professional sports teams improve performance. This video explains how to train your model using Figma files.

This conversation highlights how important your first-party data will become. That data includes more than just customer data; it should include documented workflows, quantified initiatives, and a matrix of your offerings and capabilities. Anything repeatable should be quantified as a learning tool.

Example of a data collection strategy for AI training

When OpenAI launched a new image generation feature in ChatGPT, everyone jumped on it. AI-generated images in the Studio Ghibli style flooded our feeds. Those images sparked worthy debate about copyright infringement, adding to the ethical concerns about how OpenAI trains its models. A recent study highlighted evidence that ChatGPT is trained on copyrighted works. Given that AI models are running out of data to consume, they need clever ways to access new data sets. Enter ChatGPT's image generation tool and the Ghibli craze: millions of people have been feeding their photos into the model, giving it an entire universe of new training data to improve its image generation capabilities.

Lesson: Collecting user-generated content can give your custom model access to training data that was never available before. This holds true whether your product is a document scanner, a video generator, accounting software, a run-tracking app, or anything else. As we move into the next phase of AI model evolution, the data you have access to may become your best competitive moat, so businesses with access to ethically sourced content from their communities and customers have an advantage.

Thanks for reading Design of AI: Strategies for Product Teams & Agencies!

Future of AI-powered workforces

Yesterday, LinkedIn exploded with screenshots of an internal memo sent by Shopify CEO Tobi Lutke to his teams. It is the most public evidence yet that AI is moving from a toy we experiment with to a critical skill you'll be scored on in your next performance review. The data backs up that AI adoption is surging in workplaces. A study by the Wharton School at the University of Pennsylvania collected data on the use cases AI is most applied to; the report highlighted use cases that businesses and employees rely on daily or weekly. Not so long ago, employees used AI at work in secret. The year-over-year data indicate that AI products are now being adopted at an organizational level.

AI's impact on our lives will be dramatic and potentially dystopian

Stanford's 2025 AI Index Report offers metrics demonstrating the significant leaps AI has made in both performance and usage. The technology has already surpassed human baseline performance on many measures, and its predictive capabilities show in how effectively LLMs perform in clinical diagnosis. It points to a future where every one of us, physicians, educators, factory workers, and beyond, will rely on AI to make more informed decisions.

MUST READ: Futures essay about the future of superintelligence

The AI 2027 essay, written by researchers and journalists, examines what happens at a global level as we approach AI superintelligence. A long and worthy read, it argues that we are much closer to superintelligence than the public may believe and that the snowball effects of achieving it are massive. The authors predict dystopian outcomes unless the world unifies around regulations and safety guidelines. If their predictions are true, we're being distracted by the table stakes of Ghibli image generation and coding tasks. This technology will utterly transform our personal and professional lives. It will give governments immense power over one another, and it will open a Pandora's box of dreams and nightmares. If you want to talk through the implications of these predictions, email us at info@designof.ai. We'll discuss them in detail in our upcoming episode with the authors of the AI Con book and hosts of the Mystery AI Hype Theater 3000 podcast.

Podcast recommendation: The Most Interesting Thing in A.I.

The Atlantic's Nicholas Thompson has started an amazing podcast showcasing strategic topics in AI. Listen to the Andrew Ng episode: it dives into the future of frontier models and the implications of running out of training data (if that happens).

This post is public, so feel free to share it. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit designofai.substack.com

World vs Virus
Beyond the hype, how industries are deploying AI at the heart of their operations

Apr 3, 2025 · 38:43


There was the hype, then the testing; now companies are deploying artificial intelligence at the heart of their operations. We ask one of the world's most prominent AI scientists for his advice to companies, and hear how Siemens is creating the 'brains' to run the factories of the future.

Guests:
Andrew Ng, Managing General Partner of AI Fund and founder of DeepLearning.AI
Cedrik Neike, CEO, Digital Industries, Siemens
Cathy Li, Head, AI, Data and Metaverse, World Economic Forum
Kiva Allgood, Head, Centre for Advanced Manufacturing & Supply Chains, World Economic Forum

Links:
AI in Action: Beyond Experimentation to Transform Industry: https://reports.weforum.org/docs/WEF_AI_in_Action_Beyond_Experimentation_to_Transform_Industry_2025.pdf
Frontier Technologies in Industrial Operations: The Rise of Artificial Intelligence Agents: https://reports.weforum.org/docs/WEF_Frontier_Technologies_in_Industrial_Operations_2025.pdf
Centre for the Fourth Industrial Revolution: https://centres.weforum.org/centre-for-the-fourth-industrial-revolution/home
Centre for Advanced Manufacturing and Supply Chains: https://centres.weforum.org/centre-for-advanced-manufacturing-and-supply-chains/home

Related podcasts:
What's next for generative AI? Three pioneers on their Eureka moments
AI vs Art: Will AI rip the soul out of music, movies and art, or help express our humanity?

Check out all our podcasts on wef.ch/podcasts:
YouTube: https://www.youtube.com/@wef/podcasts
Radio Davos - subscribe: https://pod.link/1504682164
Meet the Leader - subscribe: https://pod.link/1534915560
Agenda Dialogues - subscribe: https://pod.link/1574956552
Join the World Economic Forum Podcast Club: https://www.facebook.com/groups/wefpodcastclub

How I Raised It - The podcast where we interview startup founders who raised capital.
Ep. 297 How I Raised It with Arto Yeritsyan of Podcastle

Mar 20, 2025 · 6:06


Produced by Foundersuite (for startups: www.foundersuite.com) and Fundingstack (for VCs: www.fundingstack.com), "How I Raised It" goes behind the scenes with startup founders and investors who have raised capital. This episode is with Arto Yeritsyan of Podcastle.ai, a startup using AI to help podcasters create professional-quality audio and video content. Learn more at podcastle.ai. In this episode, Arto shares his journey building and fundraising in Armenia, how he used ChatGPT to research investors and find the right person at each VC firm, how he got a 90% conversion rate from sending a deck to securing a pitch meeting, how he built a relationship with Point Nine Capital even after they said "no," how he used early investors as part of his deal team, and more. Podcastle most recently raised $13.5M in a Series A funding round led by Mosaic Ventures, with participation from existing Podcastle investors RTP Global, Point Nine, Sierra Ventures, and Andrew Ng's AI Fund. The CEOs of Squarespace and Moonbug Media also participated in the round. How I Raised It is produced by Foundersuite, makers of software to raise capital and manage investor relations. Foundersuite's customers have raised over $21 billion since 2016. If you are a startup, create a free account at www.foundersuite.com. If you are a VC, venture studio or investment banker, check out our new platform, www.fundingstack.com

No Hay Derecho
Andrew Ng on No Hay Derecho with Glatzer Tuesta [20-03-2025]

Mar 20, 2025 · 10:34


Andrew Ng, political and public affairs counsellor at the Embassy of Canada in Peru, talks with Glatzer Tuesta in the Cultural Block of Ideeleradio's No Hay Derecho. No Hay Derecho airs live Monday through Friday from 7 a.m. on Ideeleradio's YouTube and Facebook channels.

Startupeable
How to Validate AI Startups, Differentiate Superficial Applications & Build Competitive Advantages in Latin America | Carlos Alzate, AI Fund

Mar 12, 2025 · 52:22


Today I spoke with Carlos Alzate, CTO of AI Fund, a venture studio founded by Andrew Ng that builds and invests in artificial intelligence startups. Carlos has 20+ years of experience and a PhD in AI, and previously worked at IBM Research, where he took part in pioneering AI projects such as Project Debater and Watson.

Please help me out by leaving a review on Spotify or Apple Podcasts: https://ratethispodcast.com/startupeable

AI Fund has raised $175M and has invested in 30+ startups to date, including 10Web, Baseten, and Podcastle. Today Carlos and I talked about:
Why AI should be the "last resort" for solving problems
How to evaluate whether a "ChatGPT wrapper" really adds value
The importance of proprietary data as a competitive advantage
The concept of "human in the loop" for effective AI systems
The unique challenges of AI adoption in Latin America

Episode notes: https://startupeable.com/ai-fund
For more content, follow us on: YouTube | Website
Distributed by Genuina Media

ABOUT THAT WALLET
286: [Sumedha Rai] AI Review episode

Feb 17, 2025 · 22:27


What are your thoughts on these AI reviews? Leave a comment!

Get ready to explore the mind-blowing world of artificial intelligence! On this episode of About That Wallet, host Anthony Weaver chats with the brilliant Sumedha Rai, whose journey is nothing short of incredible, from her early days at the Central Bank of India to mastering the complexities of AI at NYU. She's not just talking about AI; she's building it, developing groundbreaking solutions that are transforming industries from finance to healthcare. Join us as we delve into Sumedha's fascinating story and uncover the secrets of AI innovation.

Listeners will gain insight into how AI is reshaping the workforce and the ethical considerations that come with it. Sumedha emphasizes the importance of human oversight in AI applications, especially in critical areas such as medical decision-making and loan approvals, where biases in data can have serious consequences. She advocates for a future where AI acts as a partner to humans, enhancing our capabilities rather than replacing them.

Throughout the episode, Sumedha shares her journey and offers practical advice for those looking to enter the field of AI. She highlights the significance of a solid foundation in mathematics and programming, encouraging listeners to embrace lifelong learning and curiosity. Pointing to resources like Andrew Ng's courses and the value of meaningful conversations at conferences, Sumedha inspires everyone to become an active participant in the evolving AI landscape.

As the discussion unfolds, the conversation turns to the potential downsides of AI, including issues of copyright and data privacy. Sumedha stresses the need for responsible AI development that prioritizes fairness and transparency, ensuring that technology uplifts communities rather than exacerbating inequalities. In closing, Sumedha reflects on her commitment to using AI for social good, advocating for more women in tech and for diverse perspectives in shaping the future of AI. This episode serves as a powerful reminder that the future of AI is not just about technology; it's about the values we instill in it and the impact it can have on our society.

The Secret Sauce
TSS829 Global Trends Recap from Davos 2025: Trump Alone Rattles the Whole World

Feb 7, 2025 · 49:27


Open this podcast episode on YouTube for the best viewing experience. The World Economic Forum (WEF) is a global meeting that brings together leaders from the private sector, governments, and the next generation to jointly shape the direction of the world's future. This year the spotlight was on Donald Trump, the new US president, who sent shockwaves through the meeting by declaring positions that forced national leaders to rethink their strategies and political cards entirely. Meanwhile, AI is no longer just a concept: Andrew Ng pointed to real-world applications that are reshaping the business world. Environmental issues and climate finance were also hotly debated, with Ray Dalio and Dilhan Pillay Sandrasegara, CEO of Temasek, jointly seeking new solutions for a sustainable future. All of this reflects a world facing major ripples, where every dimension of the economy, politics, and technology is inescapably interconnected. In this episode of The Secret Sauce, Ken Nakarin takes you deep inside Davos, Switzerland, summarizing the key insights world leaders share and revealing the directions that are changing, so you can prepare to seize opportunities and meet the challenges of 2025.

David Bombal
#490: How To Learn AI in 2025 (If I Started Over)

Jan 20, 2025 · 46:27


Big thanks to Brilliant for sponsoring this video! To try everything Brilliant has to offer free for a full 30 days and get a 20% discount, visit: https://Brilliant.org/DavidBombal

// Mike SOCIAL //
X: / _mikepound
Website: https://www.nottingham.ac.uk/research...

// YouTube video reference //
Teach your AI with Dr Mike Pound (Computerphile): • Train your AI with Dr Mike Pound (Com...
Has Generative AI Already Peaked? - Computerphile: • Has Generative AI Already Peaked? - C...

// Courses Reference //
Deep Learning: https://www.coursera.org/specializati...
AI For Everyone by Andrew Ng: https://www.coursera.org/learn/ai-for...
Pytorch Tutorials: https://pytorch.org/tutorials/
Pytorch Github: https://github.com/pytorch/pytorch
Pytorch Tensors: https://pytorch.org/tutorials/beginne... https://pytorch.org/tutorials/beginne... https://pytorch.org/tutorials/beginne...
Python for Everyone: https://www.py4e.com/

// BOOK //
Deep Learning by Ian Goodfellow, Yoshua Bengio and Aaron Courville: https://amzn.to/3vmu4LP

// PyTorch //
Github: https://github.com/pytorch
Website: https://pytorch.org/
Documentation: / pytorch

// David's SOCIAL //
Discord: discord.com/invite/usKSyzb
Twitter: www.twitter.com/davidbombal
Instagram: www.instagram.com/davidbombal
LinkedIn: www.linkedin.com/in/davidbombal
Facebook: www.facebook.com/davidbombal.co
TikTok: tiktok.com/@davidbombal

// MY STUFF //
https://www.amazon.com/shop/davidbombal

// SPONSORS //
Interested in sponsoring my videos? Reach out to my team here: sponsors@davidbombal.com

// MENU //
0:00 - Coming Up
0:43 - Introduction
01:04 - State of AI in 2025
02:10 - AGI Hype: Realistic Expectations
03:15 - Sponsored Section
04:30 - Is AI Plateauing or Advancing?
06:26 - Overhype in AI Features Across Industries
08:01 - Is It Too Late to Start in AI?
09:16 - Where to Start in 2025
10:20 - Recommended Courses and Progression Paths
13:26 - Should I Go to School for AI?
14:18 - Learning AI Independently with Resources Online
17:24 - Machine Learning Progression
19:09 - What is a Notebook?
20:10 - Is AI the Top Skill to Learn in 2025?
23:49 - Other Niches and Fields
25:05 - Cyber Using AI
26:31 - AI on Different Platforms
27:13 - AI isn't Needed Everywhere
29:57 - Leveraging AI
30:35 - AI as a Productivity Tool
31:55 - Retrieval Augmented Generation
33:28 - Concerns About Privacy with AI
36:01 - The Difference Between GPUs, CPUs, NPUs, etc.
37:30 - The Release of Sora
38:56 - Will AI Take Our Jobs?
41:00 - Nvidia Says We Don't Need Developers
43:47 - Devin Announcement
44:59 - Conclusion

Please note that links listed may be affiliate links and provide me with a small percentage/kickback should you use them to purchase any of the items listed or recommended. Thank you for supporting me and this channel!
Disclaimer: This video is for educational purposes only.

Don't Stop Us Now! Podcast
Working in the Future - Helen Mayhew

Dec 8, 2024 · 34:35


This week's episode of Don't Stop Us Now, AI edition, features a fascinating guest with a box seat for understanding what's going on with AI in businesses around the world. Helen Mayhew is a McKinsey & Company Partner and one of the leaders of its AI division, QuantumBlack. A Cambridge graduate, Helen has deep data analytics and AI expertise; her day job is to guide leading organisations on their advanced analytics journeys and on AI innovation and implementation. In this episode, Helen covers everything from the broad spectrum of initiatives and use cases different businesses are and will be trying, to how radically different our roles are likely to be in future. Helpfully, she shares some of the key skills we'll all need to remain valuable, and she reveals her belief that almost every workday process and every person's day will be reimagined and done differently thanks to AI. Helen also shares some striking research predictions about the future, in case you were in any doubt about AI's coming impact on us all. For example, 30-40% of all tasks done at work today won't need to exist in the future. Yes, you read that right: according to this research, 30 to 40% of what we humans do today will be replaced by AI! Helen is really good at explaining things clearly and bringing a variety of AI use cases to life. She also shares some of her favourite AI learning resources, which you can find in the links below. This is an unmissable episode, so learn what's coming your way with the ever-curious and super-smart Helen Mayhew.

Useful Links:
Helen Mayhew LinkedIn
QuantumBlack website
McKinsey website
ChatGPT
Gemini
Stable Diffusion
Microsoft AI Learning Hub
Fast AI / AI for Everyone course
DeepLearning.AI founder and Coursera co-founder Andrew Ng's courses
Practical AI podcast
Lex Fridman podcast

Hosted on Acast. See acast.com/privacy for more information.

Ingenios@s de Sistemas
Episode 353 - Iterative Thinking and Creativity

Dec 8, 2024 · 35:36


In today's episode we explore two key pillars of software development: iterative thinking and creativity in programming. We'll discover how these approaches not only help us constantly improve our applications, but also create innovative solutions that truly stand out. Get ready for a tour full of practical strategies and tips for turning your projects into something unique and functional.

News:
Elon Musk seeks to block OpenAI's for-profit transition
DeepMind proposes 'Socratic' learning for AI self-improvement
World Labs unveils explorable AI-generated 3D worlds
Hume launches a new AI-based voice customization tool
Amazon introduces the new Nova family of AI models
Tencent introduces a powerful open-source video generation AI model
OpenAI's Sam Altman announces the "12 Days of OpenAI" event and shares new perspectives on AI
DeepMind's Genie 2 turns images into playable worlds
OpenAI introduces the full o1 model and the new Pro mode
Clone Robotics unveils a lifelike humanoid with synthetic organs

Tools:
Pine: reduce your bills, cancel subscriptions, and resolve customer service issues. LINK
Cogent: personal tutor for studying. LINK
AutoFlow Studio: simplify end-to-end testing, no code required. LINK
Cades: simplifies mobile app development, from planning to publishing. LINK
Sparkbase: sales agent that combines B2B data with real-time web signals. LINK
TwinMind: AI sidebar that listens, sees your tabs, and proactively helps users. LINK
Socap AI: AI networking copilot for entrepreneurs. LINK
Toolhouse: cloud infrastructure for equipping LLMs with actions and knowledge in just three lines of code. LINK
Lune AI: marketplace of individual expert LLMs built around technical topics. LINK
AgentSpace: build AI-powered websites and apps. LINK
Nfig AI: API that lets AI agents browse, click, and perform tasks on the web. LINK
Faang: build a personalized AI interviewer. LINK
KushoAI: AI agent for generating API tests in Google Sheets. LINK
Agora: AI search engine for e-commerce products. LINK
Magic Roll: create viral shorts in one click. LINK
OfferGenie: career assistant for standing out in every interview. LINK
Foundry: build, evaluate, and improve AI agents that can automate your business. LINK
Replicate consistent-character: create images of any given character in different poses. LINK
aisuite: an open-source Python package created by Andrew Ng that makes it easy for developers to use LLMs. LINK
Muku AI: AI influencer agency. LINK
DataFuel: turn websites into LLM-ready data. LINK
Elastyc: match talent with job openings in seconds. LINK
Kroto: record and translate video guides into 60+ languages with AI. LINK
Voiser AI: transcribe, summarize, and translate videos and recordings. LINK
Boost.Space 4.0: buy and sell workflows. LINK
AgentPlace: build AI-powered websites and applications. LINK
Supabase: a global AI assistant with development capabilities. LINK
Realtime AI: keep users updated on task progress in real time. LINK
Roster: hiring platform for content creators. LINK
Hypelist: discover personalized recommendations. LINK
Coval: build reliable voice and chat agents. LINK
Pollo AI: create videos from text prompts. LINK
Plot: unlock detailed consumer insights from social media videos. LINK
Focu: transform your relationship with work through AI-driven guidance. LINK
SDRx: AI that builds targeted lists, does account research, and writes personalized emails. LINK
Athina: an AI development platform. LINK

Sign up for the academy. Telegram channel and YouTube channel. Questions via WhatsApp: +34 620 240 234
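One tool in the list above, aisuite, routes a single OpenAI-style chat call to different LLM backends by addressing models with a provider-prefixed identifier such as `openai:gpt-4o`. The plain-Python sketch below illustrates only that naming convention; it does not use the aisuite package itself, and the model names are just illustrative examples:

```python
def split_model_id(model_id: str) -> tuple[str, str]:
    """Split a provider-prefixed model id such as 'openai:gpt-4o'
    into (provider, model) -- the convention aisuite uses to route
    one chat-completion call to different LLM backends."""
    provider, sep, model = model_id.partition(":")
    if not sep or not provider or not model:
        raise ValueError(f"expected 'provider:model', got {model_id!r}")
    return provider, model

# Example identifiers; the part before the colon selects the backend.
print(split_model_id("openai:gpt-4o"))
print(split_model_id("anthropic:claude-3-5-sonnet-20240620"))
```

With the real library, the same identifier is passed unchanged to its OpenAI-compatible client, which dispatches the request to the matching provider SDK.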

Ingenios@s de Sistemas
Episode 352 - Debugging

Dec 1, 2024 · 44:41


In today's episode we dive into the exciting, and sometimes frustrating, world of debugging in programming, an essential skill for any developer or aspiring Software Composer. We'll explore best practices, tools, and strategies for identifying and solving problems in code efficiently.

News:
A small AI robot stages a rebellion in a showroom
Doctolib launches an artificial intelligence solution for real-time medical consultations
Runway introduces its powerful image generation model 'Frames'
Anthropic launches a universal AI connection system
OpenAI's Sora video model is leaked
Former Android leaders launch an operating-system startup for AI agents
Alibaba challenges o1 with an open-source AI reasoning model
AI outperforms experts at predicting scientific results
Amazon develops an AI model called Olympus, focused on advanced video analysis
Tesla's Optimus robot receives a major upgrade to its hand

Tools:
ChatROI: plan, launch, and optimize ads like a pro with AI-driven campaign automation. LINK
Pine: reduce your bills, cancel subscriptions, and resolve customer service issues with AI. LINK
Cogent: personal tutor with AI-powered study tools. LINK
AutoFlow Studio: simplify end-to-end testing with AI-driven QA, no code required. LINK
Cades: AI-powered platform that simplifies mobile app development. LINK
Sparkbase: AI sales agent that combines B2B data with real-time web signals for automated sales calls. LINK
TwinMind: AI sidebar that listens, sees your tabs, and proactively helps users. LINK
Socap AI: AI networking copilot for entrepreneurs. LINK
Toolhouse: cloud infrastructure for equipping LLMs with actions and knowledge in just three lines of code. LINK
Lune AI: community-driven marketplace of individual expert LLMs built around technical topics. LINK
AgentSpace: build AI-powered websites and apps with simple text instructions. LINK
Nfig AI: API that lets AI agents browse, click, and perform tasks on the web. LINK
Faang: build a personalized AI interviewer that adapts to your unique style and needs. LINK
KushoAI: AI agent for generating API tests directly in Google Sheets. LINK
Agora: AI search engine for e-commerce products with quick, easy checkout. LINK
Magic Roll: create viral shorts in one click, with supporting footage, motion graphics, and AI-powered subtitles. LINK
OfferGenie: AI-powered career assistant with real-time guidance for standing out in every interview. LINK
Runway Frames: a new foundation model for image generation with precise control of style and visual structure. LINK
Foundry: build, evaluate, and improve AI agents that can automate key parts of your business. LINK
Llms.txt Generator: generate an llms.txt file for your website to provide information that helps LLMs use your site. LINK
Hume + Anthropic Computer Use: lets developers build applications that control a computer using only their voice. LINK
snappy retro: create a retro board in seconds, share the URL, and collaborate in real time. LINK
ElevenLabs GenFM: generate personal podcasts from PDFs, articles, eBooks, links, or text in 32 languages. LINK
Replicate consistent-character: create images of any given character in different poses. LINK
aisuite: an open-source Python package created by Andrew Ng that makes it easy for developers to use LLMs from multiple providers. LINK

Sign up for the academy. Telegram channel and YouTube channel. Questions via WhatsApp: +34 620 240 234

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

We are recording our next big recap episode and taking questions! Submit questions and messages on Speakpipe here for a chance to appear on the show! Also subscribe to our calendar for our Singapore, NeurIPS, and all upcoming meetups!

In our first ever episode with Logan Kilpatrick we called out the two hottest LLM frameworks at the time: LangChain and Dust. We've had Harrison from LangChain on twice (as a guest and as a co-host), and we've now finally come full circle as Stanislas from Dust joined us in the studio. After stints at Oracle and Stripe, Stan had joined OpenAI to work on mathematical reasoning capabilities. He describes his time at OpenAI as "the PhD I always wanted to do" while acknowledging the challenges of research work: "You're digging into a field all day long for weeks and weeks, and you find something, you get super excited for 12 seconds. And at the 13 seconds, you're like, 'oh, yeah, that was obvious.' And you go back to digging." This experience, combined with early access to GPT-4's capabilities, shaped his decision to start Dust: "If we believe in AGI and if we believe the timelines might not be too long, it's actually the last train leaving the station to start a company. After that, it's going to be computers all the way down."

The History of Dust

Dust's journey can be broken down into three phases:
* Developer Framework (2022): Initially positioned as a competitor to LangChain, Dust started as a developer tooling platform. While both were open source, their approaches differed: LangChain focused on broad community adoption and integration as a pure developer experience, while Dust emphasized UI-driven development and observability that went beyond `print` statements.
* Browser Extension (Early 2023): The company pivoted to building XP1, a browser extension that could interact with web content.
This experiment helped validate user interaction patterns with AI, even while using less capable models than GPT-4.
* Enterprise Platform (Current): Today, Dust has evolved into an infrastructure platform for deploying AI agents within companies, with impressive metrics like 88% daily active users in some deployments.

The Case for Being Horizontal

The big discussion for early stage companies today is whether to be horizontal or vertical. Since models are so good at general tasks, many companies are building vertical products that handle a workflow end-to-end in order to offer more value, becoming more "Services as Software". Dust, on the other hand, is a platform for users to build their own experiences, which has had a few advantages:
* Maximum Penetration: Dust reports 60-70% weekly active users across entire companies, demonstrating the potential reach of horizontal solutions rather than selling into a single team.
* Emergent Use Cases: By allowing non-technical users to create agents, Dust enables use cases to emerge organically from actual business needs rather than prescribed solutions.
* Infrastructure Value: The platform approach creates lasting value through maintained integrations and connections, similar to how Stripe's value lies in maintaining payment infrastructure. Rather than relying on third-party integration providers, Dust maintains its own connections to ensure proper handling of different data types and structures.

The Vertical Challenge

However, this approach comes with trade-offs:
* Harder Go-to-Market: As Stan put it: "We spike at penetration... but it makes our go-to-market much harder. Vertical solutions have a go-to-market that is much easier because they're like, 'oh, I'm going to solve the lawyer stuff.'"
* Complex Infrastructure: Building a horizontal platform requires maintaining numerous integrations and handling diverse data types appropriately, from structured Salesforce data to unstructured Notion pages.
As you scale integrations, the cost of maintaining them also scales. * Product Surface Complexity: Creating an interface that's both powerful and accessible to non-technical users requires careful design decisions, down to avoiding technical terms like "system prompt" in favor of "instructions." The Future of AI PlatformsStan initially predicted we'd see the first billion-dollar single-person company in 2023 (a prediction later echoed by Sam Altman), but he's now more focused on a different milestone: billion-dollar companies with engineering teams of just 20 people, enabled by AI assistance.This vision aligns with Dust's horizontal platform approach – building the infrastructure that allows small teams to achieve outsized impact through AI augmentation. Rather than replacing entire job functions (the vertical approach), they're betting on augmenting existing workflows across organizations.Full YouTube EpisodeChapters* 00:00:00 Introductions* 00:04:33 Joining OpenAI from Paris* 00:09:54 Research evolution and compute allocation at OpenAI* 00:13:12 Working with Ilya Sutskever and OpenAI's vision* 00:15:51 Leaving OpenAI to start Dust* 00:18:15 Early focus on browser extension and WebGPT-like functionality* 00:20:20 Dust as the infrastructure for agents* 00:24:03 Challenges of building with early AI models* 00:28:17 LLMs and Workflow Automation* 00:35:28 Building dependency graphs of agents* 00:37:34 Simulating API endpoints* 00:40:41 State of AI models* 00:43:19 Running evals* 00:46:36 Challenges in building AI agents infra* 00:49:21 Buy vs. build decisions for infrastructure components* 00:51:02 Future of SaaS and AI's Impact on Software* 00:53:07 The single employee $1B company race* 00:56:32 Horizontal vs. vertical approaches to AI agentsTranscriptAlessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. 
This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.

Swyx [00:00:11]: Hey, and today we're in a studio with Stanislas, welcome.

Stan [00:00:14]: Thank you very much for having me.

Swyx [00:00:16]: Visiting from Paris.

Stan [00:00:17]: Paris.

Swyx [00:00:18]: And you have had a very distinguished career. It's very hard to summarize, but you went to college in both École Polytechnique and Stanford, and then you worked in a number of places, Oracle, Totems, Stripe, and then OpenAI pre-ChatGPT. We'll spend a little bit of time on that. About two years ago, you left OpenAI to start Dust. I think you were one of the first OpenAI alum founders.

Stan [00:00:40]: Yeah, I think it was about at the same time as the Adept guys, so that first wave.

Swyx [00:00:46]: Yeah, and people really loved our David episode. We love a few sort of OpenAI stories, you know, from back in the day, like we were talking about pre-recording. Probably the statute of limitations on some of those stories has expired, so you can talk a little bit more freely without them coming after you. But maybe we'll just talk about, like, what was your journey into AI? You know, you were at Stripe for almost five years, there are a lot of Stripe alums going into OpenAI. I think the Stripe culture has come into OpenAI quite a bit.

Stan [00:01:11]: Yeah, so I think the buses of Stripe people really started flowing in, I guess, after ChatGPT. But, yeah, my journey into AI is a... I mean, Greg Brockman. Yeah, yeah. From Greg, of course. And Daniela, actually, back in the days, Daniela Amodei.

Swyx [00:01:27]: Yes, she was COO, I mean, she is COO, yeah. She had a pretty high job at OpenAI at the time, yeah, for sure.

Stan [00:01:34]: My journey started as anybody else's: you're fascinated with computers and you want to make them think. It's awesome, but it doesn't work. I mean, it was a long time ago, I was like maybe 16, so it was 25 years ago.
Then the first big exposure to AI would be at Stanford, and I'm going to, like, disclose how old I am, because at the time it was a class taught by Andrew Ng, and there was no deep learning. It was hand-crafted features for vision and the A* algorithm. So it was fun. But it was the early days of deep learning. A few years after, I think, there was the first project at Google, you know, that cat face or the human face trained from many images. I hesitated doing a PhD, more in systems, and eventually decided to get a job. Went to Oracle, started a company, did a gazillion mistakes, got acquired by Stripe, worked with Greg Brockman there. And at the end of Stripe, I started interesting myself in AI again, felt like it was the time. You had the Atari games, you had the self-driving craziness at the time. And I started exploring projects. It felt like the Atari games were incredible, but they were still games. And I was looking into exploring projects that would have an impact on the world. And so I decided to explore three things: self-driving cars, cybersecurity and AI, and math and AI. I'm listing them by decreasing order of impact on the world, I guess.

Swyx [00:03:01]: Discovering new math would be very foundational.

Stan [00:03:03]: It is extremely foundational, but it's not as direct as driving people around.

Swyx [00:03:07]: Sorry, you were doing this at Stripe, you were like thinking about your next move.

Stan [00:03:09]: No, it was at Stripe, kind of a bit of time where I started exploring. I did a bunch of work with friends on trying to get RC cars to drive autonomously. Almost started a company in France or Europe about self-driving trucks. We decided to not go for it because it was probably very operational. And I think the idea of the company, of the team, wasn't there. And also I realized that if I woke up one day and, because of a bug I wrote, I had killed a family, it would be a bad experience.
And so I just decided like, no, that's just too crazy. And then I explored cybersecurity with a friend. We were trying to apply transformers to code fuzzing. So with code fuzzing, you have kind of an algorithm that goes really fast and tries to mutate the inputs of a library to find bugs. And we tried to apply a transformer to that and do reinforcement learning with the signal of how much you propagate within the binary. Didn't work at all because the transformers are so slow compared to evolutionary algorithms that it kind of didn't work. Then I got interested in math and AI and started working on SAT solving with AI. And at the same time, OpenAI was kind of starting the reasoning team that was tackling that project as well. I was in touch with Greg and eventually got in touch with Ilya and finally found my way to OpenAI. I don't know how much you want to dig into that. The way to find your way to OpenAI when you're in Paris was kind of an interesting adventure as well.

Swyx [00:04:33]: Please. And I want to note, this was a two-month journey. You did all this in two months.

Stan [00:04:38]: The search.

Swyx [00:04:40]: Your search for your next thing, because you left in July 2019 and then you joined OpenAI in September.

Stan [00:04:45]: I'm going to be ashamed to say that.

Swyx [00:04:47]: You were searching before.

Stan [00:04:49]: I was searching before. I mean, it's normal. No, the truth is that I moved back to Paris through Stripe and I just felt the hardship of being remote from your team nine hours away. And so it kind of freed a bit of time for me to start the exploration before. Sorry, Patrick. Sorry, John.

Swyx [00:05:05]: Hopefully they're listening. So you joined OpenAI from Paris and, like, obviously you had worked with Greg, but not anyone else.

Stan [00:05:13]: No. Yeah.
So I had worked with Greg, but not Ilya, but I had started chatting with Ilya, and Ilya was kind of excited because he knew that I was a good engineer through Greg, I presume, but I was not a trained researcher, didn't do a PhD, never did research. And I started chatting and he was excited all the way to the point where he was like, hey, come pass interviews, it's going to be fun. I think he didn't care where I was, he just wanted to try working together. So I go to SF, go through the interview process, get an offer. And so I get Bob McGrew on the phone for the first time, and he's like, hey, Stan, it's awesome. You've got an offer. When are you coming to SF? I'm like, hey, it's awesome, but I'm not coming to SF. I'm based in Paris and we just moved. He was like, hey, it's awesome. Well, you don't have an offer anymore. Oh, my God. No, it wasn't as hard as that. But that's basically the idea. And it took maybe a couple more weeks of chatting, and they eventually decided to try a contractor setup. And that's how I kind of started working at OpenAI, officially as a contractor, but in practice it really felt like being an employee.

Swyx [00:06:14]: What did you work on?

Stan [00:06:15]: It was solely focused on math and AI, and in particular the study of large language models' mathematical reasoning capabilities, in particular in the context of formal mathematics. The motivation was simple: transformers are very creative, but yet they make mistakes. Formal math systems have the ability to verify a proof, but the tactics they can use to solve problems are very mechanical, so you miss the creativity. And so the idea was to try to explore both together. You would get the creativity of the LLMs and the verification capabilities of the formal system.
A formal system, just to give a little bit of context, is a system in which a proof is a program and the formal system is a type system, a type system that is so evolved that you can verify the program. If the type checks, it means that the program is correct.

Swyx [00:07:06]: Is the verification much faster than actually executing the program?

Stan [00:07:12]: Verification is instantaneous, basically. The truth is that what you code involves tactics that may involve computation to search for solutions. So it's not instantaneous. You do have to do the computation to expand the tactics into the actual proof. The verification of the proof at the very low level is instantaneous.

Swyx [00:07:32]: How quickly do you run into, you know, halting problem, P vs. NP type things, like impossibilities?

Stan [00:07:39]: I mean, you don't run into it. At the time, it was really trying to solve very easy problems. So I think the... Can you give an example of easy? Yeah, so that's the MATH benchmark that everybody knows today. The Dan Hendrycks one. The Dan Hendrycks one, yeah. And I think it was the low-end part of the MATH benchmark at the time, because that MATH benchmark includes AMC problems, AMC 8, AMC 10, AMC 12. So these are the easy ones. Then AIME problems, somewhat harder, and some IMO problems, like crazy hard.

Swyx [00:08:07]: For our listeners, we covered this in our Benchmarks 101 episode. AMC is literally the grade of like high school, grade 8, grade 10, grade 12. So you can solve this. Just briefly to mention this, because I don't think we'll touch on this again. There's a bit of work with Lean, and then, you know, more recently with DeepMind scoring silver on the IMO. Any commentary on how math has evolved from your early work to today?

Stan [00:08:34]: I mean, that result is mind-blowing. I mean, from my perspective, I spent three years on that.
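To make the proof-as-program idea Stan describes concrete (an illustrative sketch, not from the episode): in a system like Lean, a theorem statement is a type and a proof is a term of that type, so checking the proof reduces to type-checking the term.

```lean
-- A proof is a program: the theorem statement is the type,
-- and the proof term is a program of that type.
-- If this term type-checks, the theorem is verified.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b  -- reuse the library lemma as the proof term
```

The kernel never "runs" the proof in the usual sense; it only checks that the term has the stated type, which is why verification is essentially instantaneous even when finding the proof was expensive.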
At the same time, Guillaume Lample in Paris, we were both in Paris, actually. He was at FAIR, working on some problems. We were pushing the boundaries, and the goal was the IMO. And we cracked a few problems here and there. But the idea of getting a medal at an IMO was just remote. So this is an impressive result. And I think the DeepMind team just did a good job of scaling. I think there's nothing too magical in their approach, even if it hasn't been published. There's a David Silver talk from seven days ago where he goes a little bit into more detail. It feels like there's nothing magical there. It's really applying reinforcement learning and scaling up the amount of data they can generate through autoformalization. So we can dig into what autoformalization means if you want.

Alessio [00:09:26]: Let's talk about the tail end, maybe, of the OpenAI period. So you joined, and you're like, I'm going to work on math and do all of these things. I saw in one of your blog posts, you mentioned you fine-tuned over 10,000 models at OpenAI using 10 million A100 hours. How did the research evolve from GPT-2, and then getting closer to DaVinci 003? And then you left just before ChatGPT was released, but tell people a bit more about the research path that took you there.

Stan [00:09:54]: I can give you my perspective of it. I think at OpenAI, there's always been a large chunk of the compute that was reserved to train the GPTs, which makes sense. So this was pre-Anthropic split. Most of the compute was going to a project called Nest, which was basically GPT-3. And then you had a bunch of, let's say, remote, not core research teams that were trying to explore maybe more specific problems or maybe the algorithm part of it. The interesting part, I don't know if it was where your question was going, is that in those labs, you're managing researchers. So by definition, you shouldn't be managing them.
But in that space, there's a managing tool that is great, which is compute allocation. Basically, by managing the compute allocation, you can message the team where you think the priority should go. And so it was really a question of: you were free as a researcher to work on whatever you wanted, but if it was not aligned with OpenAI's mission, and that's fair, you wouldn't get the compute allocation. As it happens, solving math was very much aligned with the direction of OpenAI, and so I was lucky to generally get the compute I needed to make good progress.

Swyx [00:11:06]: What do you need to show as incremental results to get funded for further results?

Stan [00:11:12]: It's an imperfect process because there's a bit of a... If you're working on math and AI, obviously there's kind of a prior that it's going to be aligned with the company. So it's much easier than going into something much riskier, I guess. You have to show incremental progress. You ask for a certain amount of compute, you deliver a few weeks after, and you demonstrate that you have progress. Progress might be a positive result. Progress might be a strong negative result. And a strong negative result is actually often much harder to get, and much more interesting, than a positive result. And then it generally goes into, as in any organization, you would have people finding your project, or any other project, cool and fancy. And so you would have that kind of phase of growing compute allocation for it, all the way to a point. And then maybe you reach an apex, and then maybe you go back mostly to zero and restart the process, because you're going in a different direction or something else. That's how I felt. Explore, exploit. Yeah, exactly. Exactly. Exactly.
It's a reinforcement learning approach.

Swyx [00:12:14]: Classic PhD student search process.

Alessio [00:12:17]: And you were reporting to Ilya, like the results you were kind of bringing back to him, or like, what was the structure? It's almost like when you're doing such cutting-edge research, you need to report to somebody who is actually smart enough to understand that the direction is right.

Stan [00:12:29]: So we had a reasoning team, which was working on reasoning, obviously, and so math in general. And that team had a manager, but Ilya was extremely involved in the team as an advisor, I guess. Since he brought me into OpenAI, I was lucky, mostly during the first years, to have kind of direct access to him. He would really coach me as a trainee researcher, I guess, with good engineering skills. And Ilya, I think at OpenAI, he was the one showing the North Star, right? It was his job, and I think he really enjoyed it and did it super well: going through the teams and saying, this is where we should be going, and trying to, you know, flock the different teams together towards an objective.

Swyx [00:13:12]: I would say like the public perception of him is that he was the strongest believer in scaling. Oh, yeah. Obviously, he has always pursued the compression thesis. You have worked with him personally, what does the public not know about how he works?

Stan [00:13:26]: I think he's really focused on building the vision and communicating the vision within the company, which was extremely useful. I was personally surprised that he spent so much time, you know, working on communicating that vision and getting the teams to work together versus...

Swyx [00:13:40]: To be specific, vision is AGI? Oh, yeah.

Stan [00:13:42]: Vision is like, yeah, it's the belief in compression and scaling compute.
I remember when I started working on the reasoning team, the excitement was really about scaling the compute around reasoning, and that was really the belief we wanted to ingrain in the team. And that's what has been useful to the team, and the DeepMind results show that it was the right approach, and the success of GPT-4 and stuff shows that it was the right approach.

Swyx [00:14:06]: Was it according to the neural scaling laws, the Kaplan paper that was published?

Stan [00:14:12]: I think it was before that, because those ones came with GPT-3, basically at the time of GPT-3 being released or being ready internally. But before that, there really was a strong belief in scale. I think it was just the belief that the transformer was a generic enough architecture that you could learn anything, and that it was just a question of scaling.

Alessio [00:14:33]: Any other fun stories you want to tell? Sam Altman, Greg, you know, anything.

Stan [00:14:37]: Weirdly, I didn't work that much with Greg when I was at OpenAI. He had always been mostly focused on training the GPTs, and rightfully so. One thing about Sam Altman, he really impressed me, because when I joined, he had joined not that long ago, and it felt like he was kind of a very high-level CEO. And I was mind-blown by how deep he was able to go into the subjects within a year or something, all the way to a situation where, when I was having lunch with him by year two at OpenAI, he would just know quite deeply what I was doing. With no ML background. Yeah, with no ML background, but I didn't have any either, so I guess that explains why. But I think it's a question of: you don't necessarily need to understand the very technicalities of how things are done, but you need to understand what's the goal, what's being done, and what the recent results are. And we could have kind of a very productive discussion.
And that really impressed me, given the size of OpenAI at the time, which was not negligible.

Swyx [00:15:44]: Yeah. I mean, you were a founder before, you're a founder now, and you've seen Sam as a founder. How has he affected you as a founder?

Stan [00:15:51]: I think having that capability of changing the scale of your attention in the company, because most of the time you operate at a very high level, but being able to go deep down and be in the know of what's happening on the ground, is something that I feel is really enlightening. That's not a place in which I ever was as a founder, because in my first company, we went all the way to 10 people. In the current company, there's 25 of us. So the high level, the sky and the ground, are pretty much at the same place.

Swyx [00:16:21]: No, you're being too humble. I mean, Stripe was also like a huge rocket ship.

Stan [00:16:23]: At Stripe, I wasn't a founder. So, like at OpenAI, I was really happy being on the ground, pushing the machine, making it work. Yeah.

Swyx [00:16:31]: Last OpenAI question. The Anthropic split you mentioned, you were around for that. Very dramatic. David also left around that time, you left. This year, we've also had a similar management shakeup, let's just call it. Can you compare what it was like going through that split during that time? And then, like, are we going to see a new Anthropic emerge from these folks that just left?

Stan [00:16:54]: That I really, really don't know. At the time, the split was pretty surprising because they had been training GPT-3, it was a success. And to be completely transparent, I wasn't in the weeds of the split. What I understood of it is that there was a disagreement about the commercialization of that technology. I think the focal point of that disagreement was the fact that we started working on the API and wanted to make those models available through an API. Is that really the core disagreement?
I don't know.

Swyx [00:17:25]: Was it safety?

Stan [00:17:26]: Was it commercialization?

Swyx [00:17:27]: Or did they just want to start a company?

Stan [00:17:28]: Exactly. Exactly. That I don't know. But I think what I was surprised by is how quickly OpenAI recovered at the time. And I think it's just because we were mostly a research org and the mission was so clear that, even if some teams diverge and some people leave, the mission is still there. We have the compute. We have a site. So it just keeps going.

Swyx [00:17:50]: Very deep bench. Like, just a lot of talent. Yeah.

Alessio [00:17:53]: So that was the OpenAI part of the history. Exactly. So then you leave OpenAI in September 2022. And I would say in Silicon Valley, the two hottest companies at the time were you and LangChain. What was that start like, and why did you decide to start with a more developer-focused, kind of like AI engineer tool, rather than going back into some more research or something else?

Stan [00:18:15]: Yeah. First, I'm not a trained researcher. So going through OpenAI was really kind of the PhD I always wanted to do. But research is hard. You're digging into a field all day long for weeks and weeks and weeks, and you find something, you get super excited for 12 seconds. And at the 13th second, you're like, oh, yeah, that was obvious. And you go back to digging. I'm not a formally trained researcher, and it wasn't necessarily an ambition of mine to have a research career. And I felt the hardness of it. I enjoyed it a ton. But at the time, I decided that I wanted to go back to something more productive. And the other fun motivation was, I mean, if we believe in AGI and if we believe the timelines might not be too long, it's actually the last train leaving the station to start a company. After that, it's going to be computers all the way down. And so that was kind of the true motivation for trying to go there.
So that's kind of the core motivation at the beginning, personally. And the motivation for starting a company was pretty simple. I had seen GPT-4 internally at the time, it was September 2022. So it was pre-ChatGPT, but GPT-4 had been ready for a few months internally. I was like, okay, that's obvious, the capabilities are there to create an insane amount of value for the world. And yet the deployment is not there yet. The revenues of OpenAI at the time were ridiculously small compared to what they are today. So the thesis was, there's probably a lot to be done at the product level to unlock the usage.

Alessio [00:19:49]: Yeah. Let's talk a bit more about the form factor, maybe. I think one of the first successes you had was kind of like the WebGPT-like thing, like using the models to traverse the web and summarize things. And the browser was really the interface. Why did you start with the browser? Why was it important? And then you built XP1, which was kind of like the browser extension.

Stan [00:20:09]: So the starting point at the time was, if you wanted to talk about LLMs, it was still a rather small community, a community of mostly researchers and, to some extent, very early adopters, very early engineers. It was almost inconceivable to just build a product and go sell it to the enterprise, though at the time there were a few companies doing that. The one in marketing, I don't remember its name, Jasper. But so the natural first intention, the first, first, first intention was to go to the developers and try to create tooling for them to create products on top of those models. And so that's what Dust was originally. It was quite different than LangChain, and LangChain just beat the s**t out of us, which is great. It's a choice.

Swyx [00:20:53]: You were cloud, and closed source. They were open source.

Stan [00:20:56]: Yeah. So technically we were open source and we still are open source, but I think that doesn't really matter.
I had the strong belief from my research time that you cannot create an LLM-based workflow on just one example. Basically, if you just have one example, you overfit. So as you develop your interaction, your orchestration around the LLM, you need a dozen examples. Obviously, if you're running a dozen examples on a multi-step workflow, you start parallelizing stuff. And if you do that in the console, you just have a messy stream of tokens going out and it's very hard to observe what's going on there. And so the idea was to go with a UI so that you could easily introspect the output of each interaction with the model and dig in there through a UI, which is...

Swyx [00:21:42]: Was that open source? I actually didn't come across it.

Stan [00:21:44]: Oh yeah, it was. I mean, Dust is entirely open source even today. We're not going for an open source...

Swyx [00:21:48]: If it matters, I didn't know that.

Stan [00:21:49]: No, no, no, no, no. The reason is that we're not open source because of an open source strategy. It's not an open source go-to-market at all. We're open source because we can and it's fun.

Swyx [00:21:59]: Open source is marketing. You have all the downsides of open source, which is like people can clone you.

Stan [00:22:03]: But I think that downside is a big fallacy. Okay. Yes, anybody can clone Dust today, but the value of Dust is not the current state. The value of Dust is the number of eyeballs and hands of developers that are creating with it in the future. And so yes, anybody can clone it today, but that wouldn't change anything. There is some value in being open source. In a discussion with the security team, you can be extremely transparent and just show the code. When you have a discussion with users and there's a bug or a feature missing, you can just point to the issue, show the pull request. Exactly, oh, PR welcome.
That doesn't happen that much, but you can show the progress. If the person you're chatting with is a little bit technical, they really enjoy seeing the pull request advancing and seeing it all the way to deploy. And then the downsides are mostly around security. You never want to do security by obfuscation. But the truth is that your vector of attack is facilitated by you being open source. At the same time, it's a good thing, because if you're doing anything like bug bounties or stuff like that, you just give much more tools to the bug bounty hunters, so their output is much better. So there are many, many, many trade-offs. I don't believe in the value of the code base per se. I think it's really the people that are on the code base that have the value, and the go-to-market and the product and all of those things that are around the code base. Obviously, that's not true for every code base. If you're working on a very secret kernel to accelerate the inference of LLMs, I would buy that you don't want to be open source. But for product stuff, I really think there's very little risk. Yeah.

Alessio [00:23:39]: I signed up for XP1, I was looking, January 2023. I think at the time you were on DaVinci 003. Given that you had seen GPT-4, how did you feel having to push out a product that was using this model that was so inferior? And you're like, please, just use it today. I promise it's going to get better. Just overall, as a founder, how do you build something that maybe doesn't quite work with the model today, but you're just expecting the new model to be better?

Stan [00:24:03]: Yeah, so actually, XP1 was on an even smaller one, a small version from the post-GPT-3 releases, so it was... Ada, Babbage... No, no, no, not that far away. But it was the small version of GPT-3, basically. I don't remember its name. Yes, you do have a frustration there.
But at the same time, I think XP1 was an experiment, but it was designed as a way to be useful at the current capability of the model. If you just want to extract data from a LinkedIn page, that model was just fine. If you want to summarize an article from a newspaper, that model was just fine. And so it was really a question of trying to find a product that works with the current capability, knowing that you will always have tailwinds as models get better and faster and cheaper. So that was kind of a... There's a bit of frustration because you know what's out there and you know that you don't have access to it yet. But it's also interesting to try to find a product that works with the current capability.

Alessio [00:24:55]: And we highlighted XP1 in our Anatomy of Autonomy post in April of last year, which was, you know, where are all the agents, right? So now we've spent 30 minutes getting to what you're building now. So you basically had a developer framework, then you had a browser extension, then you had all these things, and then you kind of got to where Dust is today. So maybe just give people an overview of what Dust is today and the core thesis behind it. Yeah, of course.

Stan [00:25:20]: So Dust, we really want to build the infrastructure so that companies can deploy agents within their teams. We are horizontal by nature because we strongly believe in the emergence of use cases from the people having access to creating an agent, who don't need to be developers. They have to be thinkers. They have to be curious. But anybody can create an agent that will solve an operational thing that they're doing in their day-to-day job. And to make those agents useful, there's a dual focus, which is interesting. The first one is an infrastructure focus. You have to build the pipes so that the agent has access to the data. You have to build the pipes so that the agents can take action, can access the web, et cetera. So that's really an infrastructure play.
Maintaining connections to Notion, Slack, GitHub, all of them, is a lot of work. It is boring infrastructure work, but that's something that we know is extremely valuable, in the same way that Stripe is extremely valuable because it maintains the pipes. And we have that dual focus because we're also building the product for people to use it. And there it's fascinating, because everything started from the conversational interface, obviously, which is a great starting point. But we're only scratching the surface, right? I think we are at the Pong level of LLM productization. We haven't invented the C3. We haven't invented Counter-Strike. We haven't invented Cyberpunk 2077. So this is really our mission: to create the product that lets people equip themselves to just take away all the work that can be automated or assisted by LLMs.

Alessio [00:26:57]: And can you just comment on different takes that people had? So maybe the most open is like Auto-GPT. It's just kind of like trying to do anything. It's like it's all magic. There's no way for you to do anything. Then you had Adept, you know, we had David on the podcast. They're very, like, super hands-on with each individual customer to build super tailored solutions. How do you decide where to draw the line between what is magic and what is exposed to the user, especially in a market where most people don't know how to build with AI at all? So if you expect them to do the thing, they're probably not going to do it. Yeah, exactly.

Stan [00:27:29]: So the Auto-GPT approach obviously is extremely exciting, but we know that the agentic capabilities of models are not quite there yet. It just gets lost. So we're starting where it works. Same with XP1. And where it works is pretty simple. It's simple workflows that involve a couple of tools, where you don't even need to have the model decide which tools to use, in the sense that you just want people to put it in the instructions.
It's like: take that page, do that search, pick up that document, do the work that I want in the format I want, and give me the results. There's no smartness there, right? In terms of orchestrating the tools, it's mostly using English for people to program a workflow where you don't have the constraint of having compatible APIs between the two.

Swyx [00:28:17]: That kind of personal automation, would you say it's kind of like an LLM Zapier type of thing? Like if this, then that, and then, you know, do this, then this. You're programming with English?

Stan [00:28:28]: So you're programming with English. So you're just saying, oh, do this and then that. You can even create some form of APIs. You say, when I give you the command X, do this. When I give you the command Y, do this. And you describe the workflow. But you don't have to create boxes and create the workflow explicitly. You just need to describe what the tasks are supposed to be and make the tools available to the agent. The tool can be a semantic search. The tool can be querying into a structured database. The tool can be searching on the web. And obviously, the interesting tools that we're only starting to scratch are actually creating external actions, like reimbursing something on Stripe, sending an email, clicking on a button in the admin or something like that.

Swyx [00:29:11]: Do you maintain all these integrations?

Stan [00:29:13]: Today, we maintain most of the integrations. We do always have an escape hatch for people to kind of custom integrate. But the reality of the market today is that people just want it to work, right? And so it's mostly us maintaining the integrations. As an example, a very good source of information that is tricky to productize is Salesforce. Because Salesforce is basically a database and a UI. And they do the f**k they want with it. And so every company has different models and stuff like that.
So right now, we don't support it natively. And the type of support, or real native support, will be slightly more complex than just OAuth-ing into it, like is the case with Slack as an example. Because it's probably going to be: oh, you want to connect your Salesforce to us? Give us the SOQL, the Salesforce query language. Give us the queries you want us to run on it and inject into the context of Dust. So that's interesting: not only are integrations cool, but some of them require a bit of work from the user. And for some of them that are really valuable to our users but that we don't support yet, they can just build them internally and push the data to us.

Swyx [00:30:18]: I think I understand the Salesforce thing. But let me just clarify, are you using browser automation because there's no API for something?

Stan [00:30:24]: No, no, no, no. In that case, we do have browser automation for the use cases that apply to the public web. But for most of the integrations with the internal systems of the company, it really runs through APIs.

Swyx [00:30:35]: Haven't you felt the pull to RPA, browser automation, that kind of stuff?

Stan [00:30:39]: I mean, what I've been saying for a long time, maybe I'm wrong, is that if the future is that you're going to stand in front of a computer looking at an agent clicking on stuff, then I'll hit my computer. And my computer is a big Lenovo. It's black. Doesn't sound good at all compared to a Mac. And if the APIs are there, we should use them. There is going to be a long tail of stuff that doesn't have APIs, but as the world moves forward, that's disappearing. So the core RPA value in the past has really been, oh, this old 90s product doesn't have an API, so I need to use the UI to automate. I think for most of the companies that are ICP for us, the scale-ups that are between 500 and 5,000 people, tech companies, most of the SaaS they use have APIs.
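Stan's "programming with English" pattern above — instructions that name commands, plus a set of tools the agent may call — can be sketched as below. Everything here (the tool names, and the keyword router standing in for the model's tool-selection step) is a hypothetical illustration, not Dust's actual API:

```python
# Minimal sketch: the "workflow" lives in English instructions, and tools are
# plain named functions the agent can invoke. `pick_tool` stands in for the
# LLM deciding which tool the command refers to; a real system would ask the
# model, constrained by the instructions.

def semantic_search(query: str) -> str:
    """Tool: stand-in for semantic search over company documents."""
    return f"top passage for '{query}'"

def query_database(sql: str) -> str:
    """Tool: stand-in for a structured-database query."""
    return f"rows for '{sql}'"

TOOLS = {"semantic_search": semantic_search, "query_database": query_database}

INSTRUCTIONS = (
    "When I give you the command SEARCH, use semantic_search. "
    "When I give you the command REPORT, use query_database."
)

def pick_tool(command: str) -> str:
    # Stub for the model: map the English command to a tool name,
    # exactly as the instructions describe.
    return {"SEARCH": "semantic_search", "REPORT": "query_database"}[command]

def run_agent(command: str, argument: str) -> str:
    tool = TOOLS[pick_tool(command)]
    return tool(argument)

print(run_agent("SEARCH", "incident postmortems"))
# prints: top passage for 'incident postmortems'
```

The point of the shape is that the user never draws boxes or wires up APIs; they only describe tasks in prose and expose tools.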
Now there's an interesting question for the open web, because there is stuff that you want to do that involves websites that don't necessarily have APIs. And the current state of web integration from us, and OpenAI, and Anthropic — I don't even know if they have web navigation, but I don't think so — the current state of affairs is really, really broken. Because what do you have? You have basically search and headless browsing. But with headless browsing, I think everybody's doing basically body.innerText and feeding that into the model, right?

Swyx [00:31:56]: There are parsers into Markdown and stuff.

Stan [00:31:58]: I'm super excited by the companies that are exploring the capability of rendering a web page in a way that is compatible with a model: being able to maintain the selectors — so, basically, the place where to click in the page — through that process, expose the actions to the model, have the model select an action in a way that is compatible with the model, which is not a big page of a full DOM that is very noisy, and then being able to decompress that back to the original page and take the action. That's something that is really exciting and that will kind of change the level of things that agents can do on the web. That I find exciting, but I also feel that the bulk of the useful stuff that you can do within the company can be done through APIs. The data can be retrieved by API. The actions can be taken through API.

Swyx [00:32:44]: For listeners, I'll note that you're basically completely disagreeing with David Luan.

Stan: Exactly, exactly. I've seen it since this summer. Adept is where it is, and Dust is where it is. So Dust is still standing.

Alessio [00:32:55]: Can we just quickly comment on function calling? You mentioned you don't need the models to be that smart to actually pick the tools. Have you seen the models not be good enough? Or is it just like, you just don't want to put the complexity in there?
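The page-rendering idea Stan is excited about — compress a noisy DOM into a compact action menu the model can read, keep a map back to the selectors, then "decompress" the model's chosen action into a concrete click target — might look roughly like this. The elements and selectors are invented for illustration:

```python
# Sketch of DOM compression for a model: interactive elements become a short
# numbered menu; an id -> selector map lets us turn the model's choice back
# into something a browser can act on.

page_elements = [
    {"selector": "#search-box", "label": "Search input"},
    {"selector": "button.submit", "label": "Submit button"},
    {"selector": "a.next-page", "label": "Next page link"},
]

def compress(elements):
    """Build the model-facing menu and the id -> selector map."""
    menu = "\n".join(f"[{i}] {e['label']}" for i, e in enumerate(elements))
    selector_map = {i: e["selector"] for i, e in enumerate(elements)}
    return menu, selector_map

def decompress(action_id, selector_map):
    """Turn the model's chosen id back into a selector to act on."""
    return selector_map[action_id]

menu, selector_map = compress(page_elements)
print(menu)                          # the compact view the model sees
print(decompress(1, selector_map))   # prints: button.submit
```

The model only ever sees the three-line menu, not the full DOM, which is the "compatible with the model" part of the idea.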
Like, is there any room for improvement left in function calling? Or do you feel you usually consistently get the right response, the right parameters, and all of that?

Stan [00:33:15]: So that's a tricky product question. Because if the instructions are good and precise, then you don't have any issue, because it's scripted for you. And the model will just look at the script and follow it and say, oh, he's probably talking about that action, and I'm going to use it. And the parameters are kind of deduced from the state of the conversation: I'll just go with it. If you provide a very high-level, kind of AutoGPT-esque level of instructions and provide 16 different tools to your model, yes, we're seeing the models in that state making mistakes. And there is obviously some progress to be made on the capabilities. But the interesting part is that there is already so much work that can be assisted, augmented, accelerated by just going with pretty simply scripted-for-actions agents. What I'm excited about in pushing our users to create rather simple agents is that once you have those working really well, you can create meta agents that use the agents as actions. And all of a sudden, you can kind of have a hierarchy of responsibility that will probably get you almost to the point of the AutoGPT value. It requires the construction of intermediary artifacts, but you're probably going to be able to achieve something great. I'll give you an example. Our incidents are shared in Slack in a specific channel, and the stuff we ship is shared in Slack. We have a weekly meeting where we have a table about incidents and shipped stuff. We're not writing that weekly meeting table anymore. We have an assistant that just goes and finds the right data on Slack and creates the table for us. And that assistant works perfectly. It's trivially simple, right? Take one week of data from that channel and just create the table.
And then we have in that weekly meeting, obviously, some graphs and reporting about our financials and our progress and our ARR. And we've created assistants to generate those graphs directly. And those assistants work great. By creating those assistants that cover those small parts of that weekly meeting, slowly we're getting to a world where we'll have a weekly meeting assistant. We'll just call it. You don't need to prompt it. You don't need to say anything. It's going to run those different assistants and get that Notion page just ready. And by doing that, if you get there — and that's an objective for us, for us using Dust — you're saving an hour of company time every time you run it.

Alessio [00:35:28]: Yeah. That's my pet topic of NPM for agents. How do you build dependency graphs of agents? And how do you share them? Because why do I have to rebuild some of the smaller levels of what you built already?

Swyx [00:35:40]: I have a quick follow-up question on agents managing other agents. It's a topic of a lot of research, both from Microsoft and even in startups. What have you discovered as best practice for, let's say, a manager agent controlling a bunch of small agents? Is it two-way communication? I don't know if there should be a protocol format.

Stan [00:35:59]: To be completely honest, the state we are at right now is creating the simple agents. So we haven't even explored yet the meta agents. We know it's there. We know it's going to be valuable. We know it's going to be awesome. But we're starting there because it's the simplest place to start. And it's also what the market understands. If you go to a company, a random SaaS B2B company, not necessarily specialized in AI, and you take an operational team and you tell them, build some tooling for yourself, they'll understand the small agents.
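The meta-agent hierarchy Stan describes — simple agents composed as the actions of a higher-level agent — can be sketched like this, with the weekly-meeting example stubbed out. The sub-agents and their outputs are invented placeholders, not Dust's assistants:

```python
# Sketch: simple agents are plain callables; a "meta agent" treats them as its
# actions and just orchestrates them to assemble the weekly-meeting page.

def incidents_agent() -> str:
    # Stand-in for the assistant that pulls a week of incidents from Slack.
    return "| incident | status |\n| API outage | resolved |"

def shipped_agent() -> str:
    # Stand-in for the assistant that lists what shipped this week.
    return "| shipped | owner |\n| new connector | core team |"

def arr_graph_agent() -> str:
    # Stand-in for the assistant that renders the ARR graph.
    return "[ARR graph placeholder]"

def weekly_meeting_agent() -> str:
    """The meta agent: its 'tools' are other agents, run in order."""
    sub_agents = [incidents_agent, shipped_agent, arr_graph_agent]
    return "\n\n".join(agent() for agent in sub_agents)

print(weekly_meeting_agent())  # the assembled meeting page
```

The "intermediary artifacts" Stan mentions are exactly these small, individually reliable sub-agents; the meta agent adds no intelligence of its own here.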
If you tell them, build AutoGPT, they'll be like, Auto what?

Swyx [00:36:31]: And I noticed that in your language, you're very much focused on non-technical users. You don't really mention API here. You mention instruction instead of system prompt, right? That's very conscious.

Stan [00:36:41]: Yeah, it's very conscious. It's a mark of our designer, Ed, who kind of pushed us to create a friendly product. I was knee-deep into AI when I started, obviously. And my co-founder, Gabriel, was at Stripe as well. We started a company together that got acquired by Stripe 15 years ago. He was at Alan, a healthcare company in Paris. After that, he was a little bit less knee-deep in AI, but really focused on product. And I didn't realize how important it is to make that technology not scary to end users. It didn't feel scary to me, but it was really seen by Ed, our designer, that it was feeling scary to the users. And so we were very proactive and very deliberate about creating a brand that feels not too scary, and creating a wording and a language, as you say, that really tries to communicate the fact that it's going to be fine. It's going to be easy. You're going to make it.

Alessio [00:37:34]: And another big point that David had about Adept is we need to build an environment for the agents to act in. And then if you have the environment, you can simulate what they do. How is that different when you're interacting with APIs and you're kind of touching systems that you cannot really simulate? If you call the Salesforce API, you're just calling it.

Stan [00:37:52]: So I think that goes back to the DNA of the companies, which are very different. Adept, I think, was a product company with a very strong research DNA, and they were still doing research. One of their goals was building a model. And that's why they raised a large amount of money, et cetera. We are 100% deliberately a product company. We don't do research. We don't train models. We don't even run GPUs.
We're using the models that exist, and we try to push the product boundary as far as possible with the existing models. So that creates an issue. Indeed, to answer your question, when you're interacting in the real world, well, you cannot simulate, so you cannot improve the models. Even improving your instructions is complicated for a builder. The hope is that you can use models to evaluate the conversations, so that you can at least get feedback and some signal about the performance of the assistants. But if you take an actual trace of interactions of humans with those agents, it is, even for us humans, extremely hard to decide whether it was a productive interaction or a really bad interaction. You don't know why the person left. You don't know if they left happy or not. So being extremely, extremely, extremely pragmatic here, it becomes a product issue. We have to build a product that invites the end users to provide feedback, so that as a first step, the person that is building the agent can iterate on it. As a second step, maybe later, when we start training models and post-training, et cetera, we can optimize around that for each of those companies.

Alessio [00:39:17]: Yeah. Do you see in the future products offering kind of a simulation environment, the same way all SaaS now kind of offer APIs to build programmatically? Like in cybersecurity, there are a lot of companies working on building simulation environments so that you can then use agents to red team, but I haven't really seen that.

Stan [00:39:34]: Yeah, no, me neither. That's a super interesting question. I think it's really going to depend on how much — because you need to simulate to generate data, and you need data to train models. And the question at the end is, are we going to be training models or are we just going to be using frontier models as they are? On that question, I don't have a strong opinion.
It might be the case that we'll be training models, because in all of those AI-first products, the model is so close to the product surface that as you get big and you want to really own your product, you're going to have to own the model as well. Owning the model doesn't mean doing the pre-training — that would be crazy. But at least having an internal post-training realignment loop makes a lot of sense. And so if we see many companies going towards that over time, then there might be incentives for the SaaSes of the world to provide assistance in getting there. But at the same time, there's a tension, because those SaaSes don't want to be interacted with by agents; they want the human to click on the button. Yeah, they've got to sell seats. Exactly.

Swyx [00:40:41]: Just a quick question on models. I'm sure you've used many, probably not just OpenAI. Would you characterize some models as better than others? Do you use any open source models? What have been the trends in models over the last two years?

Stan [00:40:53]: We've seen over the past two years kind of a bit of a race between models. And at times, it's the OpenAI model that is the best. At times, it's the Anthropic model that is the best. Our take on that is that we are agnostic and we let our users pick their model. Oh, they choose? Yeah, so when you create an assistant or an agent, you can just say, oh, I'm going to run it on GPT-4, GPT-4 Turbo, or...

Swyx [00:41:16]: Don't you think for the non-technical user, that is actually an abstraction that you should take away from them?

Stan [00:41:20]: We have a sane default. So we move the default to the latest model that is cool. And the choice is actually not very visible. In our flow to create an agent, you would have to go into the advanced settings and pick your model. So this is something that the technical person will care about.
But that's something that obviously is a bit too complicated for the...

Swyx [00:41:40]: And do you care most about function calling or instruction following or something else?

Stan [00:41:44]: I think we care most about function calling, because there's nothing worse than a function call including incorrect parameters or being a bit off, because it just drives the whole interaction off.

Swyx [00:41:56]: Yeah, so you've got the Berkeley function calling leaderboard.

Stan [00:42:00]: These days, it's funny how the comparison between GPT-4o and GPT-4 Turbo is still up in the air on function calling. I personally don't have proof, but I know many people, and I'm probably part of them, who think that GPT-4 Turbo is still better than GPT-4o on function calling. Wow. We'll see what comes out of the o1 class if it ever gets function calling. And Claude 3.5 Sonnet is great as well. They kind of innovated in an interesting way, which was never quite publicized. It's that they have that kind of chain-of-thought step whenever you use a Claude model or Sonnet model with function calling. That chain-of-thought step doesn't exist when you just interact with it for answering questions. But when you use function calling, you get that step, and it really helps getting better function calling.

Swyx [00:42:43]: Yeah, we actually just recorded a podcast with the Berkeley team that runs that leaderboard this week. So they just released V3.

Stan [00:42:49]: Yeah.

Swyx [00:42:49]: It was V1 like two months ago, and then V2, V3. Turbo is on top.

Stan [00:42:53]: Turbo is on top. Turbo is above 4o.

Swyx [00:42:54]: And then in third place is xLAM from Salesforce, which is a large action model they've been trying to popularize.

Stan [00:43:01]: Yep.

Swyx [00:43:01]: o1-mini is actually on here, I think. o1-mini is number 11.

Stan [00:43:05]: But arguably, o1-mini has been in a line for that. Yeah.

Alessio [00:43:09]: Do you use leaderboards? Do you have your own evals?
I mean, this is kind of intuitive, right? Like using the newer model is better. I think most people just upgrade. Yeah. What's the eval process like?

Stan [00:43:19]: It's funny because I've been doing research for three years, and we have bigger stuff to cook. When you're deploying in a company, one thing where we really spike is that when we manage to activate the company, we have a crazy penetration. The highest penetration we have is 88% daily active users within the entire employee base of the company. The kind of average penetration and activation we have in our current enterprise customers is more like 60% to 70% weekly active. So we basically have the entire company interacting with us. And when you're there, there is so much stuff that matters more than getting evals, than getting the best model. Because there are so many places where you can create products or do stuff that will give you the 80% with the work you do, whereas deciding if it's GPT-4 or GPT-4 Turbo or et cetera will just give you the 5% improvement. But the reality is that you want to focus on the places where you can really change the direction or change the interaction more drastically. But that's something that we'll have to do eventually, because we still want to be serious people.

Swyx [00:44:24]: It's funny because in some ways, the model labs are competing for you, right? You don't have to do any effort. You just switch models and then it'll grow. What are you really limited by? Is it additional sources?

Stan [00:44:36]: It's not models, right?

Swyx [00:44:37]: You're not really limited by quality of model.

Stan [00:44:40]: Right now, we are limited by the infrastructure part, which is the ability for users to easily connect to all the data they need to do the job they want to do.

Swyx [00:44:51]: Because you maintain all your own stuff.

Swyx [00:44:54]: You know, there are companies out there that are starting to provide integrations as a service, right?
Swyx: I used to work in an integrations company.

Stan [00:44:59]: Yeah, I know. It's just that there are some intricacies about how you chunk stuff and how you process information from one platform to the other. If you look at one end of the spectrum, you could say, oh, I'm going to support Airbyte, and Airbyte has—

Swyx [00:45:12]: I used to work at Airbyte.

Stan [00:45:13]: Oh, really?

Swyx [00:45:14]: That makes sense.

Stan [00:45:15]: They're the French founders as well. I know Jean very well. I'm seeing him today. And the reality is that if you look at Notion, Airbyte does the job of taking Notion and putting it in a structured way. But that way is not really usable to actually make it available to models in a useful way. Because you get all the blocks, details, et cetera, which is useful for many use cases.

Swyx [00:45:35]: It's also for data scientists and not for AI.

Stan [00:45:38]: The reality of Notion is that sometimes — so when you have a page, there's a lot of structure in it, and you want to capture the structure and chunk the information in a way that respects that structure. In Notion, you have databases. Sometimes those databases are real tabular data. Sometimes those databases are full of text. You want to get the distinction and understand that this database should be considered like text information, whereas this other one is actually quantitative information. And to really get a very high quality interaction with that piece of information, I haven't found a solution that will work without us owning the connection end-to-end.

Swyx [00:46:15]: That's why I don't invest in... there's Composio, there's All Hands from Graham Neubig. There's all these other companies that are like, we will do the integrations for you. You just... we have the open source community, we'll do off-the-shelf.
But then you are so specific in your needs that you want to own it.

Swyx [00:46:28]: Yeah, exactly.

Stan [00:46:29]: You can talk to Michel about that.

Swyx [00:46:30]: You know, he wants to put the AI in there, but you know. Yeah, I will. I will.

Stan [00:46:35]: Cool.

Alessio [00:46:36]: What are we missing? You know, what are the things that are sneakily hard that you're tackling, that maybe people don't even realize are really hard?

Stan [00:46:43]: The hard part, as we kind of touched on throughout the conversation, is really building the infra that works for those agents, because it's a tenuous walk. It's an evergreen piece of work, because you always have an extra integration that will be useful to a non-negligible set of your users. What I'm super excited about is that there are so many interactions that shouldn't be conversational interactions and that could be very useful. Basically, we have the firehose of information of those companies, and there are not going to be that many companies that capture the firehose of information. When you have the firehose of information, you can do a ton of stuff with models that are not just accelerating people, but giving them superhuman capability, even with the current model capability, because you can just sift through much more information. An example is documentation repair. If I have the firehose of Slack messages and new Notion pages, if somebody says, I own that page, I want to be updated when there is a piece of information that should update that page, this is now possible. You get an email saying, oh, look at that Slack message. It says the opposite of what you have in that paragraph. Maybe you want to update, or just ping that person. I think there is a lot to be explored on the product layer in terms of what it means to interact productively with those models.
And that's a problem that's extremely hard and extremely exciting.

Swyx [00:48:00]: One thing you keep mentioning about infra work: obviously, Dust is building that infra and serving it in a very consumer-friendly way. You always talk about infra being additional sources, additional connectors. That is very important. But I'm also interested in the vertical infra. There is an orchestrator underlying all these things where you're doing asynchronous work. For example, the simplest one is a cron job. You just schedule things. But also, for if-this-then-that, you have to wait for something to be executed and proceed to the next task. I used to work on an orchestrator as well, Temporal.

Stan [00:48:31]: We used Temporal.

Swyx [00:48:34]: Oh, you used Temporal? Yeah. Oh, how was the experience? I need the NPS.

Stan [00:48:36]: We're doing a self-discovery call now.

Swyx [00:48:39]: But you can also complain to me because I don't work there anymore.

Stan [00:48:42]: No, we love Temporal. There are some edges that are a bit rough, surprisingly rough. And you would say, why is it so complicated?

Swyx [00:48:49]: It's always versioning.

Stan [00:48:50]: Yeah, stuff like that. But we really love it. And we use it for exactly what you said: managing the entire set of stuff that needs to happen so that in semi-real time, we get all the updates from Slack or Notion or GitHub into the system. And whenever we see a piece of information go through, we maybe trigger workflows to run agents, because they need to provide alerts to users and stuff like that. And Temporal is great. Love it.

Swyx [00:49:17]: You haven't evaluated others. You don't want to build your own. You're happy with...

Stan [00:49:21]: Oh, no, we're not in the business of replacing Temporal. And Temporal is so... I mean, it, or any other competitive product, they're very general. If it's there, there's an interesting theory about buy versus build.
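The trigger pattern Stan describes running on Temporal — connector updates flowing in semi-real time and selectively kicking off agent runs — reduces to something like the sketch below. This is a plain in-process simulation of the pattern, not Temporal code, and the connector names and agent rules are invented:

```python
# Sketch of the event -> agent-workflow dispatch pattern: each connector
# (Slack, Notion, ...) emits update events, and registered agents run when an
# event from their source arrives. A real deployment would put a durable
# orchestrator such as Temporal underneath this.
from collections import defaultdict

triggers = defaultdict(list)  # event source -> registered agent callbacks

def on(source):
    """Decorator: register an agent to run on events from `source`."""
    def register(agent):
        triggers[source].append(agent)
        return agent
    return register

@on("slack")
def alert_agent(event):
    # Stand-in for an agent that alerts users about a Slack message.
    return f"alert for {event['text']}"

@on("notion")
def doc_sync_agent(event):
    # Stand-in for an agent that re-indexes an updated Notion page.
    return f"re-index page {event['page']}"

def dispatch(source, event):
    """Run every agent registered for this connector's events."""
    return [agent(event) for agent in triggers[source]]

print(dispatch("slack", {"text": "incident in #ops"}))
# prints: ['alert for incident in #ops']
```

What Temporal adds over this toy loop is durability and retries: each dispatch becomes a workflow that survives process restarts, which is why Stan frames it as buy rather than build.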
I think in that case, when you're a high-growth company, your buy-build trade-off is very much on the side of buy. Because if you buy the capability, you're just going to be saving time and you can focus on your core competency, etc. And it's funny because we're starting to see the post-high-growth companies, the post-SKF companies, going back on that trade-off, interestingly. So that's the Klarna news about removing Zendesk and Salesforce. Do you believe that, by the way?

Alessio [00:49:56]: Yeah, I did a podcast with them.

Stan [00:49:58]: Oh, yeah?

Alessio [00:49:58]: It's true.

Swyx [00:49:59]: No, no, I know. Of course they say it's true, but also how well is it going to go?

Stan [00:50:02]: So I'm not talking about deflecting the customer traffic. I'm talking about building AI on top of Salesforce and Zendesk, basically, if I understand correctly. And all of a sudden, your product surface becomes much smaller because you're interacting with an AI system that will take some actions. And so all of a sudden, you don't need the product layer anymore. And you realize that, oh, those things are just databases that I pay a hundred times the price for, right? Because you're a post-SKF company and you have tech capabilities, you are incentivized to reduce your costs and you have the capability to do so. And then it makes sense to just scratch the SaaS away. So it's interesting that we might see kind of a bad time for SaaS in post-hyper-growth tech companies. It's still a big market, but it's not that big, because if you're not a tech company, you don't have the capabilities to reduce that cost. If you're a high-growth company, you're always going to be buying, because you go faster that way. But that's an interesting new space, a new category of companies that might remove some SaaS.

Swyx [00:51:02]: Yeah, Alessio's firm has an interesting thesis on the future of SaaS in AI.

Alessio [00:51:05]: Service as a software, we call it.
It's basically like, well, the most extreme version is, why is there any software at all? You know, ideally, it's all a labor interface where you're asking somebody to do something for you, whether that's a person, an AI agent, or whatnot.

Stan [00:51:17]: Yeah, yeah, that's interesting.

Swyx [00:51:19]: I have to ask. Are you paying for Temporal Cloud or are you self-hosting?

Stan [00:51:22]: Oh, no, no, we're paying, we're paying.

Swyx [00:51:24]: Oh, okay, interesting.

Stan [00:51:26]: We're paying way too much. It's crazy expensive, but it makes us—

Swyx [00:51:28]: That's why as a shareholder, I like to hear that.

Stan [00:51:31]: It makes us go faster, so we're happy to pay.

Swyx [00:51:33]: Other things in the infra stack — I just want a list for other founders to think about. Ops, API gateway, evals, you know, anything interesting there that you build or buy?

Stan [00:51:41]: I mean, there's always an interesting question. We've been building a lot around the interface with models, because Dust, the original version, was an orchestration platform, and we basically provide a unified interface to every model provider.

Swyx [00:51:56]: That's what I call a gateway.

Stan [00:51:57]: That we have because Dust was that, and so we continued building upon it, and we own it. But that's an interesting question: do you want to build that or buy it?

Swyx [00:52:06]: Yeah, I always say LiteLLM is the current open source consensus.

Stan [00:52:09]: Exactly, yeah. There's an interesting question there.

Swyx [00:52:12]: Ops, Datadog, just tracking.

Stan [00:52:14]: Oh yeah, so Datadog is an obvious one... What are the mistakes that I regret? I started with pure JavaScript, not TypeScript. And I think, if you're wondering, oh, I want to go fast, I'll do a little bit of JavaScript — no, don't. Just start with TypeScript.
Swyx [00:52:30]: I see, okay. So interesting — you are a research engineer that came out of OpenAI, and you bet on TypeScript.

Stan [00:52:36]: Well, the reality is that if you're building a product, you're going to be doing a lot of JavaScript, right? And Next — we're using Next as an example. It's

The CMO Podcast
Jim Lecinski (Northwestern Kellogg) | Class is in Session...Be a Student of AI

The CMO Podcast

Play Episode Listen Later Oct 9, 2024 58:38


This week's guest on The CMO Podcast is one of the foremost experts in the world of Artificial Intelligence and marketing, Professor Jim Lecinski. Jim is the Clinical Associate Professor of Marketing at Northwestern Kellogg. We use the term "Renaissance man or woman" too loosely these days, but in this case it's an appropriate moniker. Consider these highlights from Professor Lecinski's curriculum vitae:

  • Studied German and Government at Notre Dame; MBA from Illinois.
  • Teaches seminars and blogs about jazz for newcomers.
  • Has written for The Journal of the International Association of Jazz Record Collectors.
  • Twelve years at Google, left as a VP.
  • Literally wrote the book on marketing and AI, back in 2021 before it was the "in thing."
  • Awarded Professor of the Year at Northwestern Kellogg in 2022.

It's a double-Jim conversation, as the two dive into the hottest topic in marketing: AI.

---

Learn more about AI:

Marketing AI Institute: https://www.marketingaiinstitute.com/
Andrew Ng's Courses on Coursera: https://www.coursera.org/instructor/andrewng

Keynotes to Watch:

Agentforce Keynote: Build the Future with AI Agents: https://www.salesforce.com/plus/experience/dreamforce_2024/series/agentforce_&_data_cloud_at_dreamforce_2024/episode/episode-s1e27
Google Cloud CEO Thomas Kurian's Keynote: https://cloud.withgoogle.com/next

And pick up Jim and Raj Venkatesan's book, The AI Marketing Canvas: A Five-Stage Road Map to Implementing Artificial Intelligence in Marketing: https://a.co/d/9osop0B

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Washington Post Live
Top experts on the impact of AI on the workforce, education and the economy

Washington Post Live

Play Episode Listen Later Sep 24, 2024 45:34


Andrew Ng, founder of DeepLearning.AI, Raffaella Sadun, professor at Harvard Business School, and Matthew Beane, assistant professor at the University of California, Santa Barbara, join Washington Post Live to discuss how the AI revolution could transform America's workforce, economy and classrooms.

Startup Project
#81 Coursera's Engineer No 1 on Building AI agents for knowledge workers #AI #podcast #aicode #startup

Startup Project

Play Episode Listen Later Sep 22, 2024 1:01


Startup Project Podcast: Building AI Agents for Knowledge Workers with Lutra AI

Jiquan Ngiam joins Nataraj to discuss the future of AI, from the rise of deep learning to the potential of AI agents for knowledge workers. They delve into Jiquan's experiences working with Andrew Ng at Coursera and Google Brain, where he witnessed the power of scaling up compute and data in pushing the boundaries of AI.

Timestamps:

* **0:00 - Introduction:** Nataraj welcomes Jiquan to the show and introduces his impressive background.
* **2:28 - Working with Andrew Ng:** Jiquan shares his experience working with Andrew Ng, emphasizing Ng's foresight and focus on scaling up neural networks.
* **6:15 - The Importance of Data and Compute:** Jiquan highlights how data and compute became key drivers in the success of AI, using the example of AlexNet's breakthrough in 2012.
* **12:25 - Democratizing Education with Coursera:** Jiquan discusses the early days of Coursera and the team's vision for democratizing access to education, especially in fields like machine learning.
* **17:55 - Google Brain and the Rise of Transformers:** Jiquan reflects on his time at Google Brain, where he witnessed the emergence of transformers and their potential for generalizing across modalities.
* **21:24 - The Limits of Scaling:** Jiquan questions the future of AI scaling, suggesting that we may be approaching a point of diminishing returns due to data limitations and the difficulty of creating truly effective synthetic data.
* **28:13 - The Need for Data on Physical Tasks:** Jiquan proposes a bold idea: collecting real-world data on mundane tasks to train AI agents for robotics and other applications that require replicating human behavior.
* **34:23 - Lutra.ai: AI Agents for Knowledge Work:** Jiquan introduces Lutra.ai, an AI agent designed to assist knowledge workers with tasks like research, data manipulation, and automation.
* **42:49 - Different Approaches to AI Agents:** Jiquan compares Lutra's approach to building AI agents with other common methods, highlighting the importance of separating data and logic for reliable and scalable solutions.
* **45:38 - Choosing the Right Models:** Jiquan discusses the diverse landscape of AI models and how Lutra leverages different models for different tasks, from small models for summarization to larger models for reasoning and planning.
* **52:04 - AI Code Generation: Cursor vs. GitHub Copilot:** Jiquan shares his experience using Cursor, a code generation tool, and compares it to GitHub Copilot, highlighting the potential for AI to empower average developers.
* **1:00:16 - The Future of AI Code Generation:** Jiquan predicts that AI code generation capabilities will become ubiquitous, and the key innovations will be in user experience and interaction design.
* **1:05:43 - Consuming Information:** Jiquan shares his favorite sources of information, including podcasts, books, and news outlets.
* **1:08:44 - Mentorship and Learning:** Jiquan reflects on the key mentors in his career, including Andrew Ng, Daphne Koller, and John Chen.
* **1:12:34 - Advice for Early Career Professionals:** Jiquan advises young professionals to be voracious learners and prioritize gaining diverse experiences early in their careers.
* **1:16:21 - The Motivation Behind Lutra:** Jiquan explains his passion for pushing the boundaries of AI while simultaneously making it accessible and impactful for a wider audience.
* **1:18:33 - Closing Thoughts:** Nataraj thanks Jiquan for sharing his insights and expresses his excitement for the future of Lutra.ai.

**Don't miss this episode to learn more about the exciting things happening in gen AI and how it's poised to revolutionize the way we work!**

Monday Morning Data Chat
#181 - Andrew Ng - Why Data Engineering is Critical to Data-Centric AI

Monday Morning Data Chat

Play Episode Listen Later Sep 16, 2024 27:46


Andrew Ng joins us to chat about AI, data engineering, and education. Enjoy!

The Data Exchange with Ben Lorica
Advancing AI: Scaling, Data, Agents, Testing, and Ethical Considerations

The Data Exchange with Ben Lorica

Play Episode Listen Later Sep 5, 2024 24:37


Dr. Andrew Ng is a globally recognized AI leader, founder of DeepLearning.AI and Landing AI, General Partner at AI Fund, Chairman and Co-Founder of Coursera, and Adjunct Professor at Stanford University.

Subscribe to the Gradient Flow Newsletter: https://gradientflow.substack.com/

Subscribe: Apple • Spotify • Overcast • Pocket Casts • AntennaPod • Podcast Addict • Amazon • RSS.

Detailed show notes, with links to many references, can be found on The Data Exchange website.

The Pulse of AI
Groundbreaking AI with Andrew Maas: Inside Pointable and the Future of Retrieval Systems

The Pulse of AI

Play Episode Listen Later Aug 12, 2024 64:10


New Pulse of AI podcast is live! Season 6, episode 145. To be notified about future conversations with the leaders of the AI revolution, sign up for our newsletter at www.thepulseofai.com

AI Pioneers: Andrew Maas on Pioneering Retrieval Systems and Deep Learning

Join host Jason Stoughton in this exciting episode as he welcomes Andrew Maas, the visionary co-founder and CEO of Pointable. Andrew shares his journey through the world of artificial intelligence, from his groundbreaking work on data-centric deep learning at Apple to his pivotal role in founding Roam Analytics, a natural language extraction platform acquired by Parexel.

In this episode, Andrew delves into the innovative technologies behind Pointable, a startup revolutionizing retrieval systems for RAG-LLM workflows. He offers valuable insights for AI practitioners and founders, drawing from his extensive experience and academic background, including his PhD from Stanford University under the mentorship of Andrew Ng and Dan Jurafsky.

Tune in to hear about Andrew's transformative work in AI, the future of retrieval systems, and what's next for Pointable. Whether you're an AI enthusiast or an aspiring entrepreneur, this conversation is packed with knowledge and inspiration!

FYI - For Your Innovation
An Artificial Intelligence Conversation with Andrew Ng

FYI - For Your Innovation

Play Episode Listen Later Aug 8, 2024 59:13


On this episode of FYI, ARK's Chief Futurist Brett Winton and Chief Investment Strategist Charlie Roberts sit down with artificial intelligence (AI) luminary Andrew Ng to explore the deployment of artificial intelligence and the evolution of AI education. Andrew shares insights from his extensive career, including his work with Google Brain, Baidu, Coursera, and his current AI Fund. We analyze the transformative potential of AI, especially how large corporations can harness it, the progression toward agentic systems, and the contentious topic of open-source AI. This episode provides a comprehensive overview of AI's current status and future trajectory, offering invaluable insights for technology enthusiasts.

"For the last 10-15 years, there have constantly been a small number of voices saying AI is hitting a wall. I think that a lot of statements to that effect were all over and over proven to be wrong. I think we're so far from hitting a wall." -Andrew Ng

Key Points From This Episode:
- Andrew Ng's significant contributions to AI and education through various platforms
- Insights into the deployment challenges and future potentials of AI in business
- The role of agentic systems in advancing AI applications
- The impact of open source on innovation and the AI industry
- Distribution and data generation in AI's effectiveness

The Secret Sauce
TSS760: A Focused One-Hour Talk with Andrew Ng, One of the World's Most Influential Figures in AI

The Secret Sauce

Play Episode Listen Later Aug 4, 2024 60:54


Watch this episode on YouTube for the best viewing experience: https://youtu.be/x7zU9yI9KoA

Ken Nakarin sits down for a special interview with Andrew Ng, one of the people who knows the most about AI in the world. He is the founder of AI Fund, DeepLearning.AI, and Coursera, and has recently taken on a new role on Amazon's board of directors. Given his role as a pioneer of deep learning, The Secret Sauce invited him to discuss intriguing technology questions such as: Can AI love humans? If an AI and a human competed to win a woman's heart, who would win? What do you think of Elon Musk's view that AI is a threat to humanity? What do people overestimate and underestimate about this technology? Will AI reach human-level intelligence in the near future? How do you use AI wisely? And how should Thailand prepare for this new wave?

THE STANDARD Podcast
The Secret Sauce EP.760: A Focused One-Hour Talk with Andrew Ng, One of the World's Most Influential Figures in AI

THE STANDARD Podcast

Play Episode Listen Later Aug 4, 2024 60:54


Watch this episode on YouTube for the best viewing experience: https://youtu.be/x7zU9yI9KoA

Ken Nakarin sits down for a special interview with Andrew Ng, one of the people who knows the most about AI in the world. He is the founder of AI Fund, DeepLearning.AI, and Coursera, and has recently taken on a new role on Amazon's board of directors. Given his role as a pioneer of deep learning, The Secret Sauce invited him to discuss intriguing technology questions such as: Can AI love humans? If an AI and a human competed to win a woman's heart, who would win? What do you think of Elon Musk's view that AI is a threat to humanity? What do people overestimate and underestimate about this technology? Will AI reach human-level intelligence in the near future? How do you use AI wisely? And how should Thailand prepare for this new wave?

5 Minutes Podcast with Ricardo Vargas
Why I Should Care About The Virtuous Cycle When Developing an AI Project?

5 Minutes Podcast with Ricardo Vargas

Play Episode Listen Later Jul 28, 2024 7:38


In this episode, Ricardo discusses the virtuous cycle in AI development, based on Andrew Ng's Coursera course, "AI for Everyone," and highlights the importance of creating AI projects that intersect with business value. Ricardo explains that more data improves algorithms and services, attracting more users, which generates more data and creates a virtuous cycle, leading to some companies dominating AI due to their vast data resources. Ricardo also suggests using existing large language models with fine-tuning for specific applications and recommends Andrew Ng's course for a better understanding of the fundamentals of AI. Listen to the podcast to learn more.

5 Minutes Podcast com Ricardo Vargas
Why Should I Care About the Virtuous Cycle When Developing an AI Project?

5 Minutes Podcast com Ricardo Vargas

Play Episode Listen Later Jul 28, 2024 7:09


In this episode, Ricardo discusses the virtuous cycle in AI development, based on Andrew Ng's Coursera course "AI for Everyone," and highlights the importance of creating AI projects that intersect with business value. Ricardo explains that more data improves algorithms and services, attracting more users, which generates more data, creating a virtuous cycle that leads some companies to dominate AI thanks to their vast data resources. Ricardo also suggests using existing large language models with fine-tuning for specific applications and recommends Andrew Ng's course for a better understanding of the fundamentals of AI. Listen to the podcast to learn more.

This Is Robotics: Radio News
SPECIAL FOR KEYNOTE: This Is Robotics: Radio News #31

This Is Robotics: Radio News

Play Episode Listen Later Jun 30, 2024 67:55


2024: The Most Important Year in the History of Robotics! Companion podcast #31 to the keynote address at SuperTechFT, 3 July 2024.

Happy to be with you one and all. I'm Tom Green, your host and companion on this very special journey for 2024. We are only halfway through the year, and already 2024 has shown us that it is the most important year in the history of robotics. This podcast will show you why.

This podcast is a companion to the live keynote address I will give at SuperTechFT in San Francisco on July 3rd, 2024. I want to first thank Dr. Albert Hu, president and director of education at SuperTechFT, and the staff and patrons of SuperTechFT for inviting me. The title of my keynote: 2024: The Most Important Year in the History of Robotics! What other year can possibly compete for top honors?

2024 eliminated the barrier to entry for digital programming by eliminating the need to code. As Tesla's former chief of AI, Andrej Karpathy, put it: "Welcome to the hottest new programming language... English."

2024 opened the door of AI prompt engineering to millions of new jobs and careers in millions of SME industries worldwide. So explains Andrew Ng, investor and former head of Google Brain and Baidu.

2024 converged GenAI with robotics, broadened robot/cobot applications, and freed robots from complexity of operation. So announced NVIDIA's CEO and founder Jensen Huang at the company's March meeting.

2024 reinvigorated the liberal arts, creative thinking, expository writing, and language as vital new components in developing robotics applications. So reflects Stephen Wolfram, physicist and creator of Mathematica.

2024 defined the need for the GenAI and "New Collar" worker connection: vitally needed workers for AI/robot-driven industry worldwide, and just maybe, the revitalization of America's middle class, or the middle class of any nation. So says Sarah Boisvert, technologist, factory owner, and author of the book on the New Collar Workforce.

Suddenly, in mid-2024, technology has thrown us into a brand-new world. And it's only early July of 2024... can you believe it?

"Artificial intelligence and robotics could catapult both fields to new heights."
The 4-Year Plight: SMEs in Search of Robots!
Tech News May Fade, but Its Stories Are Forever!
GenAI & "New Collar" Connection
Did AI Just Free Humanity from Code?

That Was The Week
Is There an AI Bubble?

That Was The Week

Play Episode Listen Later Jun 30, 2024 33:35


Hat Tip to this week's creators: @PeterJ_Walker, @mgsiegler, @jglasner, @lennysan, @AndreRetterath, @alex, @pmarca, @nklsrh, @dmehro, @timmarchman, @adamclarkestes, @Kyle_L_Wiggers, @MTemki

Contents
* Editorial
* Essays of the Week
  * Is there an AI Bubble?
  * Robotics Startups On The Rise In 2024
  * Behold: the Hackquisition
  * The Entrapment of Apple
  * The social radar: Y Combinator's secret weapon | Jessica Livingston
  * Can We Fully Automate Startup Investing?
  * The 2024 IPO I'm Most Excited About
* Video of the Week
  * The true story -- as best I can remember -- of the origin of Mosaic and Netscape
* AI of the Week
  * I Will F*****g Piledrive You If You Mention AI Again
  * Perplexity Is a B******t Machine
  * What, if anything, is AI search good for?
  * Andrew Ng plans to raise $120M for next AI Fund
  * No MacBook Air Killer, All MacBook Air Filler
  * Hebbia raises nearly $100M Series B for AI-powered document search led by Andreessen Horowitz
* News Of the Week
  * No, a $100m+ Series A Round isn't Normal
  * It takes ten years to succeed as a Startup
  * Elon Musk has won $56bn pay package despite judge ruling it void, Tesla argues
  * Kleiner Perkins announces $2 billion in fresh capital, showing that established firms can still raise large sums
* Startup of the Week
  * Webtoon Rises Modestly In IPO Debut
* X of the Week
  * AI Poetry Camera? Seriously?

Editorial

It's Sunday, two days later than I usually send this out. Two excuses: I was in recovery from PTSD after the "debate," and then I almost had a relapse watching England in the Euro last-16 game against Slovakia. I'm unsure of my mental state now (we won 2-1 in extra time). But the other, more important "game" is still undecided.

But in AI, it seems everybody is getting PTSD, from wild allegations that AI might kill the human race to new suggestions that there may be a bubble in valuations for early-stage companies.

The items in this week's newsletter are really good. MG Siegler, Alex Wilhelm, and Peter Walker dominate. The first two are former TechCrunch writers (hats off to Mike Arrington for his talent-spotting). Peter is the leading contributor to VC data; he has access to Carta data and uses it super effectively. MG and Alex have relatively new newsletter sites (SpyGlass and Cautious Optimism, respectively). They are great observers and even better writers: subscribe. Links to their articles are below.

Big tech seems to be running scared of AI regulation. This from MG Siegler's No MacBook Air Killer, All MacBook Air Filler: Microsoft really s**t the bed here, both from a security and a PR perspective. And what's left sounds very "meh." It's almost like Microsoft forgot the "Copilot" part of "Copilot+ PCs." And they certainly forgot the "+" part.

MG also wrote about the EU and Apple, claiming that the EU is seeking to entrap Apple by refusing to state what Apple can and cannot do with its AI intentions. Apple, in response, is saying it will not launch AI in Europe until the EU says what product flexibility it has. You have to smile; Apple plays this game super well. Finally, he has "Behold the Hackquisition," which shows how big tech avoids M&A blocks by buying teams instead of companies.

Alex Wilhelm's anticipation of the Circle (USDC) IPO is a great example of his regular style and substance.

Peter Walker heads up data storytelling at Carta (great title). This week, he has three pieces, all originally posted on his LinkedIn profile. "Is there an AI bubble" (my title) examines the spread of Series A venture funding valuations. He separates the percentiles and measures the spread between them, noting that the gap between the 50th percentile and the 95th is the widest ever, even wider than 2021 and 2022. This is for SaaS rounds that include much AI.

In H1 2019, the 50th percentile for pre-money valuations was $26M (Series A SaaS companies only, primary rounds). The 95th percentile at that time was $96M. Now that's a pretty large gap: we're talking a 3.7x jump from the middle to the top end. But today things are even more skewed.

That Was The Week
Accelerating to 2027?

That Was The Week

Play Episode Listen Later Jun 22, 2024 33:47


Hat Tip to this week's creators: @leopoldasch, @JoeSlater87, @GaryMarcus, @ulonnaya, @alex, @ttunguz, @mmasnick, @dannyrimer, @imdavidpierce, @asafitch, @ylecun, @nxthompson, @kaifulee, @DaphneKoller, @AndrewYNg, @aidangomez, @Kyle_L_Wiggers, @waynema, @QianerLiu, @nicnewman, @nmasc_, @steph_palazzolo, @nofilmschool

Contents
* Editorial
* Essays of the Week
  * Situational Awareness: The Decade Ahead
  * ChatGPT is b******t
  * AGI by 2027?
  * Ilya Sutskever, OpenAI's former chief scientist, launches new AI company
  * The Series A Crunch Is No Joke
  * The Series A Crunch or the Seedpocalypse of 2024
  * The Surgeon General Is Wrong. Social Media Doesn't Need Warning Labels
* Video of the Week
  * Danny Rimer on 20VC (Must See)
* AI of the Week
  * Anthropic has a fast new AI model — and a clever new way to interact with chatbots
  * Nvidia's Ascent to Most Valuable Company Has Echoes of Dot-Com Boom
  * The Expanding Universe of Generative Models
  * DeepMind's new AI generates soundtracks and dialogue for videos
* News Of the Week
  * Apple Suspends Work on Next Vision Pro, Focused on Releasing Cheaper Model in Late 2025
  * Is the news industry ready for another pivot to video?
  * Cerebras, an Nvidia Challenger, Files for IPO Confidentially
* Startup of the Week
  * Final Cut Camera and iPad Multicam are Truly Revolutionary
* X of the Week
  * Leopold Aschenbrenner

Editorial

I had not heard of Leopold Aschenbrenner until yesterday. I was meeting with Faraj Aalaei (a SignalRank board member) and my colleague Rob Hodgkinson when they began to talk about "Situational Awareness," his essay on the future of AGI and its likely speed of emergence. So I had to read it, and it is this week's essay of the week.

He starts his 165-page epic with: "Before long, the world will wake up. But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them."

So, Leopold is not humble. He finds himself "among" the few people with situational awareness. As a person prone to bigging up myself, I am not one to prematurely judge somebody's view of self. So, I read all 165 pages.

He makes one point: the growth of AI capability is accelerating. More is being done at a lower cost, and the trend will continue to superintelligence by 2027. At that point, billions of skilled bots will solve problems at a rate we cannot imagine. And they will work together, with little human input, to do so. His case is developed using linear progression from current developments. According to Leopold, all you have to believe in is straight lines.

He also has a secondary narrative related to safety, particularly the safety of models and their weightings (how they achieve their results). By safety, he does not mean the models will do bad things. He means that third parties, namely China, can steal the weightings and reproduce the results. He focuses on the poor security surrounding models as the problem, and he deems governments unaware of the dangers.

Although German-born, he argues in favor of the US-led effort to see AGI as a weapon to defeat China and threatens dire consequences if it does not. He sees the "free world" as in danger unless it stops others from gaining the sophistication he predicts in the time he predicts. At that point, I felt I was reading a manifesto for World War Three. In his words:

"But as I see it, the smartest people in the space have converged on a different perspective, a third way, one I will dub AGI Realism. The core tenets are simple:

* Superintelligence is a matter of national security. We are rapidly building machines smarter than the smartest humans. This is not another cool Silicon Valley boom; this isn't some random community of coders writing an innocent open source software package; this isn't fun and games. Superintelligence is going to be wild; it will be the most powerful weapon mankind has ever built. And for any of us involved, it'll be the most important thing we ever do.
* America must lead. The torch of liberty will not survive Xi getting AGI first. (And, realistically, American leadership is the only path to safe AGI, too.) That means we can't simply "pause"; it means we need to rapidly scale up US power production to build the AGI clusters in the US. But it also means amateur startup security delivering the nuclear secrets to the CCP won't cut it anymore, and it means the core AGI infrastructure must be controlled by America, not some dictator in the Middle East. American AI labs must put the national interest first.
* We need to not screw it up. Recognizing the power of superintelligence also means recognizing its peril. There are very real safety risks; very real risks this all goes awry—whether it be because mankind uses the destructive power brought forth for our mutual annihilation, or because, yes, the alien species we're summoning is one we cannot yet fully control. These are manageable—but improvising won't cut it. Navigating these perils will require good people bringing a level of seriousness to the table that has not yet been offered.

As the acceleration intensifies, I only expect the discourse to get more shrill. But my greatest hope is that there will be those who feel the weight of what is coming, and take it as a solemn call to duty."

I persisted in reading it, and I think you should, too, not for the war-mongering element but for the core acceleration thesis. My two cents: Leopold underestimates AI's impact in the long run and overestimates it in the short term, but he is directionally correct.

Anthropic released v3.5 of Claude.ai today. It is far faster than the impressive 3.0 version (released a few months ago) and costs a fraction to train and run. It is also more capable. It accepts text and images and has a new feature called "Artifacts" that allows it to run code, edit documents, and preview designs. Claude 3.5 Opus is probably not far away. Situational Awareness projects trends like this into the near future, and its views are extrapolated from that perspective.

Contrast that paper with "ChatGPT is B******t," a paper coming out of Glasgow University in the UK. The three authors contest the accusation that ChatGPT hallucinates or lies. They claim that because it is a probabilistic word finder, it spouts b******t: it can be right, and it can be wrong, but it does not know the difference. It's a bullshitter. Hilariously, they define three types of BS:

* B******t (general): Any utterance produced where a speaker has indifference towards the truth of the utterance.
* Hard b******t: B******t produced with the intention to mislead the audience about the utterer's agenda.
* Soft b******t: B******t produced without the intention to mislead the hearer regarding the utterer's agenda.

They then conclude: "With this distinction in hand, we're now in a position to consider a worry of the following sort: Is ChatGPT hard b**********g, soft b**********g, or neither? We will argue, first, that ChatGPT, and other LLMs, are clearly soft b**********g. However, the question of whether these chatbots are hard b**********g is a trickier one, and depends on a number of complex questions concerning whether ChatGPT can be ascribed intentions."

This is closer to Gary Marcus's point of view in his "AGI by 2027?" response to Leopold, also below. I think the reality is somewhere between Leopold and Marcus. AI is capable of surprising things, given that it is only a probabilistic word-finder, and its ability to do so is becoming cheaper and faster. The number of times it is useful easily outweighs, for me, the times it is not. Most importantly, AI agents will work together to improve each other and learn faster.

However, Gary Marcus is right that reasoning and other essential decision-making characteristics are not logically derived from an LLM approach to knowledge. So, without additional or perhaps different elements, there will be limits to where it can go. Gary probably underestimates what CAN be achieved with LLMs (indeed, who would have thought they could do what they already do), and Leopold probably overestimates the lack of a ceiling in what they will do and how fast that will happen. It will be fascinating to watch. I, for one, have no idea what to expect except the unexpected.

OpenAI founder Ilya Sutskever weighed in, too, with a new AI startup called Safe Superintelligence Inc. (SSI). The most important word here is superintelligence, the same word Leopold used. The next phase is focused on higher-than-human intelligence, which can be reproduced billions of times to create scaled superintelligence.

The Expanding Universe of Generative Models piece below places smart people in the room to discuss these developments: Yann LeCun, Nicholas Thompson, Kai-Fu Lee, Daphne Koller, Andrew Ng, and Aidan Gomez are participants.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.thatwastheweek.com/subscribe

SlatorPod
#215 Humanless LSP as a Fun Weekend Project

SlatorPod

Play Episode Listen Later Jun 13, 2024 34:23


Florian and Esther discuss the language industry news of the week, giving a recap of SlatorCon London and exploring some use cases from the Slator Pro Guide: Language AI for Consumers.

Florian talks about Andrew Ng's recent project on agentic machine translation, which involves using large language models (LLMs) to create a virtual language service provider (LSP).

The duo touch on Apple's recent Worldwide Developer Conference, where Apple Watch is set to get a translation widget; Apple also recently announced a new translation API.

Florian shares RWS's half-year financial results, where despite declines in revenue, the company's stock rose by 20%, likely due to investor perception of AI-enabled services and new product offerings like Evolve and HAI gaining traction.

Esther talks about DeepL's USD 300m funding round, which valued the company at USD 2bn, a testament to the growing interest in AI models. She also covers Unbabel's launch of TowerLLM, which claims to outperform competitors like Google Translate and DeepL.

In Esther's M&A corner, Keywords Studios eyes a GBP 2.2bn deal from Swedish private equity firm EQT, Melbourne LSP Ethnolink buys Sydney-based competitor Language Professionals, and ZOO Digital acquires Italian dubbing partner LogoSound.

Esther gives a nod to the positive financial performances of companies like ZOO Digital and AMN's language services division, with more mixed results for Straker.
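One common shape for an agentic machine translation workflow like the one discussed is a translate-reflect-improve loop: the model drafts a translation, critiques its own draft, then revises. The sketch below is a minimal illustration under that assumption; the `llm` function is a stand-in stub, not Andrew Ng's actual implementation or any real API.

```python
def llm(prompt: str) -> str:
    """Hypothetical LLM call, stubbed so the sketch runs offline.
    A real system would call an actual model API here."""
    return f"[llm response to: {prompt[:40]}...]"


def agentic_translate(text: str, source: str, target: str) -> str:
    # Step 1: produce a first-pass translation.
    draft = llm(f"Translate this {source} text to {target}:\n{text}")
    # Step 2: ask the model to critique its own draft (reflection).
    critique = llm(
        f"List concrete ways to improve this {target} translation "
        f"of the {source} source.\nSource: {text}\nDraft: {draft}"
    )
    # Step 3: produce an improved translation using the critique.
    improved = llm(
        f"Rewrite the draft translation, applying the critique.\n"
        f"Source: {text}\nDraft: {draft}\nCritique: {critique}"
    )
    return improved
```

In a real pipeline, `llm` would wrap an actual model call, and the reflect-improve steps could be repeated until the critique yields no further suggestions.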

Learn With Thai Van Linh
E7: Seeing AI from the Shoulders of Giants: Andrew Ng and Elon Musk | Making Friends with AI

Learn With Thai Van Linh

Play Episode Listen Later Jun 6, 2024 13:38


In this episode, Linh explores two opposing perspectives on AI through the lenses of Andrew Ng and Elon Musk. We'll look at how AI is positively changing the business world from Andrew Ng's point of view, as well as the concerns about an uncontrolled future full of hidden risks from Elon Musk's point of view. This video will BROADEN YOUR HORIZONS about what you don't know you don't know, helping you sharpen your strategic thinking and increase your competitive advantage in today's job market. You can follow this content as a blog post at: https://thaivanlinh.com/blogs/lam-ban-voi-ai/nhin-ai-tren-vai-nguoi-khong-lo-andrew-ng-va-elon-musk Finally, don't forget to hit subscribe and turn on notifications to get more new videos about artificial intelligence from Linh!

The Implications of AI on the Global Balance of Power with Alex Wang, Andrew Ng, Jack Clark and Cory Booker

Play Episode Listen Later Jun 4, 2024 33:03


Tune in to today's special episode airing a recent panel with the founders of Scale AI, Anthropic, and AI Fund, who gathered in Washington DC to discuss China as an adversary. They argue that the papers out of Tsinghua University are just as impressive as those coming out of American universities. China is just as creative, but maybe even more motivated. While discussions of regulations have encompassed certain restraints, Alex Wang, Andrew Ng, and Jack Clark argue that we're not moving fast enough (moderated by US senator Cory Booker). This session was recorded live at The Hill & Valley Forum in 2024, a private bipartisan community of lawmakers and innovators committed to harnessing the power of technology to address America's most pressing national security challenges. The Hill & Valley podcast is part of the Turpentine podcast network. Learn more: www.turpentine.co

RECOMMENDED PODCAST: The Riff with Byrne Hobart. Byrne Hobart, the writer of The Diff, is revered in Silicon Valley. You can get an hour with him each week. See for yourself how his thinking can upgrade yours.
Spotify: https://open.spotify.com/show/6rANlV54GCARLgMOtpkzKt
Apple: https://podcasts.apple.com/us/podcast/the-riff-with-byrne-hobart-and-erik-torenberg/id1716646486

SPONSORS:
Oracle Cloud Infrastructure (OCI) is a single platform for your infrastructure, database, application development, and AI needs. OCI has four to eight times the bandwidth of other clouds, offers one consistent price, and nobody does data better than Oracle. If you want to do more and spend less, take a free test drive of OCI at https://oracle.com/cognitive
The Brave search API can be used to assemble a data set to train your AI models and help with retrieval augmentation at the time of inference, all while remaining affordable with developer-first pricing. Integrating the Brave search API into your workflow translates to more ethical data sourcing and more human-representative data sets. Try the Brave search API for free for up to 2000 queries per month at https://bit.ly/BraveTCR
Head to Squad to access global engineering without the headache and at a fraction of the cost: go to https://choosesquad.com/ and mention "Turpentine" to skip the waitlist.
Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off: https://www.omneky.com/

CHAPTERS:
(00:00) Intro
(03:29) Assembling the founders of Anthropic, Scale AI, and AI Fund
(04:45) Predictions for AGI
(08:21) Navigating AI innovation amidst regulation
(16:15) Global AI competition and the urgency of innovation
(24:34) Empowering future generations

Future-Proof Podcast by CO/AI
The Future of AI Agents and Agentic Workflows - May 31, 2024 - CO/AI

Future-Proof Podcast by CO/AI

Play Episode Listen Later May 31, 2024 71:59


Hosts: Anthony Batt, Shane Robinson, and Francesca Vera from the Future-Proof Podcast by CO/AI.

The conversation explores the concept of AI agents, also known as agentic AI, and their potential impact on various aspects of work and life. It delves into the definition of AI agents, their role in reducing cognitive load, and the user experience. The discussion also touches on the potential use cases of AI agents and the challenges associated with their development and implementation. The conversation then turns to the future of AI agents and agentic workflows, exploring their impact on various industries and job roles. It emphasizes the importance of human connection and the need for AI literacy to adapt to the evolving landscape of work. The discussion also touches on the potential for new entrepreneurial opportunities and the value of interpersonal skills in the age of technological advancement.

AI agent patterns discussed:
* Reflection: The LLM examines its own work to come up with ways to improve it.
* Tool use: The LLM is given tools such as web search, code execution, or any other function to help it gather information, take action, or process data.
* Planning: The LLM comes up with, and executes, a multistep plan to achieve a goal (for example, writing an outline for an essay, then doing online research, then writing a draft, and so on).
* Multi-agent collaboration: More than one AI agent work together, splitting up tasks and discussing and debating ideas, to come up with better solutions than a single agent would.

Andrew Ng links: "What's next for AI agentic workflows ..." / andrew-ng-on-agentic-workflows

AgentBench: https://arxiv.org/

Join our community: getcoai.com
Follow us on Twitter or watch us on Youtube
Get our newsletter!
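As one illustration, the "tool use" pattern described in the episode can be sketched in a few lines: the model's output names a tool and its arguments, and a dispatcher executes the call. The tool names, the JSON message shape, and the `fake_llm` stub below are assumptions for illustration only, not any specific framework's API.

```python
import json


def web_search(query: str) -> str:
    return f"results for '{query}'"  # stand-in for a real search call


def run_code(source: str) -> str:
    # Toy executor for the sketch; never eval untrusted input in practice.
    return str(eval(source))


TOOLS = {"web_search": web_search, "run_code": run_code}


def fake_llm(task: str) -> str:
    """Stub model that always 'decides' to use the code tool."""
    return json.dumps({"tool": "run_code", "args": {"source": task}})


def agent_step(task: str) -> str:
    decision = json.loads(fake_llm(task))  # model chooses a tool + arguments
    tool = TOOLS[decision["tool"]]
    return tool(**decision["args"])


print(agent_step("2 + 3"))  # prints 5
```

A real agent would replace `fake_llm` with an actual model call and loop: feed each tool result back to the model until it decides no further tool is needed.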

The Jason & Scot Show - E-Commerce And Retail News

EP319 - Amazon Q1 2024 Recap http://jasonandscot.com Join your hosts Jason "Retailgeek" Goldberg, Chief Commerce Strategy Officer at Publicis, and Scot Wingo, CEO of GetSpiffy and Co-Founder of ChannelAdvisor as they discuss the latest news and trends in the world of e-commerce and digital shopper marketing. Episode Summary: In this episode, Jason "Retailgeek" Goldberg and Scot Wingo dive deep into Amazon's first quarter results for 2024, analyzing the company's performance in various segments such as retail, offline and online sales, marketplace, AWS, and advertising. They also explore the impact of AI on Amazon's business and provide insights into the company's future guidance for Q2 2024. Amazon Q1 2024 Earnings Release Amazon Q1 2024 Earnings Call Transcript In our latest episode, Jason and Scott cover a range of topics, starting with their reflections on recent events such as May the 4th and Cinco de Mayo. Jason shares intriguing stories from his extensive travels and interactions with listeners worldwide. Scott delves into the intersection of e-commerce and the auto industry, honing in on Carvana. The duo also delves into the U.S. Department of Commerce retail indicators data, shedding light on trends in retail sales and e-commerce growth. The conversation pivots towards Amazon's recent earnings report, contextualizing it within the realm of AI investments by tech giants like Meta and Alphabet, offering valuable industry insights and analysis. The discussion continues with a focus on Amazon's earnings report, zooming in on concerns around AWS amid heightened competition from Alphabet and Azure. The rising trend of AI investments, particularly in data training applications, is explored, alongside the growing popularity of open source AI models due to cost and privacy considerations. Despite a conservative Q2 guidance, Amazon impresses with robust revenue that surpasses Wall Street expectations, particularly in operating income. 
The retail segment shows exceptional growth, exceeding operating income estimates for both domestic and international divisions. Notably, Amazon's performance in brick-and-mortar stores, spearheaded by Whole Foods, demonstrates resilience with a 6.3% growth rate. AWS stands out with a 17% growth, dispelling market share concerns and showcasing accelerated revenue growth, illustrating Amazon's continuous growth potential and innovation prowess. Scott delves deeper into Amazon's positive quarterly earnings report, emphasizing the remarkable revenue performance, especially in operating income. Insights are shared on Amazon's successful agnostic approach to LLM models and the potential advancements in generative AI. The conversation shifts towards the burgeoning ads business at Amazon, underlining its profitability and future growth prospects. Scot also outlines Amazon's Q2 guidance and the potential impacts of consumer spending patterns on the retail sector, including concerns about changing consumer behaviors and economic pressures shaping market dynamics. Jason complements the discussion with additional perspectives on consumer behavior and economic influences reshaping the market landscape. Furthermore, we embark on a detailed exploration of supply chain logistics, with a spotlight on Amazon's expansion into third-party logistics services, revolutionizing traditional retail strategies by sharing proprietary capabilities for wider adoption. Insights from Andy Jassy shed light on Amazon's logistics business approach. The conversation expands to include how companies like Spiffy are embracing a similar model of sharing proprietary products to drive innovation and revenue growth, showcasing an evolving landscape of retail innovation. The podcast unpacks the complex world of grocery retail, highlighting Amazon's experimental forays like Just Walk Out technology and the Amazon Dash cart, while examining the challenges in delineating Amazon's grocery sector strategy. 
A comparison is drawn between Amazon's strategies and those of rivals like Walmart and Target, who are adapting their product offerings to match evolving consumer preferences, offering a comprehensive view of the dynamic retail and supply chain management sphere. Dive into our engaging discussion, explore retail dynamics, and keep a lookout for more insightful content. Don't forget to like our facebook page, and if you enjoyed this episode please write us a review on itunes. Episode 319 of the Jason & Scot show was recorded on Sunday, May 5th, 2024. Chapters 0:23 The Jason and Scott Show Begins 2:56 World Travel Adventures 5:53 Commerce Tools Elevate Show 6:53 Jason's World Tour Plans 7:22 Where in the World is Retail Geek? 20:43 Amazon's First Quarter Earnings 23:23 Sandbagging Strategy 26:45 Amazon's Dominance in E-commerce 27:44 Online Segment Growth Analysis 28:53 Offline Store Segment Analysis 31:35 Spotlight on AWS Performance 34:32 Data at AWS 42:02 Gen AI Revenue Growth 46:24 Consumer Pressure 49:56 Supply Chain Evolution 53:46 Leveraging Technology 58:08 Disruption in E-commerce 1:01:54 Amazon's Grocery Strategy 1:05:01 Retail Industry News Transcript Jason: [0:23] Welcome to the Jason and Scott Show. This is episode 319 being recorded on Sunday, May 5th, 2024. I'm your host, Jason Retail Guy Goldberg, and as usual, I'm here with your co-host, Scott Wingo. Scot: [0:37] Hey, Jason, and welcome back, Jason and Scott Show listeners. It's been a while, but first, happy Cinco de Mayo, and also a belated May the 4th, Jason. Did you have a good Star Wars day? Jason: [0:49] I did. I did. I feel like Star Wars Day always makes me think of the podcast because I feel like we have spent many of them in my latter life together. Scot: [1:01] Yeah, absolutely. Any exciting new Star Wars experiences or merch? Jason: [1:08] No, I understand you got some vintage merch. merch. 
Scot: [1:13] It's not, but back when I was a kid, if you went every week to, I think it was Burger King, you would get, for, I think it was Empire (I have the Empire right here, so definitely Empire), a glass. Now it turns out these were full of lead paint, which would kill you, but that was the downside. Jason: [1:32] Not recommended for drinking. Scot: [1:33] Yes, and being a collector, I never drank out of them. So that's good. Jason: [1:37] Saved your life right there. Scot: [1:38] Yes, but I did drink out of the Tweety Bird. So, me, I'm sure I got some yellow lead paint from a Tweety Bird glass. Anyway, so they came out with a Mandalorian kind of homage to those glasses, and they were at the Hallmark store of all places, not where I usually hang out. But I got to go to a Hallmark store, and the little ladies that worked there, I wished them all an awesome May the 4th. And they looked at me like I was from another planet, and it was hilarious. My wife's like, stop, they don't know what you're doing. Jason: [2:07] Wait, they didn't have a big May 4th section in the Hallmark store? Scot: [2:11] They did. The little ladies didn't know. Jason: [2:13] The overlap of people that still buy Papyrus cards and celebrate May 4th is probably not great. Scot: [2:21] It was very humbling. It was a humble May the 4th, but I got my glasses and I was happy. I'm happy for you. And then tonight we had tacos for dinner, so I'm hitting all the holidays. Jason: [2:30] I feel like we should have tacos for dinner every night, whether it's Cinco de Mayo or not, but I am happy for that. Scot: [2:35] We do have a lot of tacos, but this was a special Cinco de Mayo edition. Jason: [2:42] Well, very well done, my friend. Scot: [2:44] Thanks. Well, listeners of the pod have been all over me. They're like, why aren't you recording? And I said, it's not me. It's Jason. It's Jason.
Because you have been traveling Scot: [2:55] the earth, spreading retail geek goodness. Tell us, we are way far behind on trip updates and all the different countries. It's like you're playing, do you have like a little travel bingo where you're just like punching, what is it, 93 countries? Jason: [3:09] I do. They call it a passport. Oh, nice. Yes. Scot: [3:13] That, uh, little book that you get to carry. Yeah. Jason: [3:15] Yeah. Yeah. Yeah. I have been on a lot of trips and it sounds like you and I may be telling complimentary lies because I also, I've had an opportunity to meet a lot of listeners in the last, we'll call it seven weeks and which they're always super nice. And it's always super fun to talk to people. And obviously they're, you know, strangers recognize my voice in line at Starbucks at all these e-commerce shows. And then we strike up a conversation. And then the next question is always, where the heck is Scott? Because they're always disappointed to meet me and not you. And now the new thing is, and why aren't you producing more frequent shows? And my answer is always that you're dominating the world at Get Spiffy and that you're too busy. Scot: [4:00] Uh-huh. I see. Okay. Jason: [4:02] Well, we're both very busy. Scot: [4:05] You're traveling more than I am. I'm busy washing cars. Jason: [4:08] Yes. I think both are fairly true, but I did finish a grueling seven-week stint where I got to come home a couple of times on the weekends, but I basically had seven weeks of travel back to back. In my old life, that would not have been that atypical, but post-pandemic, The travel has been a little more moderate. And I have noticed that I have my travel muscles have atrophied and I don't really want to redevelop. Jason: [4:35] So the seven weeks was a lot. Please don't ask me for trip reports for all the commerce events because I kind of can't remember some of them. They're all a little bit of a blur. 
But I was at Shoptalk, I think, since the last time we talked, which is, of course, probably the biggest show in our industry. And that was a very good show. I did get to see a lot of our mutual friends and a lot of fans of the show there. So that was certainly fun. And maybe in another podcast, we can do a little recap of some of the interesting things that came out of Shoptalk. I did produce a couple of recaps in other formats for work clients, so we could certainly pull something together. I also went to a vendor show. One of the e-commerce platforms out there is called Commerce Tools, and they had their annual customer show, which is called Elevate, in Miami. So I got a chance to go visit there. They're one of the commerce platforms that I would say is winning at the moment in the pivot away from the old-school monoliths to these new sort of SaaS-based solutions. And Commerce Tools in particular are kind of pioneers in pushing an actual certification around a more modern commerce stack that they coined MACH (microservices-based, API-first, cloud-native, and headless). And I think we've had Kelly from Commerce Tools on the podcast Jason: [5:51] in the past to talk about that. But that was a good show. I got to meet a lot of listeners there. And a funny one, several listeners were like, Jason: [5:59] I would apologize for our publishing schedule lately, and they're like, I'm cool with it. I like that you don't do a show if there's not something worthwhile. And then, you know, when I do get a show, it's like a treat. So I don't know if they're being honest or not, but that made me feel a little better about some of our tardy shows lately. So those were good events. I also spent a week in India with some clients, and that was super interesting: a lot of commerce activity going on there, a lot of different market dynamics than here.
So that's kind of intellectually pretty fun to learn about and see what's working there that might be working here or, you know, why things tend to play out differently there. So that's interesting. And then I have a lot more international trips booked right now. Jason: [6:48] So coming up, I'm going to Barcelona, London, Paris, and Sao Paulo. So if anyone has any favorite retail experiences in any of those cities, please send them my way. I'll be doing store visits in all those cities. And if you're based in any of those cities, also drop me a line. Hopefully we can do some meetups while I'm out there. Scot: [7:07] Cool. It's Jason's world tour. You can do a little pod while you're there. Jason: [7:12] We have done a bunch of international pods in the distant past. I remember hotel rooms in South Korea and all over the place, Jason: [7:19] Japan, that we've cut shows from. So we totally could. Scot: [7:23] Yeah. We'll have to do it. Where in the world is retail geek? That could be the theme song. I just sampled that. Jason: [7:30] Yeah. So besides cleaning the world's cars, what have you been up to, Scot? Scot: [7:35] Well, it's kind of funny. My worlds are colliding. So a lot of the analysts that you and I know from the e-commerce world are creeping into the auto world, and their gateway drug is Carvana. So in the world of retail, we have Amazon, obviously. Well, Carvana is kind of Amazonifying used cars. They had a bit of a drama kind of situation. They were the golden child of online cars. And then they totally pooped the bed. They did this acquisition. They loaded up with debt. And then after, I think it was '21. So they had a good COVID. They surged. And then the debt got in front of them. Used car prices bop around, and they kind of got in an Opendoor situation where they had bought a lot of cars for more than they were suddenly worth.
And then they plummeted and everyone thought they were going out of business, but they have had a resurgence. So it's causing a lot of the internet analysts to now pick up auto tech or mobility or whatever you want to call it. So it was fun. I got to do a live chat with Nick Jones. He's been a friend of the show. I don't think we've had him on due to some compliance stuff that his company has rules around, but he's at this firm JMP, and it was kind of wild to talk with someone about both Amazon and what we're doing at Spiffy, which is basically a lot of Amazon principles applied to car care. So it was interesting to have someone reach out and say, hey, I think this is a thing, and everyone tells me I should talk to you about it. And I was like, oh, yeah, I would love to. So it's kind of fun. Jason: [9:01] That's very cool. And isn't it also a thing, I think half the vehicles on the road are now owned by Amazon. So I assume that's an overlap too? Scot: [9:09] Yeah, not half, but a lot are. The number of last-mile delivery vehicles is very, very large. And we work with a lot of them, so it's kind of fun. I started Spiffy somewhat to get away from Amazon, and it's still all I can talk about. Nope. So, embrace it. I love Amazon. Love me some Amazon, Jason. Jason: [9:29] I'm glad you do. I love them too, but I feel like I spend most of my career unsuccessfully helping people compete with them. Scot: [9:38] Hey, you've got to play one side of the coin. It's a gig: how to be more like them or how to fight them. Jason: [9:43] It's a gig. It is indeed. Yeah. Scot: [9:46] Cool. I thought we were going to talk about some Amazon news. But before we jump in, you have done your magic with your data analysis interns. And I'm sure there's an LLM and an AI thrown in there. Let's start with some of the things you're seeing in commerce trends from the data that's out there. Jason: [10:07] Yeah.
So as everyone knows, I have a little bit too much of an infatuation with the U.S. Department of Commerce retail indicators data. And these guys, you know, publish monthly estimates of retail sales in a bunch of categories. And, you know, we've talked about this many times on the show, but broadly over the last several years have been really interesting in retail. 2020, 2021, and 2022 were the greatest three years in the history of retail. Like we mailed like $6 trillion in economic stimulus. People didn't travel or go to restaurants as much. And so we sold way more goods than ever before. And so those three years, retail grew respectively at like 8%, 14%, and 9%. The 20 years prior, retail averaged about 4% a year in growth. So normally pre-pandemic, you'd expect 4% growth. We had these three, you know, wildly pandemic influence years where we grew really fast. And then last year we finished a little below 4%. So, so we were around, I want to say it was like 3.6%. So it was growth. It would, it would have been in line with pre-pandemic growth, but it certainly felt like a significant deceleration from those heady pandemic years. And so, you know, people are super interested to see how does 2024 play out? Does it? Jason: [11:32] Kind of return to pre-pandemic levels, like what is the new normal? Jason: [11:37] And we now have the first quarter's data from the U.S. Department of Commerce, and I would call it kind of a mixed bag. If you just look at the raw retail data that the U.S. Department of Commerce publishes, they're going to tell you that retail grew in the first quarter 2.8%. So that's a little anemic, right? Compared to historical averages, that's not a great growth rate. Most of the practitioners that follow this podcast care about a particular subset of retail that the National Retail Federation has dubbed core retail. And so the National Retail Federation pulls gas and automobiles sales out of that number. 
And gas is a decent size number and it's very volatile based on the commodity prices of gas. And auto is a huge number that has, as you're well familiar, its own idiosyncrasies. And so that's how they justify taking those two out. And if you take those two out and you get this core retail number, retail in the first quarter grew 3.9%. So kind of to align with how the NRF talks about retail, we'll say Q1 overall was 3.9%, which is very in line with the pre-pandemic historic average. So disappointing by pandemic standards, but kind of traditionally what we would expect. Jason: [13:05] What is unique in that number is. Jason: [13:09] That it's very bifurcated. There are clear winners and losers, both by categories and specific practitioners. So if you break down the categories, e-commerce is the fastest growing chunk of retail. I'm sure we'll talk more about that. Restaurants were the next fastest growing categories. And categories like mass merchants and healthcare providers outperform that industry average, every other segment of retail underperformed the industry average. So things like furniture stores did the worst, building materials did really poorly, gas stations did very poorly, electronics did poorly, and side note, electronics have been the worst performer since the pandemic, which is kind of interesting and challenging. So you've had this weird couple categories doing really well, a bunch of categories doing really poorly. And then within the categories even, if you look at the public company's individual earnings calls, what you tend to see is a couple of big players performing really well in overall retail, that's Amazon and Walmart. And then a lot of other retailers really struggling. So that even that's like in general merchandise, it's Amazon and Walmart that are lifting the boats. And it's folks like Target traditionally that have performed really well are actually struggling at the moment. So the average is kind of hard to follow at the moment. 
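The "core retail" adjustment described above is just subtraction before computing growth. A sketch with invented dollar figures; only the resulting roughly 3.9% growth rate echoes the episode, the inputs are made up for illustration:

```python
# "Core retail" strips gasoline and auto sales out of the headline retail
# number before computing year-over-year growth. The dollar figures below
# are invented for illustration; only the resulting ~3.9% echoes the episode.

def growth_pct(current, prior):
    return (current - prior) / prior * 100

def core_retail(total, gas, autos):
    return total - gas - autos

prior_core = core_retail(total=1_800.0, gas=150.0, autos=350.0)    # 1300.0
current_core = core_retail(total=1_862.0, gas=155.0, autos=356.3)  # 1350.7

core_growth = growth_pct(current_core, prior_core)  # ~3.9
```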
Jason: [14:37] But that is kind of how things play out. And then we have some preliminary e-commerce data, but the actual Q1 e-commerce number that the U.S. Department of Commerce publishes will publish on May 17th. So that's 12 days from now. Jason: [14:53] And crunching the numbers that we have available at the moment, that growth is likely to come in at somewhere between 8% and 10%. I'm guessing more like 8% or 9% growth. And so that also is twice as good as overall retail, and it's more than twice as good as brick-and-mortar retail. But that is noticeably slower than the historic e-commerce growth rates pre-pandemic. So kind of file those two numbers away. The overall retail industry is growing at 3.9%. The overall e-commerce industry is growing at about 9%. And then we have our friends at Amazon that dropped their earnings announcement just before May 4th so that they could celebrate May 4th, I think. Scot: [15:39] Yeah, yes, that's a good setup. And without further ado, let's talk about Amazon's fourth quarter. It wouldn't be a Jason Scott show without a little bit of... Scot: [16:01] That's right. On April 30th, Amazon announced their first quarter results. And the setup coming into these, so you had the data you talked about, but like to drill in a little bit. We had Meta, the artist formerly known as Facebook, and Alphabet, the artist previously known as Google. They announced and they both basically told Wall Street, AI is the cat's pajamas and we're going to spend anywhere between $10 and $40 billion of capital expenditures on it, meaning NVIDIA chips. So it turns out the way to play all this is basically buying NVIDIA. So hopefully you bought some NVIDIA stock. Maybe this is not a stock recommendation or when it's too late, so... And also don't take stock recommendations from podcasters. 
Anyway, so there was all this angst and people were a little freaked out coming into the Amazon results, because Meta was down pretty substantially, 20 to 30 percent. And Alphabet was up substantially. You also had Microsoft come in there, and they really crushed it. Their Azure is really lighting it up with AI. And they announced that they were going to invest a lot. And there's this rumor of a $100 billion project. It's got a name like Starship or something, but it's not Starship. Spaceship? Stardust? I don't know what it is. But it's going to be this mega data center, and they literally can't find a place to put it because it's going to consume so much power. So they're going to have to maybe build a nuclear plant next to it or some wacky thing. Scot: [17:31] Anyway, that was the setup. So coming in, Wall Street was very, very concerned about Amazon's AWS division, which is their cloud computing. Because if Alphabet is building out their infrastructure, and so is Azure, those are the two biggest competitors for AWS. And is AWS getting its fair share? And is it going to announce that it's going to have to go build some $40 billion kind of a thing? Also, another thing, and I'm kind of curious if you're seeing this with your clients, but I follow this stuff, and you can't do much without seeing AI everywhere. But the part I'm most interested in is what big enterprises are spending money on. This is your Fortune 500s. They're all experimenting and really getting into it. And where they're finding a lot of good use cases is training on their data. So they'll say, you know, hey, I'm Publicis. How many documents do you think are inside of Publicis? I don't know, 8 trillion documents.
Documents. And, you know, wouldn't it be helpful to know just the ones I created, and who is this retail geek, and that he's created, you know, 90 of those? So imagine you're starting new at Publicis. You're going to be like, where do I start going through some of these documents? And if you had a chatbot that was like, hey, I've read all that, I can navigate you through everything that's been published, or, you know, whatever. Scot: [18:50] I'm providing a very big metaphor; it would certainly be more divisional and all this kind of stuff. But where big companies are spending the bulk is taking their data, in whatever format it's in, be it a relational database, a PDF, whatever it is, and trying to train on it. They don't want to train a shared LLM so that other people get the benefit of that and can see any confidential data. So that's really important. So it needs to be gated, and these types of things. Because of that use case, OpenAI is not great, because people are very worried. A, it's very expensive, and it's only an API. So OpenAI hosts it itself and you call it through an API. Scot: [19:25] Those API calls are very expensive. And as OpenAI has gotten more popular, there's more latency. It's taking forever to get answers out of this thing. And a lot of people are very concerned that even though there are ways to call the API such that your data stays in a window and isn't being trained on, maybe it leaks in there. So because of all these elements, the open-source models are becoming very popular. And right around the time Meta announced, they announced their Llama, which has become quite popular. And what's nice is you can host it wherever you want. And it's kind of like WordPress, where if you are a serious WordPresser, you can host it somewhere yourself, and you can kind of understand that. Otherwise, there are other people that will host it for you.
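The gated, internal-documents assistant Scot is describing is usually built around retrieval: find the few relevant documents and hand only those, plus the question, to a self-hosted model, so nothing confidential leaves the company. Below is a toy sketch of just the retrieval step; the two-document corpus and the naive keyword-overlap scoring are illustrative stand-ins, not any real product's behavior.

```python
# Toy retrieval step for a gated internal-document assistant. Documents are
# scored by word overlap with the question; only the top match would be passed
# to a self-hosted model. Corpus and scoring are illustrative stand-ins.

def tokenize(text):
    return set(text.lower().split())

def top_document(question, docs):
    scores = {
        name: len(tokenize(question) & tokenize(body))
        for name, body in docs.items()
    }
    return max(scores, key=scores.get)  # name of the best-matching document

internal_docs = {  # hypothetical internal corpus
    "onboarding.md": "how to get started your first week at the company",
    "expense_policy.md": "travel and expense policy reimbursement rules",
}

best = top_document("what is the travel expense policy", internal_docs)
# `best` and the question would then go to a locally hosted model, so the
# documents never leave the company's infrastructure.
```

Real systems replace the keyword overlap with embedding similarity, but the privacy property is the same: only retrieved snippets ever reach the model.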
But it has the nice feature that you're just getting the weights and whatnot, and it's pretty clear, pretty obvious, that it's not training itself on your data. So a lot of people like it because it's quote-unquote free. It's not API usage-based; it's a pay-once-to-set-it-up, pay-for-some-resources type thing and you're done. And it's also not going to train on the data. That's one of many. There are probably 10 or 20 pretty commercial-grade open models out there. Scot: [20:38] Okay. So that's kind of the setup to get to the earnings. So from a big picture, this was a really good quarter. Asterisk: the guide made Wall Street a little bit nervous. So... Scot: [20:53] And one of our research analysts just said it's Stargate, which is also a sci-fi series. They must have that on Prime Video or something. There's probably some callback there. Scot: [21:01] So they beat for Q1, but then they also kind of tell you what's going on the next quarter. Amazon doesn't provide full-year guidance; they just kind of give you a snippet. So when they report one quarter, they then tell you what they think the next quarter is going to do. So Wall Street got a little bit ahead of its skis, and the guide for Q2 was below what Wall Street wants. So it wasn't what we'd call a beat and a raise, which is where the current quarter is a beat and the next one they increase. It was a beat and a guide down. So that probably tempered Wall Street. But ever since Andy Jassy came in, this has been his MO: to be pretty conservative, because Wall Street's very much an expectation engine. And if you can beat and tamp down expectations, it's a little bit rougher in the short term from a stock price, but it makes next quarter better, and so on and so forth. So it's a smart way to manage the long-term expectations around your stock. Okay. So revenue came in at $143 billion versus Wall Street at $142.
So pretty much in line. But most importantly, where Amazon really threw people off was on operating income. Yes, Amazon is profitable. Operating income is the proxy here; true Amazonians would tell you, no, it's cash flow. We can go into that, but this is kind of the way they report to Wall Street, the standard operating system, if you will. So this is what we're going to use, but it's a proxy for cash flow. Scot: [22:28] That was $15 billion for the quarter and Wall Street expected $11 billion. Well, you know, $4 billion in a world of $143 billion doesn't sound like much, but between 11 and 15, that's a very material beat. What is that? Like 38%, something like that. Scot: [22:44] So that was a really nice surprise. And, you know, Amazon goes through these invest and harvest periods, and everyone's been feeling like they're going to be back in investing, which would mean they're going to start lowering operating income as they invest, but it's actually beating expectations. Also, this is the fifth quarter Amazon has come in at the high end of its guidance or above its guidance on operating income, and that corresponds with when Jassy came in. So this is his MO right now: beat and lower, beat and lower; exceed expectations, tamp them down, don't get ahead of his skis. And it's working really well. Jason: [23:24] Sandbagging for the win. I like it. Scot: [23:26] Yes, it is. Having run a public company, this is a lesson I learned painfully. So that's something we can talk about over beer sometime. Jason: [23:33] I will book that date. Yeah. And the retail business sort of followed in line with that. They had some nice growth, but the real standout number was the improvement in margins and the significant positive operating income from the retail segment. So I think the actual operating income from U.S. retail was like $5 billion, and the Wall Street expectation was $4.3 billion.
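The two beats quoted above ("like 38%, something like that" on the $15 billion versus $11 billion, and the $5 billion versus $4.3 billion for U.S. retail) work out as follows, using only the figures quoted in the episode:

```python
# Beat percentage: how far actual came in above the Wall Street expectation,
# using the figures quoted in the episode (in billions of dollars).

def beat_pct(actual, expected):
    return (actual - expected) / expected * 100

operating_income_beat = beat_pct(actual=15.0, expected=11.0)  # ~36.4%
us_retail_beat = beat_pct(actual=5.0, expected=4.3)           # ~16.3%
```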
So again, that was another strong beat. Total revenue, which is not the same thing as retail sales, as we've talked about on the show many times (we would use GMV as a proxy for that), was $86.3 billion for the quarter, which I think was in line with analyst expectations. Jason: [24:27] And I think this was the largest operating income that Amazon has ever reported for the retail business. So that was super interesting on the domestic side. Traditionally, domestic has done pretty well and international has been a money loser because, you know, they've been less mature. They've been investing a lot in growing international and they haven't had the same kind of margins. This was the first quarter that they reported positive operating income for the international division. So that's another super encouraging sign for investors that maybe they've kind of passed that inflection point on a lot of their international investments that they've made in the EU and Japan and the UK, which, it reminds me, is not part of the EU anymore. Jason: [25:13] So they kind of beat international expectations across the board on income. Revenues were lower. So revenues were like thirty-one billion dollars, which was below expectation. Jason: [25:25] But they earned like nine hundred million in operating income. And I want to say the Wall Street expectation was like six hundred million. So again, like a 30 percent beat, which is pretty darn good. They also, a bunch of analysts have, you know, taken these revenue numbers and they try to back into a GMV number. And I would say the bummer at the moment is there's a fair amount of variance in the estimates; different analysts have different models. So I have kind of been putting a model of the models together and trying to find a midpoint. And based on that, Amazon's GMV globally probably went up 11.5% for the quarter.
So if you're comparing this to other retailers or the U.S. Department of Commerce number, overall GMV went up 11.5%. The U.S. was stronger. So the U.S. probably went up 12.2%. So again, we talked about core retail was up 3.9%. Well, Amazon U.S. GMV was up 12.2%. So, you know, three times faster growth than the retail industry overall. Jason: [26:39] And again, Amazon is mostly e-commerce, very little brick and mortar, Jason: [26:44] which we'll talk about in just a minute. But even if you're comparing Amazon to that e-commerce number, if e-commerce comes in at 8% or 9% and Amazon's at 12%, they're by far the largest e-commerce player out there and they're still substantially outgrowing the average, which, you know, is very impressive and should be very scary to every other competitor out there. Jason: [27:08] One analyst kind of put together an estimate of what they thought the earned income contribution from Amazon was for retail and ads together, pulling AWS out. And they had it at $27 billion in earned income if Amazon was just a retailer with no AWS. And that puts them right in the ballpark of Walmart, which threw off about $29 billion in earned income or operating income. I keep saying earned, but I mean operating income. So that is all pretty impressive and simultaneously super scary. Jason: [27:45] Scot, did you drill down into the online segment at all? Scot: [27:49] Yeah. And, you know, what I would tell listeners is picture a block diagram where you have this big, big rectangle, that's the whole Amazon entity. And, you know, so what we're going to do is talk about the segments. And the first segment is the biggest one, which is the retail business. And that, that's what you just... Jason: [28:04] Biggest and best. Wouldn't you say? Scot: [28:06] Coolest. Jason: [28:07] Coolest. All right. Scot: [28:08] Cool. Okay. Yeah. Yeah. Okay. I'll, you know, I don't know. Jason: [28:11] It is for you.
Scot: [28:14] Um, I think the whole enchilada. I like the way they do this and I'm trying to replicate it. It's 50. We'll talk about that in a second. So then, you know, another segment is AWS. Another segment, I think, should be marketplace, but they don't break it out, so it's just kind of hidden inside of the blob that is retail. So we tease some of that out here on the show. They purposely hide it in there so no one knows how awesome it is, I think. And then they've got AWS, ads, and a couple other things, but we'll talk about this. So as you dig into the retail business, there's a couple of ways to look at it. You can look at it by domestic and international, which Jason just did, Scot: [28:50] or you can look at it by online and physical store. So the online biz grew 7% year over year, which, if I remember your stats... well, you don't have it until May 17th, so on May 17th we'll be able to know how that compared. But probably the one you can compare is the offline biz, which is the store comp that they have. And Jason, you saw on that one, what'd you see? Jason: [29:16] Yeah, so physical stores grew 6.3%. So again, you know, when we say all of retail grew 3.9%, a big chunk of that's e-commerce. Brick and mortar probably grew at like two to 3%. So Amazon's brick and mortar growing at 6.3% is actually super impressive. And it's kind of interesting. For several years, Amazon has had experiments in a bunch of retail formats. They've had these Amazon Go stores. They had Amazon five-star stores. They had bookstores. They had a fashion store. They're trying all these things. And of course, the biggest chunk of their stores is they own Whole Foods. And so offline stores for Amazon were kind of a mix of all these different concepts. In the last couple of years, they've kind of cleaned house and gotten rid of all those concepts.
And so, you know, nominally there are a few of their own grocery stores, called Amazon Fresh, open, but the vast majority of offline retail for Amazon is Whole Foods. And for it to be growing at 6.3% in the current climate is a really good sign for Amazon. And, I would say, somewhat impressive. You know, on the earnings call, they announced that they're working up a new format for Whole Foods, which is a smaller format store that's going to open in Manhattan. So I have that on my tickler file to go visit when it's open. Jason: [30:38] You know, the whole grocery space for Amazon is super interesting, but maybe we'll talk about that a little bit more later. But I will call out, they did launch a service that there's been some controversy over. They launched a $9.99 a month grocery delivery service, which essentially lets you have all-you-can-eat free grocery delivery to your home for an incremental fee of $9.99. And they're spinning that as, you know, a cool new grocery service that enables more people to shop for groceries online. And there are a lot of articles about it, like, Jason: [31:13] they used to have free grocery delivery included in your Prime membership, right? And so, I look at the big arc of all this and say, there used to be a lot more free services in Prime that they've kind of peeled out, then started charging for, and now they'll let you get free again for another $120 a year. Jason: [31:32] So interesting things happening with grocery that we could probably talk more about later. But I'm kind of eager to dive into some of these other businesses like AWS. Scot: [31:42] Yeah. So that's the one that everyone was really waiting on the call to hear about, how it went. And good news, AWS exceeded expectations. Everyone thought it was going to grow 14% and it came in at 17%. And Wall Street, well, they like a lot of things, but they like beating expectations; that's important to them.
But their favorite thing is ARG. And that is not a pirate day thing, ARG. It is Accelerating Revenue Growth. Wall Street loves that more than anything. And that's what they delivered for both the ads and the AWS parts of the business. And what that means is that the law of large numbers kicks in. So back on the retail business, the only time we see that accelerate is in the fourth quarter, and that's seasonal acceleration, right? We've gotten used to that for decades now. It always happens in the fourth quarter and whatnot. So it's what you would expect. But this is quite unusual for a relatively mature business. This thing's $25 billion a quarter. So this is a $100 billion business that accelerated. And so that tells us that there is a lot more wood to chop here. It has not gotten near its addressable market. And it really allayed fears that they were losing massive market share because they're, quote unquote, behind on AI to Azure, which is Microsoft's offering, and then the Google hosting solution as well. Scot: [33:05] That does not seem to be the case. So they did very well. So they came in at $25 billion and Wall Street was expecting $24.6. So that accelerating is what really made everyone very happy. And then the operating income came in at $9.5, way ahead of Wall Street at $7.5. So another pretty material 20% beat on this component at the bottom line. And this is really interesting. There was some really good language around this. And this has been Jassy's statement all along, and it's coming true.
Amazon's early play was: we're going to be agnostic on models, and it's kind of like, bring your own model, we'll work with anything. Now, with OpenAI, they're never going to host OpenAI, but they're not going to stop you from working with it. And then for these open source ones, they've made it very easy for you to spin up an AWS instance and throw a little Llama in there. And I would make a llama noise if I knew what they sounded like; I guess they make like a sheep sound. So you throw a little Llama in there and it does its thing. And, you know, the benefit of them being agnostic on these LLMs is most likely they have some or all of your data, right? Because they've been at this so long that if you're doing cloud computing versus on-prem, most likely a lot of, if not all of, your data is in AWS. Extracting that data... you know, imagine you had terabytes, or what's the biggest, Scot: [34:31] bigger than terabytes? I always forget this one. Jason: [34:33] Petabytes. Scot: [34:34] Petabytes of data at AWS. They literally have a product where they can send a truckload of hard drives around and get your data. That's how much data there is; you could never push it across the internet. So if they have that data and that's what you want to train on, you don't want to have the latency of the internet between your data and the training. So you'd really need the LLM to operate near your data.
And this is what they predicted two or three years ago, kind of around the launch of ChatGPT, when all this stuff really started to accelerate. And it's coming true, so everyone feels a lot better about that. And their body language this time: a lot of times they were kind of like, this is what we're doing and we're pretty sure it's going to work. Now they're like, it's working. And people really felt relief around this, because there was a set of people that believed it, but then, you know, OpenAI's pitch is: nope, our LLM, we're spending billions of dollars, we're going to be so far ahead, none of these open source things are going to keep up. If you don't have us, you're going to be so far behind, you'll be like playing with crayons and everyone else is going to be playing with quill pens. Scot: [35:42] So it was really good to see that this is not what's happening, that enterprises are embracing these open source models. They are in the same zip code performance-wise on results, and much cheaper than OpenAI's offerings. And what Amazon said specifically was very positive around what is kind of abbreviated as Gen AI, for generative AI. And they said that it already is a multi-billion dollar run rate business. And you always have to parse what they say. So multi-billion can be anywhere between 1 and 9.9, right? And you'll see why I drew 9.9 there. Scot: [36:25] And it's inside, as part of that big AWS number. And they believe it can rapidly be tens of billions. So they're basically saying it's not double digit billions, so it's single digit billions, which is where I get one to nine point nine. But they basically hinted that it is growing so rapidly inside of there that it's going to be tens of billions, and this is why they saw accelerating revenue growth, which made everyone happy. It wasn't just people, you know, moving some more boring loads around, relational databases or something; it was the juicy AI stuff. So this got everyone so lathered up that three analysts did price target increases, and they cited this as one of the reasons. The biggest increase was from Susquehanna, and they put the price target up to $220. At the time all this happened, the stock was at $175, and today it's around $185. So it's been up nicely, but $220 is a pretty big move even Scot: [37:20] from where they expect it. That's where they're thinking; I think most of these guys look at a year to two years as a time horizon on these price targets. And that's the high. You know, again, there's a wide range; some people think it's going to go down, some people think it's overpriced, so go do your research. This is not a stock recommendation. But I just thought it was interesting that people get really, really excited by this whole Gen AI thing, largely from the body language. Amazon doesn't pound their chest much, so the fact they did was kind of a new way of managing Amazon, and Jassy's pretty conservative. So he must have felt pretty good about it, but also that they needed to allay these competitive concerns everyone's been talking about. Jason: [38:05] Yeah. It feels like a pretty big prize out there. Jassy and the whole team always talk, just on AWS, even before you get to Gen AI; they always remind everyone, hey, 85% of the workloads are still on-prem.
So as big as AWS looks, if the long-term future is 85% of the workloads on the cloud and only 15% on-prem, there's a lot of headroom still in AWS. And then, you know, you add this new huge demand for AI on top of all that. And it's almost a limitless opportunity. And I want to tie the AI back to retail, though, for just a second, because there's another bit of news that I haven't seen covered very much, but it is super interesting to me. Jason: [38:51] There's a particular flavor of AI out there, a subset of generative AI, that's now being called agentic AI. And that's sort of a clever amalgamation of agent-based AI. And there's a very famous AI researcher, this guy Andrew Ng. He's a co-founder of Coursera. He's done a bunch of things. He was the head of Google Brain, which was one of the first significant AI efforts. And I want to say he was on Time's 100 most influential people list in 2013 as an AI researcher. So the dude's been around for a long time. He is one of the biggest advocates for this agentic AI. And the premise is that if you just ask an LLM, you take the best LLM in the world and you ask it to do something for you, that's called zero shot. You give it an assignment, and you take the first result you get. That's zero shot. You get pretty good results. But if you... Jason: [39:53] turn that LLM into multiple agents and break the task up amongst those agents, potentially agents even running on different LLMs, you get wildly better results. Jason: [40:05] And so his research kind of showed that, hey, if Jason goes to write a PowerPoint presentation for his client explaining what's going on in commerce, and I just give that to the turbo version of ChatGPT-4, I'll get a pretty good deck. But if I say, hey, I want to create four agents.
I want to create a consultant to write the deck, and a copywriter to edit the deck, and an editor to improve the deck, and three people to pretend to be mock customers to poke holes in the deck, and have all those agents work on this assignment. I could give that assignment to ChatGPT 3.5 and it would actually output a better work product than the newer, more advanced model, by breaking the job into these chunks. And so in retail, you think about, like, this is the idea of assigning higher level jobs to shopping, right? So instead of going to Amazon and saying, oh, now it's an AI-based search engine and I'm going to type a long form query into search and get a better result, Jason: [41:09] the agentic AI approach is, I'm just going to say to Amazon, never let me run out of ingredients for my kids' school lunches. And the agent's going to figure out what is in my school lunches and what my use rate is for those things and what weeks I have off from school and don't need a school lunch. And it's just going to do all those things and magically have the food show up. And this is a long diatribe, but the reason it's relevant is that this dude, Andrew Ng, was named the newest board member at Amazon three weeks ago. Scot: [41:40] Very cool. Jason: [41:40] I did not see that myself. Yeah. And so if you're wondering where Amazon thinks this is going, this, in my mind, ties all this tremendous opportunity in generative AI and the financial opportunity in AWS directly to the huge and growing retail business that Amazon runs. Scot: [42:02] Very cool. Oh yeah, I had not seen that. So maybe Wall Street picked up on that, I'm sure. And maybe that was another part of the excitement. Jason: [42:09] Yeah. But all of that is just peanuts compared to the real good business in Amazon, which is the ads business. So again, you know, Amazon used to obfuscate their ads business.
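The zero-shot versus multi-agent contrast Jason described a moment ago can be sketched as a pipeline of role-prompted calls. This is a minimal illustration of the pattern, not Ng's actual framework; `call_llm` is a stub standing in for any real model API:

```python
# Minimal sketch of the agentic pattern: break one assignment into roles
# and pass each agent's output to the next, instead of one zero-shot prompt.
# call_llm is a stand-in; a real version would call a model API with a
# system prompt describing the role.

def call_llm(role: str, task: str) -> str:
    # Stub: tag the output with the role so the pipeline is visible.
    return f"[{role}] {task}"

def zero_shot(task: str) -> str:
    # One prompt, first result: what Jason calls "zero shot".
    return call_llm("assistant", task)

def agentic(task: str) -> str:
    # Chain of specialized agents, each refining the previous output.
    draft = call_llm("consultant", task)
    edited = call_llm("copywriter", f"tighten this draft: {draft}")
    reviewed = call_llm("editor", f"improve structure: {edited}")
    # Mock customers poke holes; the editor revises once more.
    critiques = [call_llm(f"mock customer {i}", f"critique: {reviewed}")
                 for i in range(3)]
    return call_llm("editor", f"revise {reviewed} given {critiques}")

print(agentic("a deck on what's going on in commerce"))
```

The point of the research Jason cites is that this decomposition lets a weaker model outperform a stronger model used zero-shot.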
They've for a number of quarters now had to report it separately in their earnings, because it's so material. And it was another good quarter for the ads business. It's hard to say whether it's actually accelerating growth or not, because the ads business is very seasonal. So the ad business grew 24.3% for the quarter versus Q1 of 2023. Q4 grew faster; Q4 grew at 27%. But the 24% growth is much faster than the prior Q1 year-over-year growth rate. So however you slice it, it's a good, robust growth rate. If you add the last four quarters together, you get about $49 billion worth of ad sales. There's lots of estimates for how profitable ad sales are, but there's no cost of goods for an ad, right? Jason: [43:13] And so it's very high margin. So if you just assume, I think 60% gross margins is a very conservative estimate. But if you assume 60% gross margins, that means the ad business spun off $29.5 billion of operating income over the last 12 months. And to put that in comparison, AWS, as big and profitable as it is, with twice as much revenue at over $100 billion now, spun off like $23 billion in operating income. So the ad business is a much more meaningful contributor to Amazon's profits than even AWS. Jason: [43:51] And another way I've been starting to think about this is: what percentage of the total GMV on the Amazon platform are the ads? And they are now 6.5%. So that's a very significant new tax. You know, as Amazon has hundreds of millions of SKUs available for sale, no one's ever going to find your SKU or buy it if you don't do some marketing on the platform for that SKU. And that's this 6.5% tax that Amazon's charging. And in the same way we said, hey, AWS is a really robust business and then there's this thing called generative AI that can make it even huger, all of this ad revenue we're talking about is really coming from their sponsored product listings, which is like basic search advertising on the retail platform.
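Jason's margin math works out if trailing-twelve-month ad revenue is near $49 billion, which is consistent with the roughly $50 billion of search ads he mentions a moment later: 60% of $49B is about the $29.5B of operating income he cites. A sketch of that back-of-envelope estimate, with the 60% gross margin as the stated conservative assumption:

```python
# Back-of-envelope from the discussion: trailing-twelve-month ad revenue
# at an assumed 60% margin, compared against AWS operating income.
ad_revenue_ttm = 49e9        # ~$49B of ad sales over the last four quarters
assumed_margin = 0.60        # conservative, since an ad has no cost of goods
ad_operating_income = ad_revenue_ttm * assumed_margin  # ~$29.4B

aws_operating_income = 23e9  # ~$23B, per the episode
print(ad_operating_income > aws_operating_income)  # ads out-earn AWS
```

On these assumptions, ads contribute more operating income than AWS despite roughly half the revenue, which is the comparison being made.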
Last quarter, Amazon said, by the way, we have this huge-viewership streaming video service, Prime Video. And we're going to start putting ads in the lowest tier version of Prime. So unless you want to pay more, you're going to start seeing ads on Prime Video. And that's another huge advertising opportunity that hasn't been very heavily tapped yet. So the analysts are pretty excited about the upside of Amazon potentially tacking on another $6.5 billion in Prime Video ads onto the $50 billion of search ads that they already have. Jason: [45:11] And so ads are a pretty good business to be in, which is why every other retailer is trying to follow suit with their own sort of version of a retail media network. Scot: [45:22] Cool. I imagine you get a lot of calls to talk about that. Jason: [45:25] Oh, yeah. Actually, I'm sick of talking about it. So one nice thing about working at an ad agency is there are now thousands of other experts. You know, I was one of the early guys in retail media networks. Now there are thousands of other experts that are way more credible than me, so I don't have to talk about it quite as much. But it still comes up in every conversation. Scot: [45:43] Very cool. All right. So that was the basic gist of the quarter from a high level. And then it came to what's going on in Q2. So that did come in lighter than folks expected, as I said, and they guided the top line to 144 to 149; let's call it 146 and change at the midpoint. They always do this range kind of thing when they're doing their guide. And Wall Street was at 150 consensus. So, you know, a tidge below, two or three percent below, where they wanted. But the operating income guide was above Wall Street. So they're kind of, we'll take it. Comme ci, comme ça. Scot: [46:21] So that was, you know, I think Amazon tamping things down. Yeah. Now, they did talk a lot about consumers being under pressure.
So they said, and it wasn't in the Q&A, it was in the prepared remarks, and Jassy said it, which is kind of the more important stuff. And I will say it's really nice to have the CEO of Amazon back on these calls, because Bezos basically ditched them. I think he came to the first two quarters back in '97, but I honestly can't remember; he has not gone to the calls, and Jassy's been to them all. So it's really nice to hear from the CEO, and he answers very candidly, I feel. You know, he doesn't feel as kind of robotic as many CEOs when they get on here, because it is a stressful thing, that you're going to say something wrong. But there was this exchange. Well, first of all, in his prepared remarks he talked about, Scot: [47:12] I forgot to put the exact language, but he said, we're seeing a lot of consumers trade down. So, you know, we're seeing this in the auto industry. Tires are this huge thing that's under a lot of pressure right now, because people are just waiting. So there's a lot of this. You know, it's not showing up in the data that I've seen; maybe the inflation data, but not the GDP and some of the other unemployment data. But it feels like the consumer is under a bit of pressure here, and they talked about that a lot in the prepared remarks. So I thought our listeners would find that interesting. Jason, before I go into this longish little thing that I wanted to cover, did you pick up on any of that consumer stuff? Are you hearing that? Jason: [47:55] Oh, yeah, that's very common. And remember, in the beginning I mentioned that there's this weird bifurcation, that some retailers, even within categories, are doing well and others aren't. And some categories are doing well and others aren't. That's super complicated to get to the why.
But the most obvious why is that consumers feel like they're under a lot of economic pressure and are trading down and deferring certain types of purchases. The easiest way to see this is own brands and private label sales going up and, you know, national brand sales stagnating; see things like chicken protein going up and beef protein going down. You know, there's lots of examples out there, but the retailers that are best able to follow the consumer as she trades down are tending to do well. And the retailers that only cater to the luxury consumer, well, the super luxury is still doing fine; they're somewhat insulated. But the folks that haven't been able to cater to the value consumer as much have struggled more. And the non-mandatory categories have struggled more. So Andy's comments exactly mirror what we're seeing going on in market dynamics and what other retailers are saying in their earnings. It is slightly weird, because if you just look at the macros, Jason: [49:18] objectively, the consumer is doing pretty well. There's actually a lot of favorable things. But there's a ton of evidence that the consumer sentiment is that they're really worried about their household budget and are making, you know, hard financial decisions. Scot: [49:36] Yeah. It's tough out there. Well, hopefully it'll get better. So I want to pull out some tidbits from one of the questions, because this has been a theme on our pod for a long time and I thought it was really interesting. And this is going to get into the weeds of supply chain and this kind of thing, so sorry if that's not your jam. We like to talk about logistics. Scot: [49:56] Side note to you, Jason: I saw that deep dive we did on Amazon logistics is still like our number one show, all the stats and stuff, which is kind of fun. So someone cares about it. Anyway, one of the friends of the podcast, Youssef Squali, asked a question.
He's one of the analysts, and he said, as it relates to logistics (he's talking to Andy on the call): back in September, you launched Amazon Supply Chain. Can you help us understand the opportunity you see there? Where are you in the journey to build logistics as a service on a global basis? And does that require a huge increase in capex, a step function increase in capex, which means huge. So Jassy's was a very long answer, so I'm going to pull out two snippets. You can go read the transcript. Can you put a link to that in the show notes? Absolutely. Yep. Yeah. So I'm just going to give you the snippets; the whole thing is worth reading, but it would be like another 20 minutes to do that. So Jassy starts out and says: I think that it's interesting what's happening with the business we're building in third party logistics. And it really, in some ways, mirrors some of the other businesses we've gotten involved in, AWS being an example, even though they're very different businesses, in that we realized that we had our own internal need to build and launch these capabilities. Scot: [51:01] We figured that there were probably others out there who had the same needs we did, and decided to build services out of them. So this is this model that really blows the minds of traditional retailers. You know, Walmart has this huge data capability. There's this urban legend that they know when people are pregnant before they do; they can see changes in their habits. Or they know who all is on weight loss drugs; they see your buying habits so intricately that they can do that. That's a neat capability, but they view it as proprietary. And that's old school thinking. Scot: [51:32] What Amazon does is say, well, that's a cool capability; certainly someone else needs it. Let's open it up. This is one of my favorite things at Amazon.
And it's so counterintuitive. In my current car world, we're doing it a lot at Spiffy, and when I talk about this, everyone's like, why are you doing that? That's like your proprietary thing. And we're like, well, that's just how it should be; this is a better way to do it. And it's really interesting that still today, Amazon's built what I'd say is a $100 billion business out of AWS, which has used this approach, and people are befuddled by the whole thing. So I thought that was an interesting use case. And then he goes into some details there that are pretty obvious for our listeners, like how this is going to work. But then he basically brings it back around, and he wraps up and says: I would say that Supply Chain with Amazon is really an abstraction on top of each individual block of services. And in those services, he talked about all the things, you know, FBA and last mile delivery and Buy with Prime. He talks about each of those and how awesome they are. So he's basically saying Amazon Supply Chain wraps a bow around all that. And he says this collective set of business services is growing significantly. Scot: [52:43] It's already what I would consider a reasonable size business. I think it's early days. It's not something we anticipate being a giant capital expense driver, because they've already invested in all this; it doesn't require additional capex. And then he finishes and says: we have to build a lot of the capabilities anyway to handle our own business, and we think it will be a modest increase on top of that to accommodate third-party sellers. Scot: [53:05] But (there's a typo in the transcript here) our third-party sellers find very high value in us being able to manage these components for them versus having to do it themselves. And they save money in the process. So I thought that was really interesting.
So they're really leaning into this supply chain. I think that ultimately they'll open this up to more consumers, where you can send Aunt Gertrude in Detroit something from Chicago for three bucks a package and just throw it in an Amazon box, maybe a return box, and it's way cheaper than you can FedEx it. I think that's coming, but it's really interesting to see the way they think about things, and his articulation of it was very crisp, Scot: [53:45] and I really enjoyed that. I was geeking out on that when I was listening to the call. Jason: [53:50] Yeah, for sure. That actually came up in some of the conferences I was at. You know, Jeff Bezos famously wrote this memo a long time ago about kind of being an object-oriented company and having all these building blocks that people could easily access and use internally and externally. And this was kind of Andy Jassy doubling down on that. Spiffy is an example of that: you inventing some cool products that make your jobs easier, and then selling those products to your potential competitors. Scot: [54:20] Yeah. So two examples. We have some devices we've developed for ourselves. One is a tire tread scanner. It does 2D and 3D tire tread scans. It's called Easy Tread. And we developed it for ourselves because we touch 3,000 cars a day right now and we wanted to measure the tire treads. And the state of the art is a Bluetooth needle. And, you know, you have to lay on your back; the cars are on the ground for us most of the time, so you have to get underneath there, measure three things, and then it Bluetooths to a phone. Then the data entry: it doesn't have an API, so you have to take what it measured and cut and paste it into something else. It's kind of redonkulous in our world. So we developed a solution for that and we're selling it externally.
And then the big one: from day one, this has been the plan. We've built a ton of software for Spiffy. So, you know, we've got 400 technicians, 250 vans doing all kinds of services across the US, and there's no operating system for that. There's no Salesforce or Shopify for that. So we had to go build our own. And so we've built, you know, route optimization specific to this, parts integration, fitment integration, VIN lookup, all these things that are required, integration with tire suppliers, oil filter suppliers, oil suppliers, parts suppliers. So we have like 150 things we've integrated with and pulled in from all over the place. Scot: [55:44] And then labor management, all the reporting that comes along with it, all that stuff. And we're starting to license that out as its own platform to anyone that wants to do auto services. And so these dealerships and large auto service companies are coming to us and finally saying, this seems kind of obvious now, that we need the ability to go to our customers. They call it "at their curb"; they use a different language than we do. But basically what you and I would call mobile, you know, last mile delivery of the service. And we're starting to license that out. And it's a lot like AWS, right? We had to build this for our retail business, which is doing the services, and now we're licensing it out, a lot like AWS. And we have this device business. So it comes intuitively to me now, because I've been basically living this lifestyle for 20 years and watching Amazon do it. But it's been fun to build a company with this mindset of: we're going to take these things we build and give them to other, not give them, but sell them to other people. And then that makes them better, and they help us pay for all the R&D that we've done on it. Jason: [56:48] Yeah, that's very cool.
And that gives listeners a very tangible example of why we haven't been able to podcast quite as frequently as we'd like. Scot: [56:56] Yes. Jason: [56:56] I do, at the risk of making this the world's longest episode of our show, have a geeky add-on to the supply chain conversation. Yeah. So a lot of these services that they're adding, specifically to what they call Supply Chain with Amazon, are around importing services, because an increasingly high percentage of all the stuff Amazon sells is... Jason: [57:20] Amazon is taking care of importing it, right? And most often from China, but from all over the world, and taking care of all that logistics and getting it ready to sell and deliver via the world's most impressive last mile to consumers in America. And there's tons of complicated, high friction touch points and processes to flow all those goods. Well, the big competitors out there to Amazon at the moment that we've talked about ad nauseam on the show, like Shein and Temu, had this kind of direct-from-China model where they're putting all the goods on 747s, flying them over, and they're taking advantage of this loophole in U.S. customs law called the de minimis provision to not pay taxes or duties or have all these goods inspected that they ship into the U.S. And U.S. Jason: [58:07] businesses have been complaining it's unfair. There's like all kinds of talk about it. We've done shows on this and I'm sure we'll do others. So here's the new thing in supply chain. Jason: [58:15] All the people that have been complaining about this are now doing it. Because guess what's happened? A bunch of companies have been born that now help every other brand in the world take advantage of the de minimis provisions to near-shore their goods. So you're a footwear manufacturer, you make your shoes in Vietnam. Instead of shipping them to the U.S.
On a pallet and paying taxes and duties, you ship them on a pallet to Mexico, and then you send them as individual parcels across the border from Mexico into the U.S. and never have to pay taxes or duties on the stuff. So I don't know if that will last in the long run, but that's a very disruptive, significant change happening in the whole world of e-commerce supply chains as we speak. That's pretty interesting. Had you gotten wind of that yet? Scot: [59:07] No, no. That's all new to me. Thanks for sharing. Jason: [59:09] Yeah. That's probably how you're going to have to start getting your Spiffy stuff into the country now too. I won't... we won't go there. But the one other piece that did not come up in the earnings call, but has been a controversy around Amazon since our last show: news articles came out that Amazon was de-installing its Just Walk Out technology from its grocery stores. So Amazon had built Just Walk Out into several of these Amazon Fresh stores and they built it into Whole Foods. And if you know the history of Just Walk Out, the original intention was to do it for grocery stor

TechCrunch Startups – Spoken Edition
Dropbox, Figma CEOs back Lamini, a startup building a generative AI platform for enterprises


May 7, 2024 · 7:32


Lamini, a new startup with funding from Andrew Ng, has emerged from stealth with a generative AI platform aimed at enterprises. Learn more about your ad choices. Visit podcastchoices.com/adchoices

This Is Robotics: Radio News
This Is Robotics: Radio News #29 (April 2024)


Apr 25, 2024 · 23:41


Has Code Writing Capitulated To GenAI? What exactly just took place, and why?

Suddenly this March, we all woke up one morning to find code fighting for its life. Why so fast? Why so suddenly? Why so completely? Unexpectedly and quietly, code is disappearing. Why is that? Is AI's argument that convincing? Sure seems that way. It was a little like the Berlin Wall: imposingly there for a few decades, then suddenly gone and forgotten. We'll take a look at what happened to code, and what's next for robotics. Don't despair. The remedy is good!

In early 2023, U.S. tech industries cut more than 190,000 employees from the workforce. Tens of thousands were coders. Tens of thousands of individuals who spent billions of dollars to learn how to code, so that they could get a "good" job.

"The new philosophy calls all in doubt," wrote the poet John Donne over 400 years ago. Indeed, GenAI's prompt engineering has done just that. Prompt engineering in AI is the process of designing and refining prompts—questions or instructions—which are at the heart of some of the most advanced AI applications…and growing.

Join us as experts Andrew Ng, Stephen Wolfram, and Michael Welsh walk us through the new world of GenAI and the unparalleled opportunities that await for those who don't wait.

See also:
Did AI Just Free Humanity from Code?
What About You? A Primer to Combat GenAI Anxiety
Experts on AI & Robot Convergence for 2040
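The "designing and refining prompts" idea in the blurb above can be made concrete with a toy sketch. `build_prompt` is a hypothetical helper, not any real library's API; it only illustrates the move from a bare instruction to a structured request:

```python
def build_prompt(role, task, constraints):
    """Assemble a structured prompt string from labeled parts."""
    lines = [f"Role: {role}", f"Task: {task}"]
    lines += [f"Constraint: {c}" for c in constraints]
    return "\n".join(lines)

# A bare instruction, and its refined counterpart.
naive = "Write code to sort a list."
refined = build_prompt(
    role="senior Python reviewer",
    task="write sort_records(records) that sorts (name, score) tuples by score, descending",
    constraints=["standard library only", "include a docstring and one usage example"],
)
print(refined)
```

The refined version pins down the role, the exact output wanted, and the constraints, which is the craft the episode is describing.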

Let's Talk AI
#163 - Llama 3, Grok-1.5 Vision, new Atlas robot, RHO-1, Medium ban


Apr 24, 2024 · 93:54


Our 163rd episode with a summary and discussion of last week's big AI news!

Note: apology for this one coming out a few days late, got delayed in editing it -Andrey

Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

Timestamps + links:

Intro / Banter

Tools & Apps
(00:02:16) Meta releases Llama 3, claims it's among the best open models available
(00:14:01) Elon Musk's xAI Unveils Grok-1.5 Vision, Beats OpenAI's GPT-4V
(00:17:55) Reka releases Reka Core, its multimodal language model to rival GPT-4 and Claude 3 Opus
(00:21:50) Cohere Compass Private Beta: A New Multi-Aspect Embedding Model
(00:23:48) Amazon Music's Maestro lets listeners make AI playlists
(00:24:36) Snap plans to add watermarks to images created with its AI-powered tools

Applications & Business
(00:25:52) Boston Dynamics unveils new Atlas robot for commercial use
(00:30:32) TSMC's $65 billion bet still leaves US missing piece of chip puzzle
(00:36:30) U.S. blacklists Intel's and Nvidia's key partner in China — three other Chinese firms also included in the blacklist for helping the military
(00:38:37) Elon Musk says the next-generation Grok 3 model will require 100,000 Nvidia H100 GPUs to train
(00:40:22) Dr. Andrew Ng appointed to Amazon's Board of Directors
(00:41:55) Collaborative Robotics Locks Up $100M, Latest Robot Startup To Raise Big

Projects & Open Source
(00:44:08) OpenEQA: Embodied Question Answering in the Era of Foundation Models
(00:50:03) Introducing Idefics2: A Powerful 8B Vision-Language Model for the community

Research & Advancements
(00:51:21) RHO-1: Not All Tokens Are What You Need
(00:57:21) Scaling Laws for Fine-Grained Mixture of Experts
(01:03:20) Chinchilla Scaling: A replication attempt
(01:07:18) China develops new light-based chiplet that could power artificial general intelligence — where AI is smarter than humans
(01:10:45) OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments

Policy & Safety
(01:13:44) U.S. Commerce Secretary Gina Raimondo Announces Expansion of U.S. AI Safety Institute Leadership Team
(01:17:18) NSA Publishes Guidance for Strengthening AI System Security
(01:19:19) Foundational Challenges in Assuring Alignment and Safety of Large Language Models
(01:24:11) Former OpenAI Board Member Calls for Audits of Top AI Companies
(01:27:35) SoA survey reveals a third of translators and quarter of illustrators losing work to AI

Synthetic Media & Art
(01:30:25) Medium bans AI-generated content from its paid Partner Program

Writers, Ink
Vampires in Alaska with bestselling author, CJ Tudor.


Apr 22, 2024 · 63:51


Join hosts JD Barker, Christine Daigle, Jena Brown and Kevin Tumlinson as they discuss the week's entertainment news, including how the U.K.'s biggest publishers are using AI, Amazon adding Andrew Ng to its board of directors, and Amazon putting all three Claude AI models on Bedrock. Then, stick around for a chat with author CJ Tudor!

C. J. Tudor's love of writing, especially the dark and macabre, started young. When her peers were reading Judy Blume, she was devouring Stephen King and James Herbert. Over the years she has had a variety of jobs, including trainee reporter, radio scriptwriter, dog walker, voiceover artist, television presenter, copywriter and, now, author. Her first novel, The Chalk Man, was a Sunday Times bestseller, has sold in over forty countries and will be developed into a six-part drama with BBC Studios Production. Her second novel, The Taking of Annie Thorne, was also a Sunday Times bestseller, as was her third novel The Other People. Her fourth novel, The Burning Girls, was a Richard and Judy Book Club selection and is being adapted for Paramount+ by award-winning screenwriter Hans Rosenfeldt, creator of The Bridge and Marcella. She lives in Sussex, England with her family.

--- Support this podcast: https://podcasters.spotify.com/pod/show/writersink/support

Ground Truths
Daphne Koller: The Convergence of A.I. and Digital Biology


Mar 10, 2024 · 35:16


Transcript

Eric Topol (00:06):
Well, hello, this is Eric Topol with Ground Truths and I am absolutely thrilled to welcome Daphne Koller, the founder and CEO of insitro, and a person who I've been wanting to meet for some time. Finally, we converged so welcome, Daphne.

Daphne Koller (00:21):
Thank you Eric. And it's a pleasure to finally meet you as well.

Eric Topol (00:24):
Yeah, I mean you have been rocking everybody over the years with being elected to the National Academy of Engineering and Science, and right at the interface of life science and computer science, and in my view, there's hardly anyone I can imagine who's doing so much at that interface. I wanted to first start with your meeting in Davos last month because I kind of figured we start broad AI rather than starting to get into what you're doing these days. And you had a really interesting panel with Yann LeCun, Andrew Ng and Kai-Fu Lee and others, and I wanted to get your impression about that and also kind of the general sense. I mean AI is just moving at a speed that is just crazy stuff. What were your thoughts about that panel just last month, where are we?

Video link for the WEF Panel

Daphne Koller (01:25):
I think we've been living on an exponential curve for multiple decades and the thing about exponential curves is they are very misleading things. In the early stages people basically take the line between whatever we were last year and this year and they interpolate linearly, and they say, God, things are moving so slowly. Then as the exponential curve starts to pick up, it becomes more and more evident that things are moving faster, but still people interpolate linearly, and it's only when things really hit that inflection point that people realize that even with the linear interpolation, where we'll be next year is just mind blowing. And if you realize that you're on that exponential curve, where we will be next year is just totally unanticipatable.
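Daphne's point about linear interpolation on an exponential can be made concrete with a few lines of arithmetic (illustrative numbers only, assuming a quantity that doubles every year):

```python
def exponential(t, base=1.0, rate=2.0):
    """Value at year t of a quantity multiplying by `rate` each year."""
    return base * rate ** t

def linear_forecast(t):
    """Extrapolate year t+1 from the straight line through years t-1 and t."""
    last, prev = exponential(t), exponential(t - 1)
    return last + (last - prev)

# Early on the linear guess misses by little in absolute terms
# (year 1 -> 2: guess 3 vs actual 4); by year 10 it misses by 25%.
actual = exponential(11)         # 2048.0
predicted = linear_forecast(10)  # 1536.0
print(actual, predicted)
```

The absolute gap keeps doubling, which is why the same linear habit feels fine early on and badly wrong near the inflection point.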
I think what we started to discuss in that panel was, are we in fact on an exponential curve? What are the rate limiting factors that may or may not enable that curve to continue, specifically availability of data, and what it would take to make that curve available in areas outside of the speech, whatever natural language, large language models that exist today, and go far beyond that, which is what you would need to have these be applicable to areas such as biology and medicine.

Daphne Koller (02:47):
And so that was kind of the message to my mind from the panel.

Eric Topol (02:53):
And there were some differences in opinion, of course Yann can be a little strong and I think it was good to see that you're challenging on some things and how there is this "world view" of AI and how, I guess where we go from here. As you mentioned in the area of life science, there already had been, before large language models hit stride, so much progress particularly in imaging cells, subcellular, I mean rare cells, I mean just stuff that was just without any labeling, without fluorescein, just amazing stuff. And then now it's gone into another level. So as we get into that, just before I do that, I want to ask you about this convergence story. Jensen Huang, I'm sure you heard his quote about biology as the opportunity to be engineering, not science. But what about this convergence? Because it is quite extraordinary to see two fields coming together moving at such high velocity.

"Biology has the opportunity to be engineering not science. When something becomes engineering not science it becomes...exponentially improving, it can compound on the benefits of previous years." -Jensen Huang, NVIDIA.

Daphne Koller (04:08):
So, a quote that I will propose as a replacement for Jensen's, which is one that many people have articulated, is that math is to physics as machine learning is to biology.
It is a mathematical foundation that allows you to take something that up until that point had been kind of mysterious and fuzzy and almost magical and create a formal foundation for it. Now physics, especially Newtonian physics, is simple enough that math is the right foundation to capture what goes on in a lot of physics. Biology as an evolved natural system is so complex that you can't articulate a mathematical model for that de novo. You need to actually let the data speak and then let machine learning find the patterns in those data and really help us create a predictability, if you will, for biological systems, so that you can start to ask what if questions, what would happen if we perturb the system in this way?

The Convergence

Daphne Koller (05:17):
How would it react? We're nowhere close to being able to answer those questions reliably today, but as you feed a machine learning system more and more data, hopefully it'll become capable of making those predictions. And in order to do that, and this is where it comes to this convergence of these two disciplines, the fodder, the foundation for all of machine learning is having enough data to feed the beast. The miracle of the convergence that we're seeing is that over the last 10, 15 years, maybe 20 years in biology, we've been on a similar, albeit somewhat slower exponential curve of data generation in biology, where we are turning it into a quantitative discipline, from something that is entirely observational and qualitative, which is where it started, to something that becomes much more quantitative and broad based in how we measure biology.
And so those measurements, the tools that life scientists and bioengineers have developed that allow us to measure biological systems, is what produces that fodder, that energy that you can then feed into the machine learning models so that they can start making predictions.

Eric Topol (06:32):
Yeah, well I think the number of layers of data, no less what's in these layers, is quite extraordinary. So some years ago when all the single cell sequencing was started, I said, well, that's kind of academic interest, and now the field of spatial omics has exploded. And I wonder how you see the feeding the beast here. It's at every level. It's not just the cell level, subcellular and single cell nuclei sequencing, single cell epigenomics, and then you go all the way to these other layers of data. I know you plug into the human patient side as well, so it could be images, it could be path slides, it could be the outcomes and treatments and on and on and on. I mean, so when you think about multimodal AI, has anybody really done that yet?

Daphne Koller (07:30):
I think that there are certainly beginnings of multimodal AI and we have started to see some of the benefits of the convergence of, say, imaging and omics. And I will give an example from some of the work that we've recently posted on a preprint server, work that we did at insitro, which took imaging data from standard histopathology slides, H&E slides, and aligned them with simple bulk RNA-Seq taken from those same tumor samples. And what we find is that by training models that translate from one to the other, specifically from the imaging to the omics, you're able to, for a fairly large fraction of genes, make very accurate predictions of gene expression levels by looking at the histopath images alone.
And in fact, because many of the predictions are made at the tile level, not at the entire slide level, even though the omics was captured in bulk, you're able to spatially resolve the signal and get kind of like a pseudo spatial biology just by making predictions from the H&E image into these omic modalities.

Multimodal A.I. and Life Science

Daphne Koller (08:44):
So there are, I think, beginnings of multimodality, but in order to get to multimodality, you really need to train on at least some data where the two modalities are available simultaneously. And so at this point, I think the rate limiting factor is more a matter of data acquisition for training the models than it is of building the models themselves. And so that's where I think things like spatial biology, which I think like you are very excited about, are one of the places where we can really start to capture these paired modalities and get to some of those multimodal capabilities.

Eric Topol (09:23):
Yeah, I wanted to ask you because I mean spatial temporal is so perfect. It is two modes, and you have, as in the preprint you refer to, and you see things like electronic health records and genomics, electronic health records and medical images. The most we've done is getting two modes of data together. And the question is, as this data starts to really accrue, do we need new models to work with it, or do you actually foresee that that is not a limiting step?

Daphne Koller (09:57):
So I think currently data availability is the most significant rate limiting step. The nice thing about modern day machine learning is that it really is structured as a set of building blocks that you can start to put together in different ways for different situations. And so, do we have the exact right models available to us today for these multimodal systems? Probably not, but do we have the right building blocks that, if we creatively put them together from what has already been deployed in other settings? Probably, yes.
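The tile-level trick Daphne describes (train on bulk labels, read out spatially resolved predictions) can be caricatured in a few lines of numpy. This is a synthetic, linear toy, not insitro's actual pipeline; in practice the tile features would come from a vision model run over H&E images:

```python
import numpy as np

rng = np.random.default_rng(0)
n_slides, tiles_per_slide, n_feat = 200, 4, 16

# Synthetic ground truth: each tile's "expression" is a linear function of
# its features, but only the per-slide bulk average is ever observed.
w_true = rng.normal(size=n_feat)
tile_feats = rng.normal(size=(n_slides, tiles_per_slide, n_feat))
bulk_expr = (tile_feats @ w_true).mean(axis=1)

# Train on tiles, giving every tile its slide's bulk label.
X = tile_feats.reshape(-1, n_feat)
y = np.repeat(bulk_expr, tiles_per_slide)
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Per-tile predictions form a pseudo-spatial map; averaging them per slide
# recovers the bulk signal the model was trained on.
pred_tiles = tile_feats @ w_hat
r = np.corrcoef(pred_tiles.mean(axis=1), bulk_expr)[0, 1]
print(round(r, 2))
```

The point of the sketch is only the label structure: supervision exists at the slide level, yet the fitted model emits one value per tile, which is what makes the output spatially resolvable.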
So of course there's still a model exploration to be done and a lot of creativity in how these building blocks should be put together, but I think we have the tools available to solve these problems. What we really need is first, I think, a really significant data acquisition effort. And the other thing that we need, which is also something that has been a priority for us at insitro, is the right mix of people to be put together. Because what happens is, if you take a bunch of even extremely talented and sophisticated machine learning scientists and say, solve a biological problem, here's a dataset, they don't know what questions to ask and oftentimes end up asking questions that might be kind of interesting from a machine learning perspective, but don't really answer fundamental biology questions.

Daphne Koller (11:16):
And conversely, you can take biologists and say, hey, what would you have machine learning do? And they will tell you, well, in our work we do A to B to C to D, and B to C is kind of painful, like counting nuclei is really painful, so can we have the machine do that for us? And it's kind of like, yeah, but that's boring. So what you get if you put them in a room together and actually get to the point where they communicate with each other effectively, is that not only do you get better solutions, you get better problems. I think that's really the crux of making progress here, besides data, is the culture and the people.

A.I. and Drug Discovery

Eric Topol (11:54):
Well, I'm sure you've assembled that at insitro knowing you, and I mean people tend to forget it's about the people, it's not about the models or even the data when you have all that. Now you've been onto drug discovery paths; there's at least 20 drugs that are AI driven that are in the clinic in phase one or two at some point. Obviously these are not only ones that you've been working on, but do you see this whole field now going into high gear because of this?
Or is it the fact that there's all these AI companies partnering with big pharma? Is it a lot of nice agreements that are drawn up with multimillion dollar milestones, or is this real?

Daphne Koller (12:47):
So there's a number of different layers to your question. First of all, let me start by saying that I find the notion of AI driven drugs to be a bit of a weird concept, because over time most drugs will have some element of AI in them. I mean, even some of the earlier work used data science in many cases. So where do you draw the boundary? I mean, we're not going to be in a world anytime soon where AI starts out with, oh, I need to work on ALS, and at the end there is a clinical trial design ready to be submitted to the FDA without any human intervention in the middle. So, it's always going to be an interplay between a machine and a human, with over time more and more capabilities I think being taken on by the machine, but I think inevitably a partnership for a long time to come.

Daphne Koller (13:41):
But coming to the second part of your question, is this real? Every big pharma has gotten to the point today that they realize they need some of that AI thing that's going around. The level of sophistication of how they incorporate that, and their willingness to make some of the hard decisions of, well, if we're going to be doing this with AI, it means we shouldn't be doing it the old way anymore and we need to make a big dramatic internal shift, that I think depends very much on the specific company. And some companies have more willingness to take those very big steps than others, so will some companies be able to make the adjustment? Probably. Will all of them? Probably not.
I would say, however, that in this new world there is also room for companies to emerge that are, if you will, AI native.

Daphne Koller (14:39):
And we've seen that in every technological revolution, that the native companies that were born in the new age move faster, incorporate the technology much more deeply into every aspect of their work, and they end up being dominant players, if not the dominant player, in that new world. And you could look at the internet revolution and think back: Google did not emerge from the yellow pages, Netflix did not emerge from Blockbuster, Amazon did not emerge from Walmart. So some of those incumbents did make the adjustment and are still around, some did not and are no longer around. And I think the same thing will happen with drug discovery and development, where there will be a new crop of leading companies, maybe together with some of the incumbents that were able to make the adjustment.

Eric Topol (15:36):
Yeah, I think your point there is essential, and another part of this story is that a lot of people don't realize there's so many nodes of ways that AI can facilitate this whole process. I mean, from the elemental data mining that identified Baricitinib for Covid, and now being used even for many other indications, repurposing that, to how to simulate for clinical trials, and everything in between. Now, what seems like, because of your incredible knack and this convergence, I mean your middle name is like convergence really, you are working at the level of, really in my view, this unique aspect of bringing cells and all the other layers of data together to amp things up. Is that a fair assessment of where insitro and your efforts are directed?

Three Buckets

Daphne Koller (16:38):
So first of all, maybe it's useful to kind of create the high level map, and the simplest version I've heard is where you divide the process into three major buckets.
One is what you think of as biology discovery, which is the discovery of new therapeutic hypotheses. Basically, if you modulate this target in this group of humans, you will end up affecting this clinical outcome. That's the first third. The middle third is, okay, well now we need to turn that hypothesis into an actual molecule that does that. So basically generating molecules. And then finally there's the enablement and acceleration of the clinical development process, which is the final third. Most companies in the AI space have really focused in on that middle third because it is well-defined; you know when you've succeeded, if someone gives you a target and what's called a target product profile (TPP), at the end of whatever, two, three years, whether you've been able to create a molecule that achieves the appropriate properties of selectivity and solubility and all those other things. The first third is where a lot of the mistakes currently happen in drug discovery and development. Most drugs that go into the clinic don't fail because we didn't have the right molecule. I mean that happens, but it's not the most common failure mode. The most common failure mode is that the target was just a wrong target for this disease in this patient population.

Daphne Koller (18:09):
So the real focus of us, the core of who we are as a company, is on that early third of let's make sure we're going after the right clinical hypotheses. Now with that, obviously we need to make molecules, and some of those molecules we make in-house, and obviously we use machine learning to do that as well. And then the last third is, we discover that if you have the right therapeutic hypothesis, which includes what is the right patient population, that can also accelerate and enable your clinical trials, so we end up doing some of that as well.
But the core of what we believe is the failure mode of drug discovery, and what it's going to take to move it to the next level, is the articulation of therapeutic hypotheses that actually translate into clinical outcome. And so in order to do that, we've put together, to your point about convergence, two very distinct types of data.

Daphne Koller (19:04):
One is data that we print in our own internal data factory, where we have this incredible set of capabilities that uses stem cells and CRISPR and microscopy and single cell measurements and spatial biology and all that to generate massive amounts of in-house data. And then, because ultimately you care not about curing cells, you care about curing people, you also need to bring in the clinical data. And again, here also we look at multiple high content data modalities, imaging and omics, and of course human genetics, which is one of the few sources of ground truth for causality that is available in medicine, and really bring all those different data modalities across these two different scales together to come up with what we believe are truly high quality therapeutic hypotheses that we then advance into the clinic.

AlphaFold2, the Exemplar

Eric Topol (19:56):
Yeah, no, I think that's an extraordinary approach. It's a bold, ambitious one, but at least it is getting to the root of what is needed. One of the things you mentioned, of course, is the coming up with molecules, and I wanted to get your comments about the AlphaFold2 world and the ability to not just design proteins, now of course proteins that are not extant proteins, but it isn't just proteins, it could be antibodies, it could be peptides and small molecules.
How much does that contribute to your perspective?

Daphne Koller (20:37):
So first of all, let me say that I consider the AlphaFold story, across its incarnations, to be one of the best examples of the hypothesis that we set out trying to prove, which is, if you feed a machine learning model enough data, it will learn to do amazing things. And the space of protein folding is one of those areas where there has been enough data in biology. That is, the sequence to structure mapping is something that over the years, because it's so consistent across different cells, across different species even, we have a lot of data of sequence to structure, which is what enabled AlphaFold to be successful. Now since then, of course, they've taken it to a whole new level. I think what we are currently able to do with protein-based therapeutics is entirely sort of a consequence of that line of development. Whether that same line of development is also going to unlock other therapeutic modalities, such as small molecules, where the amount of data is unfortunately much less abundant and often locked away in the bowels of big pharma companies that are not eager to share.

Daphne Koller (21:57):
I think that question remains. I have not yet seen that same level of performance in de novo design of small molecule therapeutics because of the data availability limitations. Now people have a lot of creative ideas about that. We use DNA encoded libraries as a way of generating data at scale for small molecules. Others have used other approaches, including active learning and pre-training and all sorts of approaches like that. We're still waiting, I think, for a truly convincing demonstration that you can get to that same level of de novo design in small molecules as you can in protein therapeutics. Now as to how that affects us, I'm so excited about this development, because our focus, as I mentioned, is the discovery of novel therapeutic hypotheses.
You then need to turn those therapeutic hypotheses into actual molecules that do the work. We know we're not going to be the expert in every single therapeutic modality, from small molecules to macrocycles, to the proteins, to mRNA, siRNA; there's so many of those that you need to have therapeutic modality experts in each of those modalities, so that as you discover a target that you want to modulate, you can basically go and ask, what is the right partner to help turn this into an actual therapeutic intervention?

Daphne Koller (23:28):
And we've already had some conversations with some modality partners, as we like to call them, that help us take some of our hypotheses and turn them into molecules. They often are very hungry for new targets, because oftentimes it's kind of like, okay, here's the three or four or whatever, five low hanging fruits that our technology uniquely unlocks. But then once you get past those well validated targets it's like, okay, what's next? Am I just going to go read a bunch of papers and hope for the best? And so oftentimes they're looking for new hypotheses and we're looking for partners to make molecules. It's a great partnership.

Can We Slow the Aging Process?

Eric Topol (24:07):
Oh yeah, no question about that. Now, we've seen in recent times some leaps in drugs that were worked on for decades, like the GLP-1s for obesity, which are having effects potentially well beyond obesity, that didn't require any AI, just slogging away at it for decades. And you previously were at Calico, which is trying to deal with aging. Do you think that we're going to see drug interventions that are going to slow the aging process because of this unique time, this exponential point we are in, where computer science and digital biology come together?

Daphne Koller (24:52):
So I think the GLP-1s are an incredible achievement.
And I would point out, I know you said, incorrectly, that it didn't use any AI, but they did actually use an understanding of human genetics. And I think human genetics, and the genotype phenotype statistical associations that it revealed, is in some ways the biological precursor to AI. It is a way of leveraging very large amounts of data, admittedly using simpler statistical tools, but still to discover, in a data-driven way, novel therapeutic hypotheses. So I consider the work that we do to be a progeny of the kind of work that statistical geneticists have done. And of course a lot of heavy lifting needed to be done after that in order to make a drug that actually worked, and kudos to the leaders in that space. In terms of the modulation of aging, I mean aging is a process of decline over time, and the rate of that decline is definitely something that is modifiable.

Daphne Koller (26:07):
And we all know that external factors such as lifestyle, diet, exercise, even exposure to sun or smoking, accelerate the aging process. And you could easily imagine, as we've seen with the GLP-1s, that a therapeutic intervention can change that trajectory. So will we be able, using therapeutic interventions, to increase health span so that we live healthy longer? I think the answer to that is undoubtedly yes. And we've seen that consistently with therapeutic interventions, not even just the GLP-1s, but going backwards, I mean even statins and earlier things. Will we be able to increase the maximum life span so that people habitually live past 120, 150? I don't know. I don't know that anybody knows the answer to that question.
I personally would be quite happy with increasing my health span, so that at the age of 80 I'm still able to actively go hiking and scuba diving, at 90 and at 100, and that would be a pretty good place to start.

Eric Topol (27:25):
Well, I'm with you on that, but I just want to ask, though, because the drugs we have today that are highly effective, I mean statins is a good example, they work at a particular level of the body. They don't have across the board modulation of effect. And I guess what I was asking is, do you foresee we will have some way to do that across all systems? I mean, that is getting to, now that we have so many different ways to intervene on the process, is there a way that you envision in the future that we'll be able to, here, I'm not talking about expanding lifespan, I'm talking about promoting health, whether it's the immune system or whether it's through mitochondria and mTOR, caloric restriction, I mean all these different things. You think that's conceivable? Or is that just, I mean companies like Calico and others have been chasing this. What do you think?

Daphne Koller (28:30):
Again, I think it's a thing that is hard to predict. I mean, we know that different organ systems age at different rates, and is there even a single biological age in a single individual? It's been well established that you can test brain age versus muscle health versus cardiovascular, and they can be quite different in the same individual. So is there a single hub that governs all forms of aging? I don't know if that's true. I think it's oftentimes different. We know protein folding has an effect, you know DNA damage has an effect. That's why our skin ages, because it's exposed to sun. Is there going to be a single switch that reverts it all back? Certainly some companies are pursuing that single bullet approach.
I personally would probably say that, based on the biology that I've seen, there's at least as much potential in trying to find ways to slow the decline in a way that's specific to, say, as we discussed, the immune system, or correcting protein misfolding dysfunction, or things like that. And I'm not dismissing that there is a single magic switch, but let's just say I think we should be exploring multiple alternatives.

Eric Topol (29:58):
Yeah, no, I like your reasoning. I think it's actually like everything else you said here. It makes a lot of sense. The logic is hard to argue with. Well, I think what you're doing there at insitro is remarkable, and it seems to be quite distinct from other strategies, and that's not at all surprising knowing your background and your aspiration.

Daphne Koller (30:27):
Never like to follow the crowd. It's boring.

Eric Topol (30:30):
Right, and I do know you left an aging directed company effort at Calico to do what you're doing. So that must have been an opening for you that you saw was much more diverse perhaps, or maybe I'm mistaken and Calico is not really age specific in its goals.

Daphne Koller (30:49):
So what inspired me to go found insitro was the realization that we are making medicines today in a way that is not that different from the way in which we were making medicines 20 or 30 years ago, in terms of the process by which we go from a, here's what I want to work on, to, here's a drug. It is very much an artisanal one-off; each one of them is a snowflake. There is very little commonality and sharing of insights and infrastructure across those efforts, except in relatively limited tool-based ways. And I wanted to change that. I wanted to take the tools of engineering and data and machine learning and build a very different approach of going from a problem definition to a therapeutic intervention.
And it didn't make sense to build that within a company that's focused on any single biology, not just aging, because it is such a broad-based foundation.

Daphne Koller (31:58):
And I will tell you that I think we are on the path to building the thing that I set out to build. And as one example of that, I will use the work that we've recently done in metabolic disease, where, based on the foundations that we've built using both the clinical machine learning work and the cellular machine learning work, we were able to go from a problem articulation of "this is the indication that we want to work on" to a proof of concept in a translatable animal model in one year. That is pretty unusual. Admittedly, this is with an siRNA tool compound. The nice thing about things that are liver-directed is that it's not that difficult of a path to go from an siRNA tool compound to an actual siRNA drug. And so hopefully that's a fairly linear journey from there even, which is great.

Daphne Koller (32:51):
But the fact that we were able to go from problem articulation to a proof of concept in a translatable animal model in one year, that is unusual. And we're starting to see that now across our other therapeutic areas. It takes a long time to build a platform because you're basically building a foundation. It's like, okay, where's the fruit of all of that? I mean, you're building and building and building and nothing comes out for a while because you're building so much of the infrastructure. But once you've built it, you turn the crank and stuff starts to come out; you turn the crank again, and it works faster and better than the previous time. And so the essence of what we've built, and what has turned into the tagline for the company, is what we call "pipeline through platform," which is: we're building a pipeline of therapeutic interventions that comes off of a platform.
And that's rare in biopharma; the only platform companies that have really emerged are by and large therapeutic modality platforms, things like Moderna and Alnylam, which have gotten really good at a particular modality, and that's awesome. We're building a discovery platform, and that is a fairly unusual thing.

Eric Topol (34:02):
Right. Well, I have no doubt you'll be discovering a lot of important things. That one sounds like it could have a big impact on NASH.

Daphne Koller (34:14):
Yeah, we hope so.

Eric Topol (34:14):
A big unmet need that's not going to be fixed by what we have today. So Daphne, it's really a joy to talk with you, and palpable enthusiasm for where the field is going as one of its real leaders, and we'll be cheering for you. I hope we'll reconnect in the times ahead to get another progress report, because you're definitely rocking it there and you've got a lot of great ideas for how to change the life science medical world of the future.

Daphne Koller (34:48):
Thank you so much. It's a pleasure to meet you, and it's a long and difficult journey, but I think we're on the right path, so looking forward to seeing all that pan out.

Eric Topol (34:58):
You made a compelling case in a short visit, so thank you.

Daphne Koller (35:02):
Thank you so much.

Thanks for your subscription and listening/reading these posts. All content on Ground Truths—newsletter analyses and podcasts—is free. Voluntary paid subscriptions all go to support Scripps Research. Get full access to Ground Truths at erictopol.substack.com/subscribe

כל תכני עושים היסטוריה
Artificial Intelligence: Is the Buzz Justified? [Breaking Through]

כל תכני עושים היסטוריה

Play Episode Listen Later Mar 3, 2024 32:45


We live in amazing times: this is the first time in history that, instead of humans speaking to a computer in a machine language, the computer speaks to humans in human language. The result: a huge range of possibilities we never had before.
This time, in a new format, Dr. Yuval Dror talks with Rami Segal, a product group manager at Salesforce, about the challenges and advantages of artificial intelligence.
Is integrating artificial intelligence an evolutionary process that keeps advancing, or a passing hype? How does the arrival of artificial intelligence affect the three pillars of your company - marketing, sales, and service? And also: how it all converges into one small-but-big word: data.
Rami recommends: reading "The 5 Second Rule" by Mel Robbins, following Andrew Ng and his site deeplearning.ai, and the robotics company Boston Dynamics.

WSJ Tech News Briefing
How Artificial Intelligence Is Changing Work

WSJ Tech News Briefing

Play Episode Listen Later Feb 21, 2024 12:41


Artificial intelligence is already changing work across industries. Companies are looking to AI tools to help them operate more efficiently and, in some cases, to increase automation in their workforces. But as workers upskill and reskill, how can they—and the companies that employ them—keep up with the pace of innovation? Andrew Ng, managing general partner of AI Fund, spoke with WSJ global tech editor Jason Dean at the WSJ CIO Network Summit about how AI will affect the labor force, and which companies are positioning themselves the best in the AI market. Learn more about your ad choices. Visit megaphone.fm/adchoices

In AI We Trust?
Andrew Ng: Should we fear an AI-driven existential crisis?

In AI We Trust?

Play Episode Listen Later Jan 24, 2024 43:52


Join us this week with AI pioneer Andrew Ng (founder of DeepLearning.AI and Landing AI, co-founder of Coursera, general partner at AI Fund, and adjunct professor at Stanford University) as we discuss the likelihood of AI's existential threat, the merits of regulation, the transformative power of generative AI, and the need for greater AI literacy.
Resources mentioned in this episode:
Written Statement of Andrew Ng Before the U.S. Senate Insight Forum

Bio Eats World
Past, Present, and Future of AI with Vijay Pande

Bio Eats World

Play Episode Listen Later Jan 9, 2024 39:18


Bio Eats World is now Raising Health!
Vijay Pande, founding partner of Bio + Health, is joined by Daphne Koller, Andrew Ng, Aviv Regev, and Jakob Uszkoreit.
Vijay leads us on a reflective journey through the monumental achievements in AI from the 1980s to today, with a focus on the progress in healthcare and life sciences. This episode is drawn from the episodes we recorded in 2023 with Daphne Koller, Andrew Ng, Aviv Regev, and Jakob Uszkoreit, which are linked below. This blend of expert commentary and firsthand insights explores the burgeoning impact of AI on healthcare innovation and how it's reshaping the future of health.
AI and Actionable Insights for Drug Development with Daphne Koller
Navigating the Future of AI with Andrew Ng
When Quantity Becomes Quality with Aviv Regev
Using AI to Take Bio Farther with Jakob Uszkoreit

Azeem Azhar's Exponential View
AI Is Transforming Businesses (with Andrew Ng)

Azeem Azhar's Exponential View

Play Episode Listen Later Dec 20, 2023 28:20


Organizations across the world have been grappling with the opportunities and challenges of generative AI. In this episode, Azeem Azhar joins AI pioneer and entrepreneur Andrew Ng to debate whether we're at an inflection point in the AI revolution.

The Stephen Wolfram Podcast
Science & Technology Q&A for Kids (and others) [April 14, 2023]

The Stephen Wolfram Podcast

Play Episode Listen Later Dec 15, 2023 82:15


Stephen Wolfram answers general questions from his viewers about science and technology as part of an unscripted livestream series, also available on YouTube here: https://wolfr.am/youtube-sw-qa
Questions include:
I've been hearing of AI and LLMs in the context of an "arms race" between countries. What do LLMs look like scaled up in that manner (vs. a global LLM)?
What about model interoperability? Where are we at on the research for that? Do we need to develop new and more sophisticated mathematics to begin to understand these black box models? Do you think in time we will be able to do causal inference with them?
Do you agree with Yann LeCun and Andrew Ng's recent affirmation that AGI is still decades away and cannot be achieved with the current transformer architectures, regardless of parameter and token count?
Where is the line then between a program with an inner experience and one without?
So with unlimited intelligence, maybe everything can be predicted with accuracy.
When will an AI write a work worth feeding into another AI?

Crucible Moments
Nvidia ft. Jensen Huang - An overnight success story 30 years in the making

Crucible Moments

Play Episode Listen Later Nov 30, 2023 38:53


CEO Jensen Huang tells the legendary story of Nvidia, from the company's early days pioneering 3D graphics cards for a niche PC gaming market to powering the AI revolution as the sixth most valuable company in the world. Nvidia faced multiple near-death experiences along the way, and their so-called "diving catches," as Jensen calls them, were some of the most dramatic business stories of the modern era.
Host: Roelof Botha, Sequoia Capital
Featuring: Jensen Huang, Chris Malachowsky, Andrew Ng, Mark A. Stevens, Alfred Lin
Transcript: https://www.sequoiacap.com/podcast/crucible-moments-nvidia/
Learn more about your ad choices. Visit podcastchoices.com/adchoices

In Depth
The Bard blueprint | Creating value, shipping fast, and advancing AI ethically | Jack Krawczyk (Google)

In Depth

Play Episode Listen Later Nov 30, 2023 83:47


Jack Krawczyk is a Senior Director of Product at Google, building Bard. Bard is Google's collaborative, conversational, and experimental AI tool that's bridging the gap between humans and bots, while addressing ethical considerations around AI. After joining the project in 2020, Jack helped ship Bard in less than four years. Bard sources information directly from the web, and now enables users to inquire about and summarize YouTube videos.
—
In today's episode, we discuss:
Key lessons from Bard's development process
Ethics in AI
How Bard shipped fast
What separates Bard from competitors
The future of LLM, Generative AI, and AGI
Advice for aspiring AI developers
—
Referenced:
Bard: https://bard.google.com/
ChatGPT: https://chat.openai.com/
Duet AI: https://cloud.google.com/duet-ai
Free courses on machine learning by Andrew Ng: https://www.andrewng.org/courses/
Google Assistant: https://assistant.google.com/
Introducing Google Assistant to Bard: https://blog.google/products/assistant/google-assistant-bard-generative-ai/
Large Language Model (LLM): https://en.wikipedia.org/wiki/Large_language_model
Meena: https://blog.research.google/2020/01/towards-conversational-agent-that-can.html
Sissie Hsiao (GM at Bard): https://www.linkedin.com/in/sissie-hsiao-b24243/
Steve Stoute: https://www.linkedin.com/in/stevestoute/
UnitedMasters: https://unitedmasters.com/
—
Where to find Jack Krawczyk:
Twitter/X: https://twitter.com/JackK
LinkedIn: https://www.linkedin.com/in/jack--k
—
Where to find Brett Berson:
Twitter/X: https://twitter.com/brettberson
LinkedIn: https://www.linkedin.com/in/brett-berson-9986094/
—
Where to find First Round Capital:
Website: https://firstround.com/
First Round Review: https://review.firstround.com/
Twitter: https://twitter.com/firstround
Youtube: https://www.youtube.com/@FirstRoundCapital
This podcast on all platforms: https://review.firstround.com/podcast
—
Timestamps:
(00:00) Introduction
(02:17) Bard's origin story
(03:54) Deciding on the application of Bard
(05:59) The ethical considerations around building Bard
(10:19) Why Bard launched to the public so early
(13:30) Risk-taking at big companies versus smaller ones
(16:20) Bard's early user research
(21:21) Bard versus ChatGPT
(25:01) The cultural and product principles behind Bard
(30:56) Insight into Bard's impressive development speed
(35:17) Deciding when to ship Bard
(41:41) Why Bard is different from other products Jack has built
(46:30) Evaluating Bard's original spec
(48:02) Insight into Bard's product roadmap
(56:00) The toughest challenges Bard has faced
(57:50) What's special about team-building at Bard
(62:54) Addressing Bard's negative press
(67:49) Advice for aspiring LLM companies
(69:15) Advice for non-LLM companies
(71:05) The biggest barriers to advancing AI
(75:45) How product people can use or build with AI
(77:24) How AI is changing product leadership
(79:20) People who had an outsized impact on Jack

Let's Know Things
Regulating AI

Let's Know Things

Play Episode Listen Later Nov 7, 2023 20:37


This week we talk about regulatory capture, OpenAI, and Biden's executive order. We also discuss the UK's AI safety summit, open source AI models, and flogging fear.

Recommended Book: The Resisters by Gish Jen

Transcript

Regulatory capture refers to the corruption of a regulatory body by entities to which the regulations that body creates and enforces apply.

So an organization that wants to see less funding for public schools and more for private and home schooling options getting one of their people into a position at the Department of Education, or someone from Goldman Sachs or another, similar financial institution getting shoehorned into a position at the Federal Reserve, could—through some lenses at least, and depending on how many connections those people in those positions have to those other, affiliated, ideological and commercial institutions—be construed as engaging in regulatory capture, because they're now able to control the levers of regulation that apply to their own business or industry, or their peers, the folks they previously worked with and people to whom they maybe owe favors, or vice versa, and that could lead to regulations that are more favorable to them and their preferred causes, and those of their fellow travelers.

This is in contrast to regulatory bodies that apply limits to such businesses and organizations, figuring out where they might overstep or lock in their own power at the expense of the industry in which they operate, and slowly, over time, plugging loopholes, finding instances of not-quite-illegal misdeeds that nonetheless lead to negative outcomes, and generally being the entity in charge in spaces that might otherwise be dominated by just one or two businesses that can kill off all their competition and make things worse for consumers and workers.

Often, rather than regulatory capture being a matter of one person from a group insinuating themselves into the relevant regulatory body, the regulatory body, itself, will ask
representatives from the industry they regulate to help them make law, because, ostensibly at least, those regulatees should know the business better than anyone else, and in helping to create their own constraints—again, ostensibly—they should be more willing to play by the rules, because they helped develop the rules by which they're meant to abide, and probably helped develop rules that they can live with and thrive under; because most regulators aren't trying to kill ambition or innovation or profit, they're just trying to prevent abuses and monopolistic hoarding.

This sort of capture has taken many shapes over the years, and occurred at many scales.

In the late-19th century, for instance, railroad tycoons petitioned the US government for regulation to help them bypass a clutter of state-level regulations that were making it difficult and expensive for them to do business, and in doing so—in asking to be regulated and helping the federal government develop the applicable regulations—they were able to make their own lives easier, while also creating what was effectively a cartel for themselves with the blessing of the government that regulated their power; the industry as it existed when those regulations were signed into law was basically locked into place, in such a way that no new competitors could practically arise.

Similar efforts have been launched, at times quite successfully, by entities in the energy space, across various aspects of the financial world, and in just about every other industry you can imagine, from motorcyclists' protective clothing to cheerleading competitions to aviation and its many facets—all have been to some degree and at some point allegedly regulatorily captured, so that those being regulated to some degree control the regulations under which they operate, which has as a consequence at times allowed them to create constraints that benefit them and entrench their own power, rather than opening their industry up and increasing
competition, safety, and the treatment and benefits afforded to customers and workers, as is generally the intended outcome of these regulations.

What I'd like to talk about today is the burgeoning world of artificial intelligence and why some players in this space are being accused of attempting the time-tested tactic of regulatory capture at a pivotal moment of AI development and deployment.

At the tail-end of October 2023, US President Biden announced that he was signing a fairly expansive executive order on AI: the first of its kind, and reportedly the first step toward still-greater and more concrete regulation.

A poll conducted by the AI Policy Institute suggests that Americans are generally in favor of this sort of regulatory move, weighing in at 68% in favor of the initiative, which is a really solid in-favor number, especially at a moment as politically divided as this one, and most of the companies working in this space—at least at a large enough scale to show up on the map for AI at this point—seem to be in favor of this executive order as well, with some caveats that I'll get to in a bit.

That indicates the government probably got things pretty close to where they need to be, in terms of folks actually adhering to these rules, though it's important to note that part of why there's such broad acceptance of the tenets of this order is that there aren't any real teeth to these rules: it's largely voluntary stuff, and mostly only applies to the anticipated next generation of AI—the current generation isn't powerful enough to fall under its auspices, in most cases, so AI companies don't need to do much of anything yet to adhere to these standards, and when they eventually do need to do something to remain in accordance with them, it'll mostly be providing reports to government employees so they can keep tabs on developments, including those happening behind closed doors, in this space.

Now that is not nothing: at the moment, this industry is essentially a black
box as far as would-be regulators are concerned, so simply providing a process by which companies working on advanced AI and AI applications can keep the government informed on their efforts is a big step that raises visibility from 0 to some meaningful level.

It also provides mechanisms through which such entities can get funding from the government, and pathways through which international AI experts can come to the United States with less friction than would be the case for folks without that expertise.

So AI industry entities generally like all this because it's easy for them to work with, is flexible enough not to punish them if they fail in some regard, but it also provides them with more resources, both monetary and human, and sets the US up, in many ways, to maintain its current purported AI dominance well into the future, despite essentially everyone—especially but not exclusively China—investing a whole lot to catch up and surpass the US in the coming years.

Another response to this order, though, and the regulatory infrastructure it creates, was voiced by the founder of Google Brain, Andrew Ng, who has been working on AI systems and applications for a long time, and who basically says that some of the biggest players in AI, today, are playing up the idea that artificial intelligence systems might be dangerous, even to the point of being world-ending, because they hope to create exactly this kind of regulatory framework at this exact moment, because right now they are the kings of the AI ecosystem, and they're hoping to lock that influence in, denying easy access to any future competitors.

This theory is predicated on that concept I mentioned in the intro, regulatory capture, and history is rich with examples of folks in positions of power in various spaces telling their governments to put their industry on lockdown, and making the case for why this is necessary, because they know, in doing so, their position at the top will probably be locked in, because it
will become more difficult and expensive, and thus out of reach, for any newer, smaller, not already influential and powerful competitor to then challenge them moving forward.

One way this might manifest in the AI space, according to Ng, is through the licensing of powerful AI models—essentially saying if you want to use the more powerful AI systems for your product or research, you need to register with the government, and you need to buy access, basically, from one of these government-sanctioned providers. Only then will we allow you to play in this potentially dangerous space with these highest-end AI models.

This, in turn, would substantially reduce innovation, as other entities wouldn't be able to legally evolve their AI in different directions, at least not at a high level, and it would make today's behemoths—the OpenAIs and Metas of the world—all but invulnerable to future challenges, because their models would be the ones made available to everyone else to use; no one else could compete, not practically, at least.

This would be not great for smaller, upstart AI companies, but it would be especially detrimental to open source large language models—versions of the most popular, LLM-based AI systems that are open to the public to mess around with and use however they see fit, rather than being controlled and sold by a single company.

These models would be unlikely to have the resources or governing body necessary to step into the position of regulator-approved moderator of potentially dangerous AI systems, and the open source credo doesn't really play well with that kind of setup to begin with, as the idea is that all the code is open and available to take and use and change, so locking it down at all would violate those principles; and this sort of regulatory approach would be all about the lockdown, on fears of bad actors getting their hands on high-end AI systems—fears that have been flogged by entities like OpenAI.

So that collection of fears is potentially
fueling the relatively fast-moving regulatory developments related to AI in the US right now; regulation, by the way, that's typically slower-moving in the US, which is part of why this is so notable.

This is not a US-exclusive concern, though, nor is this executive order the only big, new regulatory effort in this space.

At a summit in the UK just days after the US executive order was announced, AI companies from around the world, and those who govern such entities, met up to discuss the potential national security risks inherent in artificial intelligence tools, and to sign a legally non-binding agreement to let their governments test their newest, most powerful models for risks before they're released to the public.

The US participated in this summit as well, and a lot of these new rules overlap with each other, as the executive order shares a lot of tenets with the agreement signed at that meeting in the UK—though the EO was US-specific and included non-security elements as well, and that will be the case for laws and orders passed in the many different countries to which these sorts of global concerns apply, each with their own approach to implementing those more broadly agreed-upon specifics at the national level.

This summit announced the creation of an international panel of experts who will publish an annual report on the state of the art within the AI space, especially as it applies to national security risks, like misinformation and cybersecurity issues, and when questioned about whether the UK should take things a step further, locking some of these ideas and rules into place and making them legal requirements rather than things corporations agree to do but aren't punished for not doing, the Prime Minister, Rishi Sunak, said, in essence, that this sort of thing takes time; and that's a sentiment that's been echoed by many other lawmakers and by people within this industry as well.

We know there need to be stricter and more enforceable regulations in this
space, but because of where we are with this collection of technologies and the culture and rules and applications of them right now, we don't really know what laws would make the most sense, in other words.

No nation wants to tie its own hands in developing increasingly useful and powerful AI tools, and moving too fast on the concrete versions of these sorts of agreements could end up doing exactly that; there's no way to know what the best rules and regulations will be, yet, because we're standing at the precipice of what looks like a long journey toward a bunch of new discoveries and applications.

That's why the US executive order is set up the way it is, too: Biden and his advisors don't want to slow down the development in this space within the US, they want to amplify it, while also providing some foundational structure for whatever they decide needs to be built next—but those next-step decisions will be shaped by how these technologies and industries evolve over the next few years.

The US and other countries are also setting up agencies and institutes and all sorts of safety precautions related to this space, but most of them lack substance at this point, and as with the aforementioned regulations, these agency setups are primarily just first-draft guide rails, if that, at this point.

Notably, the EU seems to be orienting around somewhat sterner regulations, but they haven't been able to agree on anything concrete quite yet, so despite typically taking the lead on this sort of thing, the US is a little bit ahead of the EU in terms of AI regulation right now—though it's likely that when the EU does finally put something into place, it'll be harder-core than what the US has, currently.

A few analysts in this space have argued that these new regulations—lightweight as they are, both on the global and US level—by definition will hobble innovation because regulations tend to do that: they're opinionated about what's important and what's not, and that then shapes the
direction makers in the regulated space will tend to go.

There's also a chance, as I mentioned before, that this set of regulations, laid out in this way, will lock the power of incumbent AI companies into place, protecting them from future competitors, and in doing so also killing off a lot of the forces of innovation that would otherwise lead to unpredictable sorts of outcomes.

One big question, then, is how light a touch these initial regulations will actually end up having, how the AI and adjacent industries will reshape themselves to account for these and predicted future regulations, and to what degree open source alternatives—and other third-party alternatives, beyond the current incumbents—will be able to step in and take market share, nudging things in different directions, and potentially either then being incorporated into and shaping those future, more toothy regulations, or halting the deployment of those regulations by showing that the current direction of regulatory development no longer makes sense.

We'll also see how burdensome the testing and other security-related requirements in these initial rules end up being, as there's a chance more attention and resources will shift toward lighter-weight, less technically powerful, but more useful and deployable versions of these current AI tools, which is already something that many entities are experimenting with, because that comes with other benefits, like being able to run AI on devices like a smartphone, without needing to connect, through the internet, to a huge server somewhere.

Refocusing on smaller models could also allow some developers and companies to move a lot faster than their more powerful but plodding and regulatorily hobbled kin, rewiring the industry in their favor, rather than toward those who are currently expected to dominate this space for the foreseeable future.

Show Notes

On the EO:
https://www.aijobstracker.com/ai-executive-order

Reactions to EO:
https://archive.ph/RdpLh
https://theaipi.org/poll-biden-ai-executive-order-10-30/
https://www.nytimes.com/2023/10/30/us/politics/biden-ai-regulation.html?ref=readtangle.com
https://qz.com/does-anyone-not-like-bidens-new-guidelines-on-ai-1850974346
https://archive.ph/wwRXj
https://www.afr.com/technology/google-brain-founder-says-big-tech-is-lying-about-ai-human-extinction-danger-20231027-p5efnz
https://twitter.com/ylecun/status/1718670073391378694?utm_source=substack&utm_medium=email
https://stratechery.com/2023/attenuating-innovation-ai/

First take on EO
What EO means for openness in AI
Biden's regulation plans
https://www.reuters.com/technology/eu-lawmakers-face-struggle-reach-agreement-ai-rules-sources-2023-10-23/
https://archive.ph/IwLZu
https://techcrunch.com/2023/11/01/politicians-commit-to-collaborate-to-tackle-ai-safety-us-launches-safety-institute/
https://indianexpress.com/article/explained/explained-sci-tech/on-ai-regulation-the-us-steals-a-march-over-europe-amid-the-uks-showpiece-summit-9015032/

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit letsknowthings.substack.com/subscribe

Geek News Central
Big Tech & AI: Power Play or Genuine Concern? #1702

Geek News Central

Play Episode Listen Later Oct 30, 2023 43:40 Transcription Available


Todd breaks down four AI articles, leading with an article by Andrew Ng, the co-founder of Google Brain, lifting the veil on the strategies employed by Big Tech giants in the realm of artificial intelligence. He posits that some of these companies might be inflating the risks associated with AI. But what's the endgame? According …

Bio Eats World
Navigating the Future of AI with Andrew Ng

Bio Eats World

Play Episode Listen Later Sep 19, 2023 31:29


Andrew Ng, PhD, a distinguished authority in the field of AI, is known for founding DeepLearning.AI and multiple other ventures. He also co-founded and led Google Brain and serves as an Adjunct Professor in Stanford University's Computer Science Department. In this episode, he is joined by Vijay Pande, founding partner of a16z Bio + Health.
Andrew has thought deeply about the implications of integrating AI into many areas of our lives, going so far as to put out a public social media call for people who believe AI is dangerous to speak with him. He and Vijay discussed this, as well as how AI could become foundational to many industries — and what needs to happen to make that future a reality.

Honestly with Bari Weiss
AI With Sam Altman: The End of The World? Or The Dawn of a New One?

Honestly with Bari Weiss

Play Episode Listen Later Apr 27, 2023 69:55


Just six months ago, few outside of Silicon Valley had heard of OpenAI, the company that makes the artificial intelligence chatbot ChatGPT. Now, this application is used daily by over 100 million users, and some of those people use it more often than Google. Within just months of its release, it has become the fastest-growing app in history. ChatGPT can write essays and code. It can ace the bar exam, write poems and song lyrics, and summarize emails. It can give advice, scour the internet for information, and diagnose an illness from a set of blood results, all in a matter of seconds. And all of the responses it generates are eerily similar to those of an actual human being.

For many people, it feels like we're on the brink of something world-changing: that the technology that powers ChatGPT, and the emergent AI revolution more broadly, will be the most critical and rapid societal transformation in history to date. If that sounds like hyperbole, don't take it from me. Google's CEO Sundar Pichai said AI's impact will be more profound than the discovery of fire. Computer scientist and Coursera co-founder Andrew Ng said AI is the new electricity. Some say it's the new printing press; others say it's more like the invention of the wheel, or the airplane. Many predict the AI revolution will make the internet seem like a small step. And just last month, The Atlantic ran a story comparing AI to nuclear weapons.

But there's a flip side to all of this optimism, and it's a dark one. Many smart people believe that AI could make human beings obsolete. Thousands of brilliant technologists, people like Elon Musk and Steve Wozniak, are so concerned about this software that last month they called for an immediate pause on training any AI systems more powerful than the current version of ChatGPT. One of the pioneers of AI, Eliezer Yudkowsky, claims that if AI continues on its current trajectory, it will destroy life on Earth as we know it. He recently wrote, "If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter."

Which is it? Is AI the end of the world, or the dawn of a new one? To answer that question for us today: Sam Altman. Sam is the co-founder and CEO of OpenAI, the company that makes ChatGPT, which makes him arguably one of the most powerful people in Silicon Valley and, if you believe the hype about AI, the world. I ask him: Is the technology that powers ChatGPT going to fundamentally transform life on Earth as we know it? In what ways? How will AI affect our basic humanity, our jobs, our understanding of intelligence, our relationships? And are the people in charge of this powerful technology, people like himself, ready for the responsibility?

Learn more about your ad choices. Visit megaphone.fm/adchoices

Moonshots with Peter Diamandis
EP #39 Should We Be Fearful of Artificial Intelligence? The AI Panel w/ Emad Mostaque, Alexandr Wang, and Andrew Ng

Moonshots with Peter Diamandis

Play Episode Listen Later Apr 20, 2023 49:14


In this Ask Me Anything session from this year's Abundance360 summit, Andrew, Emad, Alexandr, and Peter discuss how the world will change after the AI explosion, including how to reinvent your business, your skills, and more.

You will learn about:

  * 03:39 | How Do We Educate On New Technologies In Our Changing World?
  * 15:22 | Is There Any Industry That AI Will Never Disrupt?
  * 28:27 | Will The First Trillionaire Be Born From The Power Of AI?

Emad Mostaque is the Founder and CEO of Stability AI, the company behind Stable Diffusion. Alexandr Wang, the world's youngest self-made billionaire at 24, is the Founder and CEO of Scale AI. Andrew Ng is the Founder of DeepLearning.AI and the Founder & CEO of Landing AI.

  * Try out Stable Diffusion
  * Visit Scale AI
  * Learn AI with DeepLearning.AI

_____________

I only endorse products and services I personally use. To see what they are, please support this podcast by checking out our sponsors:

  * Use my code MOONSHOTS for 25% off your first month's supply of Seed's DS-01® Daily Synbiotic: seed.com/moonshots
  * Levels: Real-time feedback on how diet impacts your health. levels.link/peter

_____________

I send weekly emails with the latest insights and trends on today's and tomorrow's exponential technologies. Stay ahead of the curve and sign up now: Tech Blog

Join me on a 5-Star Platinum Longevity Trip at Abundance Platinum

_____________

Connect With Peter: Twitter | Instagram | Youtube | Moonshots and Mindsets

Learn more about your ad choices. Visit megaphone.fm/adchoices

TED Talks Daily
How AI could empower any business | Andrew Ng

TED Talks Daily

Play Episode Listen Later Sep 27, 2022 11:13


Expensive to build and often requiring highly skilled engineers to maintain, artificial intelligence systems generally only pay off for large tech companies with vast amounts of data. But what if your local pizza shop could use AI to predict which flavor would sell best each day of the week? Andrew Ng shares a vision for democratizing access to AI, empowering any business to make decisions that will increase its profit and productivity. Learn how we could build a richer society, all with just a few self-provided data points.