Podcasts about Firebase

Cloud computing and development platform by Google

  • Podcasts: 331
  • Episodes: 658
  • Avg duration: 47m
  • Weekly episodes: 1
  • Latest episode: May 20, 2025

POPULARITY

Popularity trend by year: 2017-2024 (chart)



Latest podcast episodes about Firebase

Player: Engage
How AI Analytics Can Boost Your Player Retention with Josh Plotnek

May 20, 2025 · 49:12


Episode Summary: In this episode of Player Driven, host Greg engages in an insightful conversation with Josh Plotnek, Head of Content at Keewano, diving deep into the significance of actionable insights derived from game data analytics. The discussion highlights how understanding player behavior through data can drastically improve player experience and game performance. Josh shares practical tips for studios of all sizes, emphasizing how AI-driven analytics can help uncover hidden issues and transform raw data into meaningful actions.

Guest: Josh Plotnek, Head of Content at Keewano. Expertise: data analytics, content strategy, game development insights.

About Keewano: Keewano is an analytics platform that leverages advanced AI to provide actionable insights into player behavior, enabling gaming studios of any size to enhance player experience, retention, and engagement.

Key Takeaways:
  • Data alone isn't enough (07:35): collecting data is the starting point; the real value comes from understanding the "why" behind player behavior to make impactful decisions.
  • Recognizing frustration vs. engagement (20:08): player frustration can be either positive (engaging) or negative (leading to churn). Analyzing "recovery behaviors" helps studios differentiate and respond effectively.
  • Start small, then scale your analytics (14:39): smaller studios can use accessible tools like Unity Analytics and Firebase, gradually scaling to more sophisticated AI-driven analytics solutions as they grow (see the sketch after these notes).
  • Leveraging AI to uncover hidden issues (11:27): AI-powered analytics can identify complex problems within games, such as pinpointing an item missed in earlier levels that causes significant churn later on.
  • The future of analytics: conversational and accessible (49:34): complex insights become conversational, so anyone on the team can ask direct questions and receive clear, actionable answers.

Resources mentioned: Keewano Blog, Unity Analytics, Firebase.

Tune into the full episode to discover more about turning your game data into powerful insights and actionable strategies to enhance player satisfaction and loyalty.
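To make the "start small" takeaway concrete, here is a minimal, hypothetical sketch of logging a custom gameplay event with the Firebase Analytics web SDK. The episode does not show code; the config values, event name, and parameters below are placeholders chosen only for illustration.

```typescript
// Minimal sketch: log a custom gameplay event with the Firebase JS SDK (browser environments).
// All config values and event/parameter names are placeholders.
import { initializeApp } from "firebase/app";
import { getAnalytics, logEvent } from "firebase/analytics";

const app = initializeApp({
  apiKey: "YOUR_API_KEY",
  projectId: "your-project-id",
  appId: "YOUR_APP_ID",
  measurementId: "G-XXXXXXXXXX",
});

const analytics = getAnalytics(app);

// Record that a player failed a level, so churn points can be spotted later in analysis.
logEvent(analytics, "level_failed", {
  level_name: "forest_03",
  attempt: 4,
  time_spent_seconds: 182,
});
```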

devtools.fm
Stepan Parunashvili - InstantDB

May 19, 2025 · 52:03


This week we're joined by Stepan Parunashvili, co-creator of InstantDB, a new database designed to make it easier to build local-first apps. Instant is a replacement for Firebase that aims to be a more modern, more flexible, and more powerful database for the modern web. Join us as we dive into the details of InstantDB, the challenges of building a new database, and the future of local-first development.

This episode is sponsored by WorkOS (https://workos.com)

Links:
  • https://www.linkedin.com/in/stepan-parunashvili-65698932/
  • https://www.instantdb.com/
  • https://github.com/instantdb/instant
  • https://github.com/stopachka

Entre Chaves
{Noticias ++} Gemini 2.5 e Firebase Studio: velocidade, custo e testes

May 13, 2025 · 47:01


Are Gemini 2.5 and Firebase, both Google products, already reliable enough for complex projects, or is it still worth waiting? In this episode, our hosts analyze what's new in Gemini 2.5, especially for code development compared with other LLMs on the market, and look at the features of Firebase Studio for publishing AI-powered applications. Hit play and listen now!

Important links: open positions, newsletter. Questions? Send them to us on LinkedIn. Contact: entrechaves@dtidigital.com.br. Entre Chaves is an initiative of dti digital, a WPP company.

two & a half gamers
Why Casual Games Are Rethinking Interstitial Ads? Ad Monetization is Getting Smarter (Finally)

Apr 26, 2025 · 42:31


In this episode, we sit down with the team behind Airflux, the AI-powered ad optimizer from Airbridge, and unpack how it's quietly changing the rules of mobile game monetization.

The Measure Pod
#119 Google Cloud Next 25 roundup

Apr 18, 2025 · 60:23


Full show notes, transcript and AI chatbot: https://bit.ly/3Gg5HHZ
Watch on YouTube: https://youtu.be/dcZhmVY_Bl0

Chapters:
00:00:00 - New co-host introduction
00:04:01 - Google Next 25 conference highlights
00:08:10 - CapEx spend on cloud and AI
00:12:37 - Cross-cloud collaboration and flexibility
00:15:40 - Gemini's integration in Firebase
00:21:01 - Autonomous data AI platform
00:25:10 - Data tools and data quality
00:27:01 - Data quality challenges and solutions
00:30:15 - Building with good foundations
00:36:08 - Unstructured data in AI platforms
00:40:10 - BigQuery as enterprise advantage
00:42:56 - BigQuery vector search capabilities
00:48:11 - Multi-agent systems and autonomy
00:51:20 - Importance of robust data
00:54:06 - BigQuery and unstructured data
00:58:05 - Reducing repetitive work through automation

Episode Summary: In this episode of The Measure Pod, Dara and Matthew take the reins and dive into the biggest takeaways from Google Cloud Next 2025. From shiny new features to subtle shifts in direction, they cover the bits that matter: what's exciting, what's useful, and what might actually change the way we work. Plenty of ground covered, plenty of thoughts shared, and just the beginning of what's to come.

About The Measure Pod: The Measure Pod is your go-to fortnightly podcast hosted by seasoned analytics pros. Join Dara Fitzgerald (Co-Founder at Measurelab) and Matthew Hooson (Head of Engineering at Measurelab) as they dive into the world of data, analytics and measurement, with a side of fun.

If you liked this episode, don't forget to subscribe to The Measure Pod on your favourite podcast platform and leave us a review. Let's make sense of the analytics industry together! The post #119 Google Cloud Next 25 roundup appeared first on Measurelab.

Radio Raccoons
S07E08 - Over post-quantum encryption, GPT-chronologie en DolphinGemma

Apr 17, 2025 · 91:21


The tech world never stands still, and neither do we. In this episode of Radio Raccoons we discuss the wave of new AI models launched in recent weeks, including GPT-4.1 (mini and nano), Google's Lyria, and Meta's Llama 4. We also talk about transparency in language models thanks to OLMoTrace, and Claude's questionable reasoning abilities. In the deep dive we welcome a special guest who shares his insights on post-quantum encryption and FHE: Jan-Pieter d'Anvers, in collaboration with CyberSec Europe. Finally, we of course also have a tool tip (vibe coding lovers, head over to Firebase!) and a watercooler show-off about communicating with dolphins.

Tech scoops:
  • OpenAI continues naming chaos despite CEO acknowledging the habit (Sam Altman on Twitter / X)
  • Meta releases two Llama 4 AI models
  • Ironwood: The first Google TPU for the age of inference
  • OLMoTrace: Tracing Language Model Outputs Back to Trillions of…
  • Researchers concerned to find AI models misrepresenting their "reasoning" processes
  • OpenAI wants Europe to build the infrastructure it needs to profit from European markets

Crystal ball: Microsoft has created an AI-generated version of Quake

Deep dive: Sign up for CyberSec Europe

Tool tip: Firebase Studio

Watercooler show-off: Google created a new AI model for talking to dolphins

programmier.bar – der Podcast für App- und Webentwicklung
News 16/25: Firebase Studio // Zod 4 // CVE-Ende // AI Code Interviews

Apr 16, 2025 · 35:57


Following our special on Google Cloud Next, Dennis reports in more detail on Google's new AI cloud IDE: Firebase Studio. We also finally get to welcome Fabi back to the podcast studio and are curious to hear everything he has to report about the latest release of the validation library Zod (a short Zod sketch follows these notes). And once again this week we have to talk about events on the other side of the Atlantic: the latest cost-cutting measures by the US government mean that the well-known CVE program is effectively being shut down, and popular projects like Let's Encrypt are affected too. From Dave we learn how a young developer with a self-built AI tool first received plenty of job offers from the big Silicon Valley companies, and then lost his university place because of it.

This year we are once again giving away tickets to the WeAreDevelopers World Congress together with WeAreDevelopers. Listen to the episode to find out how to take part! All further details about the giveaway are at https://www.programmier.bar/gewinnspiel.

Write to us! Send us your topic requests and your feedback: podcast@programmier.bar
Follow us! Stay up to date on future episodes and virtual meetups and join the community discussions: Bluesky, Instagram, LinkedIn, Meetup, YouTube
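For context on Zod, here is a minimal, hedged sketch of the kind of schema validation the library provides. It uses the long-standing Zod v3-style API; whatever changes in the Zod 4 release discussed in the episode is not reflected here, and the schema itself is invented for illustration.

```typescript
// Minimal Zod sketch: define a schema and validate untrusted input.
import { z } from "zod";

const SignupSchema = z.object({
  email: z.string().email(),
  age: z.number().int().min(13),
  newsletter: z.boolean().default(false),
});

// safeParse returns a result object instead of throwing on invalid input.
const result = SignupSchema.safeParse({ email: "fabi@example.com", age: 29 });

if (result.success) {
  // result.data is fully typed: { email: string; age: number; newsletter: boolean }
  console.log("valid:", result.data);
} else {
  console.log("validation errors:", result.error.issues);
}
```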

Hashtag Trending
Exploring AI-Generated Code: Vibe Coding and the Future of Software Development | Project Synapse

Apr 12, 2025 · 67:08 (transcription available)


In this episode of Project Synapse, join our group of AI-obsessed IT professionals as they discuss the intriguing concept of AI-generated code, specifically focusing on 'vibe coding.' Marcel Gagne, a former system administrator turned author, dives deep into the history and potential of writing code through AI. The episode covers the evolution from early programming languages like COBOL and Fortran to modern AI coding tools like Cursor and Firebase. Discover how AI tools aid in prototyping, personal productivity, and the future possibilities in enterprise-level applications. The team also explores security implications, testing methodologies, and the importance of responsible AI use in development. Tune in to learn about the present and future impact of AI on programming and systems development.

Chapters:
00:00 Introduction to Project Synapse
00:36 Meet the Hosts
02:53 The Evolution of Programming Languages
08:27 The Rise of Vibe Coding
14:00 Practical Applications and Experiences
19:54 Advanced Tools and IDEs
22:39 Challenges and Solutions in AI Coding
29:56 Starting Fresh: The Importance of Context
33:04 Introduction to Programming by Kenny Rogers
33:28 Legendary Programmer John Carmack
33:38 AI in Game Development
34:19 Nostalgia for Classic Games
35:59 The Evolution of Game Engines
38:04 AI's Role in Modern Coding
38:36 Proof of Concept and Rapid Prototyping
44:20 Security Concerns with AI-Generated Code
50:06 The Future of AI in Enterprise Systems
01:00:27 The Importance of Testing and Security
01:03:44 Final Thoughts and Recommendations

Empower Apps
Full Stack Things with Werner Jainek and Vojtěch Rylko

Mar 27, 2025 · 49:12


Werner Jainek and Vojtěch Rylko from Cultured Code talk about their migration of Things Cloud to server-side Swift and what they learned along the way.

Guest:
  • Things - To-Do List for Mac & iOS
  • Things (@things.app) on Bluesky
  • Things (@culturedcode)
  • Things (@things@mastodon.online)
  • Werner Jainek (@jainek@mastodon.social)
  • Vojtěch Rylko: Vojtech Rylko (@vry@mastodon.social), Vojtěch Rylko | LinkedIn, Vojtěch Rylko (@vojtechrylko), vojtarylko (Vojtech Rylko)

Announcements: Join Bushel Beta, Join our Patreon!, Newsletters | BrightDigit

Links:
  • Swift.org - How Swift's server support powers Things Cloud
  • The Success Story of Server-Side Swift at Cultured Code - Vojtech Rylko - YouTube

Related Episodes: Swift on Android with Marc Prud'hommeaux; Swift, Server Side, Serverless with Sébastien Stormacq; Full Stack Lyriq with Adegboyega Olusunmade; PixelBlitz in Public with Martin Lasek; Swiftly Tooling with Pol Piella Abadia; Backend Decisions with Mikaela Caron; What is Firebase with Peter Friese; AWS and SOTO with Adam Fowler

Social Media: Email leo@brightdigit.com; GitHub @brightdigit; Twitter: BrightDigit @brightdigit, Leo @leogdion; LinkedIn: BrightDigit, Leo; Patreon: brightdigit

Credits: Music from https://filmmusic.io, "Blippy Trance" by Kevin MacLeod (https://incompetech.com), License: CC BY (http://creativecommons.org/licenses/by/4.0/)

Chapters:
(00:00) - Overview of Cultured Code and Things App
(02:19) - Migrating to Server-Side Swift
(09:07) - Technical Challenges and Solutions
(27:56) - Background Workers and Swift
(32:11) - Swift 6 Adoption
(36:34) - Chaos Testing and Deployment

Thanks to our monthly supporters: Tomáš Slíž, Edward Sanchez, Steven Lipton
★ Support this podcast on Patreon ★

Now in Android
114 - Google I/O 2025, Android Studio at 10, Android 16 Betas, and more!

Mar 21, 2025 · 6:27


Welcome to Now in Android, your ongoing guide to what's new and notable in the world of Android development. In this episode, we'll cover the return of Google I/O, Android Studio turning 10, the Android 16 betas, Imagen in Firebase, the latest in AndroidX, and more!

For links to these items, check out Now in Android #114 on Medium → https://goo.gle/4hA69xv
Catch the latest episode of #TheAndroidShow here → https://goo.gle/tas-mar25
Watch more Now in Android → https://goo.gle/now-in-android
Subscribe to Android Developers → https://goo.gle/AndroidDevs

Rocket Ship
#061 - Shipping Successful AI Apps with Your Average Tech Bro

Feb 25, 2025 · 54:26


In this conversation, Simon Grimm interviews Dohyun Kim, known as YourAverageTechBro, about his journey as an app developer and content creator. They discuss the challenges and successes of building apps, the importance of marketing, and the technologies used in app development, including React Native, Supabase, and AI tools. Dohyun shares insights on his most successful app, Montee, and the strategies behind its development and marketing, as well as lessons learned from previous projects. He also covers the use of Next.js and Supabase for differentiation and backend management, API security, handling costs, and user management strategies, plus ideation, keyword research, social media marketing tactics, and his preferences between web and mobile app development.

Learn React Native: https://galaxies.dev

Dohyun Kim:
  • YouTube: https://www.youtube.com/@YourAverageTechBro
  • TikTok: https://www.tiktok.com/@youraveragetechbro
  • Instagram: https://www.instagram.com/youraveragetechbro
  • X: https://x.com/youravgtechbro

Links:
  • Montee: https://www.montee.ai
  • Perfect Interview: https://www.perfectinterview.ai
  • Gemini: https://ai.google.dev/

Takeaways:
  • Dohyun prefers technologies that allow rapid development and shipping.
  • He believes in copying successful ideas rather than focusing on originality: copy first and differentiate second.
  • Montee, his AI meeting recorder app, reached $1,500 in monthly recurring revenue shortly after launch.
  • App growth is often a series of step functions rather than exponential growth, and churn has a real impact on revenue.
  • Effective marketing strategies are key to app success; social media can drive app visibility, and Instagram is currently more explosive for growth than TikTok.
  • Dohyun prefers Supabase over Firebase for its relational database capabilities and better documentation.
  • The PerfectInterview.ai stack includes Next.js and Gemini.
  • API keys should never be exposed in client-side code, and user requests should always be traceable to prevent abuse (see the sketch after these notes).
  • Action bias is crucial for shipping apps; keyword research is not the only way to come up with app ideas.
  • Web apps allow for faster updates and cash flow management; developers should focus on building value-adding features and distinguish between fun projects and income-generating apps.
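As a hedged illustration of the API-key takeaway (this is not code from the episode; the route, upstream URL, and environment variable name are all hypothetical), the usual pattern is to keep the secret on the server and expose only a thin, per-user endpoint to the client:

```typescript
// Minimal sketch: the client never sees the provider API key; it only calls /api/summarize.
// Route, env var, and upstream URL are placeholders for illustration.
import express from "express";

const app = express();
app.use(express.json());

app.post("/api/summarize", async (req, res) => {
  const { userId, transcript } = req.body as { userId?: string; transcript?: string };

  // Make every request traceable to a user so abuse can be detected and rate-limited.
  if (!userId || !transcript) {
    res.status(400).json({ error: "userId and transcript are required" });
    return;
  }
  console.log(`summarize request from user=${userId}, chars=${transcript.length}`);

  // The secret lives only in the server environment, never in client-side code.
  const upstream = await fetch("https://llm-provider.example/v1/summarize", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.LLM_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ text: transcript }),
  });

  res.status(upstream.status).json(await upstream.json());
});

app.listen(3000, () => console.log("API proxy listening on :3000"));
```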

App Masters - App Marketing & App Store Optimization with Steve P. Young
The Best Alternative to Firebase Dynamic Links

Feb 18, 2025 · 14:51


The Deep Linking Tool You Need for Seamless User Journeys!

Kodsnack
Kodsnack 630 - Jag får göra det själv, med Oskar Wahlbäck

Feb 18, 2025 · 58:45


Fredrik talks with Oskar Wahlbäck about building and testing ideas, as quickly and as often as possible, and with the help of language models in order to get more done faster without having to bring in more developers. Language models have become a natural and important part of Oskar's process, and he explains how he works with them and how he thinks about them. Oskar talks at length about how he has worked on various products and ideas, and how he works and thinks in order to find out, as quickly as possible, both whether an idea is good and whether it can attract any customers. Asking your mum is, unfortunately, not the right way forward. What are you prepared to do to test an idea? Be aware of it, and adapt accordingly. Is it an obligation to do something you actually want to do?

A big thank you to Cloudnet for sponsoring our VPS! Comments, questions or tips? We are @kodsnack, @thieta, @krig, and @bjoreman on Mastodon, have a page on Facebook, and can be emailed at info@kodsnack.se if you want to write at greater length. We read everything that is sent in. If you like Kodsnack, we would love a review in iTunes! You can also support the podcast by buying us a coffee (or two!) on Ko-fi, or by buying something in our shop.

Links: Oskar, Bokaklipp, Way out west, Beetroot, Beetroot academy, VBA (Visual Basic for Applications), Firebase and Firestore, Google Cloud Functions, concurrency, transactions, support us on Ko-fi, the "agile train driver" sticker, Glide, Aquire (a site where you can sell early-stage startups and services), The Mom Test (book), Lean Startup, Flutterflow, Fiverr

Episode title candidates (in Swedish): Inte kodare från början; En klassisk start; Foodora fast i Burma; Boka i kommentarerna; Det hade inte ChatGPT heller tänkt på; En app för dig själv; Alla idéer kommer inte att funka; Är det här etiskt?; Du måste testa; Våga börja; Jag får göra det själv; Göra i princip vad som helst; Fallhöjden noll, för alla

Declarando Variables
Lanza tu app sin gastar ni un peso: te cuento cómo [#99]

Feb 11, 2025 · 25:53


Want to launch your app without spending a single cent?

Les Cast Codeurs Podcast
LCC 322 - Maaaaveeeeen 4 !

Feb 9, 2025 · 77:13


Arnaud and Emmanuel discuss this month's news: JVM integrity, JDBC fetch size, MCP, prompt engineering, DeepSeek of course, but also Maven 4 and Maven repository proxies. And a few other things besides, happy reading. Recorded on February 7, 2025. Download the episode (LesCastCodeurs-Episode-322.mp3) or watch the video on YouTube.

News

Languages
  • How the JVM is evolving to strengthen integrity: https://inside.java/2025/01/03/evolving-default-integrity/ . An article on why framework authors and users are tearing their hair out, and why the JVM will keep guaranteeing the integrity of code and data by removing historically available APIs: dynamic agents, setAccessible, Unsafe, JNI. The article explains the risks as perceived by the JVM maintainers; frankly, it is a bit light on the causes and reads like self-promotion.
  • JavaScript Temporal, at last a clean, modern API for handling dates in JS: https://developer.mozilla.org/en-US/blog/javascript-temporal-is-coming/ . Temporal is a new object designed to replace the flawed Date object. It fixes problems such as the lack of time zone support and mutability, and introduces concepts like instants, plain (civil) times, and durations. It provides classes for the various date/time representations, both time-zone-aware and not, simplifies working with different calendars (Chinese or Hebrew, for example), and includes methods for comparing, converting, and formatting dates and times. Browser support is experimental, with Firefox Nightly having the most complete implementation; a polyfill is available to try Temporal in any browser.

Libraries
  • An article on JDBC fetch size and its impact on your applications: https://in.relation.to/2025/01/24/jdbc-fetch-size/ . Who knows their driver's default fetch size?
Depending on your use case it can be devastating: for example, an application that returns 12 rows against Oracle's default fetch size of 10 makes two round trips for nothing, while if 50 rows come back, the database, not Java, becomes the limiting factor. So raising the fetch size pays off: you spend Java memory to avoid latency.
  • Quarkus announces the MCP servers project, a collection of MCP servers written in Java: https://quarkus.io/blog/introducing-mcp-servers/ . MCP comes from Anthropic; the servers include a JDBC database introspector, a file-system reader, and a JavaFX drawing server, all easy to start with jbang and tested with Claude Desktop, goose, and mcp-cli. It lets your AI tap into the power of Java libraries. Spring, for its part, has released version 0.6 of its MCP support: https://spring.io/blog/2025/01/23/spring-ai-mcp-0

Infrastructure
  • Apache Flink on Kubernetes: https://www.decodable.co/blog/get-running-with-apache-flink-on-kubernetes-2 . A very complete two-part article on running Flink on Kubernetes: installation and setup, but also checkpointing, high availability, and observability.

Data and Artificial Intelligence
  • 10 prompt engineering techniques every beginner should know: https://medium.com/google-cloud/10-prompt-engineering-techniques-every-beginner-should-know-bf6c195916c7 . To go further, the article references a very good white paper on prompt engineering: https://www.kaggle.com/whitepaper-prompt-engineering . The techniques covered (a small sketch of the first two follows this list):
    - Zero-shot prompting: ask the AI to answer a question directly, without any prior example, like asking someone a question with no context.
    - Few-shot prompting: give the AI one or more examples of the task you want it to perform, like showing someone how to do something before asking them to do it.
    - System prompting: define the overall context and purpose of the task, like giving the AI general instructions about what it should do.
    - Role prompting: assign the AI a specific role (teacher, journalist, etc.), like asking someone to play a particular part.
    - Contextual prompting: provide additional information or context for the task, like giving someone everything they need to answer a question.
    - Step-back prompting: ask a general question first, then use the answer to ask a more specific one, like asking an open question before a closed one.
    - Chain-of-thought prompting: ask the AI to show, step by step, how it reaches its conclusion, like asking someone to explain their reasoning.
    - Self-consistency prompting: ask the AI the same question several times and compare the answers to find the most consistent one, like checking an answer by asking it in different ways.
    - Tree-of-thoughts prompting: let the AI explore several reasoning paths at the same time, like considering every possible option before deciding.
    - ReAct prompting: let the AI interact with external tools to solve complex problems, like giving someone the tools they need to solve a problem.
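To make the first two techniques concrete, here is a tiny, hedged sketch in plain TypeScript. No particular LLM SDK is assumed and the review texts are invented for illustration; only the shape of the prompts matters.

```typescript
// Zero-shot: the task is stated with no examples.
const zeroShotPrompt = `Classify the sentiment of this review as POSITIVE or NEGATIVE.
Review: "The battery died after two days."`;

// Few-shot: the same task, preceded by a couple of worked examples
// so the model can infer the expected format and behaviour.
const fewShotPrompt = `Classify the sentiment of each review as POSITIVE or NEGATIVE.
Review: "Great screen, arrived a day early." -> POSITIVE
Review: "Stopped working after a week." -> NEGATIVE
Review: "The battery died after two days." ->`;

console.log(zeroShotPrompt + "\n---\n" + fewShotPrompt);
```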
  • Thoughtworks' GenAI patterns: https://martinfowler.com/articles/gen-ai-patterns/ . Very introductory and pre-RAG: the direct prompt, a direct call to the LLM with its limits in knowledge and in controlling the experience, and evals, evaluating an LLM's output with several techniques but fundamentally a function that takes the request and the response and produces a numeric score, whether done by an LLM (the same one or another) or by a human. Run the evaluations from the build pipeline, but also live, since LLMs can evolve. The article also describes embeddings, for images as well as text, with the notion of context.
  • DeepSeek and the end of NVIDIA's dominance: https://youtubetranscriptoptimizer.com/blog/05_the_short_case_for_nvda . An article on why NVIDIA will be challenged on its margins (90%, after all, thanks to the biggest GPUs and the proprietary CUDA): more efficient alternative hardware approaches exist (TPUs and giant wafers), Google, Microsoft and others are building their own GPU alternatives, and CUDA is less and less the lingua franca as Apple, Google, OpenAI and others invest in alternative intermediate languages. The article also covers DeepSeek, which slapped the LLM world awake: they built a competitor to GPT-4o and o1 for 5 million dollars, with impressive reasoning capabilities. The key was a lot of optimization tricks, the biggest being 8-bit neural network weights instead of the 32 bits used by others, quantizing on the fly during training, plus a lot of innovative reinforcement learning and Mixture of Experts, making it roughly 50x cheaper than OpenAI and removing the need for GPUs with tons of VRAM. And DeepSeek is open source. A SemiAnalysis article shifts the narrative somewhat: the DeepSeek paper says a lot through its omissions. The 6M figure is just GPU inference, not research costs and the various trials and errors (by comparison, Claude Sonnet cost 10M in inference), and DeepSeek has a lot of compute, acquired pre-ban and some post-ban, valued at 5 billion in investment. Their advances and their openness remain extremely interesting.
  • An introduction to Apache Iceberg: http://blog.ippon.fr/2025/01/17/la-revolution-des-donnees-lavenement-des-lakehouses-avec-apache-iceberg/ . Born from the limits of the unstructured data lake and of data warehouses constrained in data diversity and volume, enter the lakehouses, and in particular Apache Iceberg, which came out of Netflix: flexible schema management, copy-on-write vs merge-on-read depending on your needs, atomicity, consistency, isolation and durability guarantees, time travel and rollback, hidden partitions (which abstract away the partition and its transformations) and partition evolution, and compatibility with compute engines like Spark, Trino, Flink, etc. The article explains the structure of the metadata and of the data.
  • Guillaume has fun generating short science-fiction stories by programming AI agents with LangChain4j, and also with workflows: https://glaforge.dev/posts/2025/01/27/an-ai-agent-to-generate-short-scifi-stories/ and https://glaforge.dev/posts/2025/01/31/a-genai-agent-with-a-real-workflow/ . An automated science-fiction short-story generator built with Gemini and Imagen in Java, with LangChain4j, on Google Cloud. Every night the system generates stories, complete with illustrations created by the Imagen 3 model, and publishes them to a website.
A self-reflection step uses Gemini to select the best image for each chapter. The agent uses an explicit workflow driven by Java code, where the steps are predefined in the code rather than relying on LLM-based planning. The code is available on GitHub and the application is deployed on Google Cloud. The article contrasts explicit-workflow agents with autonomous agents, highlighting the trade-offs of each approach: autonomous AI agents that manage their own planning sometimes hallucinate a bit too much, fail to build a proper plan, don't follow it correctly, or even hallucinate "function calls". The project uses Cloud Build, Cloud Run jobs, Cloud Scheduler, Firestore as the database, and Firebase for frontend deployment and automation. In the second article the approach is different: Guillaume uses a workflow tool rather than driving the planning with Java code. The imperative approach uses explicit Java code to orchestrate the workflow, giving precise control and parallelization. The declarative approach uses a YAML file to define the workflow, specifying the steps, inputs, outputs, and execution order. The workflow includes steps to generate a story with Gemini 2, create an image prompt, generate images with Imagen 3, and save the result to Cloud Firestore (a NoSQL database). The main advantages of the imperative approach are precise control, explicit parallelization, and familiar programming tools. The main advantages of the declarative approach are workflow definitions that may be easier to understand (even if it's YAML, ugh!), visualization, scalability, and simplified maintenance (you can just change the YAML in the console, like in the good old days of PHP in production). The drawbacks of the imperative approach include the need for programming knowledge, potential maintenance challenges, and container management. The drawbacks of the declarative approach include painful YAML authoring, limited control over parallelization, no local emulator, and less intuitive debugging. The choice between the approaches depends on the project's requirements, with the declarative approach suited to simpler workflows. The article concludes that declarative planning can help AI agents stay focused and predictable.

Tooling
  • Vulnerabilities in Maven proxy repositories: https://github.blog/security/vulnerability-research/attacks-on-maven-proxy-repositories/ . Whatever the language or technology, it is strongly advised to put repository managers in place as proxies, in order to better control the dependencies that go into building your products. Michael Stepankin of the GitHub Security Lab set out to discover whether these managers are themselves a source of vulnerabilities, by studying a few CVEs in products such as JFrog Artifactory, Sonatype Nexus, and Reposilite. Some flaws come from the products' UI, which can display artifacts (for example, put JavaScript in a POM file) and even browse inside them (for example, viewing the contents of a jar or zip and exploiting the API to read, or even modify, files on the server outside the archives). Artifacts can also be compromised by playing with proprietary URL parameters or with naming and encoding tricks.
In short, nothing is simple or bulletproof. Every system adds complexity, and it is important to keep them up to date. Actively monitor your supply chain through several means and don't bet everything on the repository manager. The author gave a talk on the subject: https://www.youtube.com/watch?v=0Z_QXtk0Z54
  • Apache Maven 4... soon, promise... what will be in it? https://gnodet.github.io/maven4-presentation/ and also https://github.com/Bukama/MavenStuff/blob/main/Maven4/whatsnewinmaven4.md . Slowly but surely, that's the principle of a project: Maven 4.0.0-rc-2 is available (Dec 2024). Maven is more than 20 years old and widely used across the Java ecosystem; backward compatibility has always been a priority, but it has limited flexibility. Maven 4 introduces significant changes, notably a new build schema and code improvements.
    POM changes:
    - Separation of the Build-POM and the Consumer-POM: the Build-POM contains build-only information (plugins, configuration), while the Consumer-POM contains only what artifact consumers need (dependencies).
    - New model version 4.1.0: used only for the Build-POM, while the Consumer-POM stays at 4.0.0 for compatibility; it introduces new elements and marks some as deprecated.
    - Modules renamed to subprojects: "modules" become "subprojects" to avoid confusion with Java modules; the new element replaces the old one, which remains supported.
    - New "bom" (Bill of Materials) packaging type: distinguishes parent POMs from dependency-management BOMs, with support for exclusions and classifier-based imports.
    - Explicit root directory declaration: the project's root directory can now be defined explicitly, removing any ambiguity about where project roots are located.
    - New directory variables ${project.rootDirectory}, ${session.topDirectory} and ${session.rootDirectory} for better path handling, replacing the old unofficial workarounds and deprecated internal variables.
    - Alternative POM syntaxes: a ModelParser SPI allows alternative syntaxes for the POM; the Apache Maven Hocon Extension is an early example of this capability.
    Improvements for subprojects:
    - Automatic parent versioning: no need to declare the parent version in every subproject anymore; works with model version 4.1.0 and extends to dependencies within the project.
    - Full support for CI-friendly variables: the Flatten Maven Plugin is no longer required; variables like ${revision} are supported for versioning and can be set via maven.config or the command line (mvn verify -Drevision=4.0.1).
    Reactor improvements and fixes:
    - Bug fix: improved handling of --also-make when resuming builds.
    - New --resume (-r) option to restart from the last failed subproject; subprojects already built successfully are skipped on resume.
    - Subfolder-aware builds: tools can be run on selected subprojects only. Recommendation: use mvn verify rather than mvn clean install.
    Other improvements:
    - Consistent timestamps for all subprojects in packaged archives.
    - Improved deployment: deployment only happens if all subprojects have built successfully.
    Workflow, lifecycle and execution changes:
    - Java 17 is the minimum JDK required to run Maven 4 (older Java versions can still be targeted for compilation via Maven Toolchains); Java 17 was preferred over Java 21 because of its longer long-term support.
    - Plugin updates and application maintenance: deprecated features are removed (e.g. Plexus Containers, ${pom.} expressions) and the Super POM is updated, changing the default plugin versions. Builds may behave differently, so pin your plugin versions to avoid unexpected changes; Maven 4 warns when default versions are used.
    - New "fail on severity" setting: the build can fail when log messages reach a given severity (e.g. WARN), via --fail-on-severity WARN or -fos WARN.
    - Maven Shell (mvnsh): every mvn run used to require a full Java/Maven restart; Maven 4 introduces Maven Shell, which keeps a single resident Maven process open across commands, improving performance and reducing build times. Alternative: the Maven Daemon (mvnd), which manages a pool of resident Maven processes.

Architecture
  • An article on feature flags with Unleash: https://feeds.feedblitz.com//911939960/0/baeldungImplement-Feature-Flags-in-Java-With-Unleash . For A/B testing and faster development cycles, to "test in production"; it shows how to run Unleash under Docker and add the library to Java code to check a feature flag.

Security
  • Keycloak 26.1: https://www.keycloak.org/2025/01/keycloak-2610-released.html . Node detection now probes the database instead of relying on network exchanges, virtual threads are used for Infinispan and JGroups, OpenTelemetry tracing is supported, plus plenty of security features.

Law, society and organization
  • The big line items in a conference's costs and revenues, here for BDX I/O (http://bdx.io): https://bsky.app/profile/ameliebenoit33.bsky.social/post/3lgzslhedzk2a . Tickets 44%, sponsors 52%; venue rental 38%, catering and coffee 29%, booths 12%, speaker expenses 5% (so not all speakers are covered).

Ask Me Anything
  • Julien de Provin: I really like Quarkus's "continuous testing" mode, and I was wondering whether an alternative exists outside Quarkus or, failing that, resources on how it works. I would love an agnostic tool I can use on the non-Quarkus projects I work on, even if it means putting in a bit of elbow grease (or finger grease, in this case).
    Answer: https://github.com/infinitest/infinitest/

Conferences
The conference list comes from Developers Conferences Agenda/List by Aurélie Vache and contributors:
  • 6-7 February 2025: Touraine Tech - Tours (France)
  • 21 February 2025: LyonJS 100 - Lyon (France)
  • 28 February 2025: Paris TS La Conf - Paris (France)
  • 6 March 2025: DevCon #24 : 100% IA - Paris (France)
  • 13 March 2025: Oracle CloudWorld Tour Paris - Paris (France)
  • 14 March 2025: Rust In Paris 2025 - Paris (France)
  • 19-21 March 2025: React Paris - Paris (France)
  • 20 March 2025: PGDay Paris - Paris (France)
  • 20-21 March 2025: Agile Niort - Niort (France)
  • 25 March 2025: ParisTestConf - Paris (France)
  • 26-29 March 2025: JChateau Unconference 2025 - Cour-Cheverny (France)
  • 27-28 March 2025: SymfonyLive Paris 2025 - Paris (France)
  • 28 March 2025: DataDays - Lille (France)
  • 28-29 March 2025: Agile Games France 2025 - Lille (France)
  • 3 April 2025: DotJS - Paris (France)
  • 3 April 2025: SoCraTes Rennes 2025 - Rennes (France)
  • 4 April 2025: Flutter Connection 2025 - Paris (France)
  • 4 April 2025: aMP Orléans 04-04-2025 - Orléans (France)
  • 10-11 April 2025: Android Makers - Montrouge (France)
  • 10-12 April 2025: Devoxx Greece - Athens (Greece)
  • 16-18 April 2025: Devoxx France - Paris (France)
  • 23-25 April 2025: MODERN ENDPOINT MANAGEMENT EMEA SUMMIT 2025 - Paris (France)
  • 24 April 2025: IA Data Day 2025 - Strasbourg (France)
  • 29-30 April 2025: MixIT - Lyon (France)
  • 7-9 May 2025: Devoxx UK - London (UK)
  • 15 May 2025: Cloud Toulouse - Toulouse (France)
  • 16 May 2025: AFUP Day 2025 Lille - Lille (France)
  • 16 May 2025: AFUP Day 2025 Lyon - Lyon (France)
  • 16 May 2025: AFUP Day 2025 Poitiers - Poitiers (France)
  • 24 May 2025: Polycloud - Montpellier (France)
  • 24 May 2025: NG Baguette Conf 2025 - Nantes (France)
  • 5-6 June 2025: AlpesCraft - Grenoble (France)
  • 5-6 June 2025: Devquest 2025 - Niort (France)
  • 10-11 June 2025: Modern Workplace Conference Paris 2025 - Paris (France)
  • 11-13 June 2025: Devoxx Poland - Krakow (Poland)
  • 12-13 June 2025: Agile Tour Toulouse - Toulouse (France)
  • 12-13 June 2025: DevLille - Lille (France)
  • 13 June 2025: Tech F'Est 2025 - Nancy (France)
  • 17 June 2025: Mobilis In Mobile - Nantes (France)
  • 24 June 2025: WAX 2025 - Aix-en-Provence (France)
  • 25-26 June 2025: Agi'Lille 2025 - Lille (France)
  • 25-27 June 2025: BreizhCamp 2025 - Rennes (France)
  • 26-27 June 2025: Sunny Tech - Montpellier (France)
  • 1-4 July 2025: Open edX Conference - 2025 - Palaiseau (France)
  • 7-9 July 2025: Riviera DEV 2025 - Sophia Antipolis (France)
  • 18-19 September 2025: API Platform Conference - Lille (France) & Online
  • 2-3 October 2025: Volcamp - Clermont-Ferrand (France)
  • 6-10 October 2025: Devoxx Belgium - Antwerp (Belgium)
  • 9-10 October 2025: Forum PHP 2025 - Marne-la-Vallée (France)
  • 16-17 October 2025: DevFest Nantes - Nantes (France)
  • 4-7 November 2025: NewCrafts 2025 - Paris (France)
  • 6 November 2025: dotAI 2025 - Paris (France)
  • 7 November 2025: BDX I/O - Bordeaux (France)
  • 12-14 November 2025: Devoxx Morocco - Marrakech (Morocco)
  • 28-31 January 2026: SnowCamp 2026 - Grenoble (France)
  • 23-25 April 2026: Devoxx Greece - Athens (Greece)
  • 17 June 2026: Devoxx Poland - Krakow (Poland)

Contact us
To react to this episode, come and chat on the Google group: https://groups.google.com/group/lescastcodeurs
Reach us on X/Twitter https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com
Record a crowdcast or ask a crowdquestion.
Support Les Cast Codeurs on Patreon: https://www.patreon.com/LesCastCodeurs
All episodes and all the info at https://lescastcodeurs.com/

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Outlasting Noam Shazeer, crowdsourcing Chat + AI with >1.4m DAU, and becoming the "Western DeepSeek" — with William Beauchamp, Chai Research

Jan 26, 2025 · 75:46


One last Gold sponsor slot is available for the AI Engineer Summit in NYC. Our last round of invites is going out soon - apply here. If you are building AI agents or AI eng teams, this will be the single highest-signal conference of the year for you!

While the world melts down over DeepSeek, few are talking about the OTHER notable group of former hedge fund traders who pivoted into AI and built a remarkably profitable consumer AI business with a tiny, incredibly cracked engineering team: Chai Research. In short order they have:
* Started a Chat AI company well before Noam Shazeer started Character AI, and outlasted his departure.
* Crossed 1m DAU in 2.5 years - William updates us on the pod that they've hit 1.4m DAU now, another +40% from a few months ago. Revenue crossed >$22m.
* Launched the Chaiverse model crowdsourcing platform - taking 3-4 week A/B testing cycles down to 3-4 hours, and deploying >100 models a week.

While they're not paying million dollar salaries, you can tell they're doing pretty well for an 11 person startup.

The Chai Recipe: Building infra for rapid evals

Remember how the central thesis of LMArena (formerly LMsys) is that the only comprehensive way to evaluate LLMs is to let users try them out and pick winners? At the core of Chai is a mobile app that looks like Character AI, but is actually the largest LLM A/B testing arena in the world, specialized on retaining chat users for Chai's use cases (therapy, assistant, roleplay, etc). It's basically what LMArena would be if taken very, very seriously at one company (with $1m in prizes to boot). Chai publishes occasional research on how they think about this, including talks at their Palo Alto office.

William expands upon this in today's podcast (34 mins in):

"Fundamentally, the way I would describe it is when you're building anything in life, you need to be able to evaluate it. And through evaluation, you can iterate, we can look at benchmarks, and we can say the issues with benchmarks and why they may not generalize as well as one would hope in the challenges of working with them. But something that works incredibly well is getting feedback from humans. And so we built this thing where anyone can submit a model to our developer backend, and it gets put in front of 5000 users, and the users can rate it. And we can then have a really accurate ranking of like which model, or users finding more engaging or more entertaining. And it gets, you know, it's at this point now, where every day we're able to, I mean, we evaluate between 20 and 50 models, LLMs, every single day, right. So even though we've got only got a team of, say, five AI researchers, they're able to iterate a huge quantity of LLMs, right. So our team ships, let's just say minimum 100 LLMs a week is what we're able to iterate through. Now, before that moment in time, we might iterate through three a week, we might, you know, there was a time when even doing like five a month was a challenge, right? By being able to change the feedback loops to the point where it's not, let's launch these three models, let's do an A-B test, let's assign, let's do different cohorts, let's wait 30 days to see what the day 30 retention is, which is the kind of the, if you're doing an app, that's like A-B testing 101 would be, do a 30-day retention test, assign different treatments to different cohorts and come back in 30 days. So that's insanely slow. That's just, it's too slow. And so we were able to get that 30-day feedback loop all the way down to something like three hours."

In Crowdsourcing the leap to Ten Trillion-Parameter AGI, William describes Chai's routing as a recommender system, which makes a lot more sense to us than previous pitches for model routing startups. William is notably counter-consensus in a lot of his AI product principles:
* No streaming: chats appear all at once to allow rejection sampling.
* No voice: Chai actually beat Character AI to introducing voice, but removed it after finding that it was far from a killer feature.
* Blending: "Something that we love to do at Chai is blending, which is, you know, it's the simplest way to think about it is you're going to end up, and you're going to pretty quickly see you've got one model that's really smart, one model that's really funny. How do you get the user an experience that is both smart and funny? Well, just 50% of the requests, you can serve them the smart model, 50% of the requests, you serve them the funny model." (that's it!)

But chief above all is the recommender system. We also referenced Exa CEO Will Bryk's concept of SuperKnowlege.

Full video version on YouTube. Please like and subscribe!

Timestamps
* 00:00:04 Introductions and background of William Beauchamp
* 00:01:19 Origin story of Chai AI
* 00:04:40 Transition from finance to AI
* 00:11:36 Initial product development and idea maze for Chai
* 00:16:29 User psychology and engagement with AI companions
* 00:20:00 Origin of the Chai name
* 00:22:01 Comparison with Character AI and funding challenges
* 00:25:59 Chai's growth and user numbers
* 00:34:53 Key inflection points in Chai's growth
* 00:42:10 Multi-modality in AI companions and focus on user-generated content
* 00:46:49 Chaiverse developer platform and model evaluation
* 00:51:58 Views on AGI and the nature of AI intelligence
* 00:57:14 Evaluation methods and human feedback in AI development
* 01:02:01 Content creation and user experience in Chai
* 01:04:49 Chai Grant program and company culture
* 01:07:20 Inference optimization and compute costs
* 01:09:37 Rejection sampling and reward models in AI generation
* 01:11:48 Closing thoughts and recruitment

Transcript

Alessio [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel, and today we're in the Chai AI office with my usual co-host, Swyx.

swyx [00:00:14]: Hey, thanks for having us. It's rare that we get to get out of the office, so thanks for inviting us to your home. We're in the office of Chai with William Beauchamp. Yeah, that's right. You're founder of Chai AI, but previously, I think you're concurrently also running your fund?

William [00:00:29]: Yep, so I was simultaneously running an algorithmic trading company, but I fortunately was able to kind of exit from that, I think just in Q3 last year. Yeah, congrats. Yeah, thanks.

swyx [00:00:43]: So Chai has always been on my radar because, well, first of all, you do a lot of advertising, I guess, in the Bay Area, so it's working. Yep. And second of all, the reason I reached out to a mutual friend, Joyce, was because I'm just generally interested in the consumer AI space, chat platforms in general. I think there's a lot of inference insights that we can get from that, as well as human psychology insights, kind of a weird blend of the two. And we also share a bit of a history as former finance people crossing over.
I guess we can just kind of start it off with the origin story of Chai.William [00:01:19]: Why decide working on a consumer AI platform rather than B2B SaaS? So just quickly touching on the background in finance. Sure. Originally, I'm from... I'm from the UK, born in London. And I was fortunate enough to go study economics at Cambridge. And I graduated in 2012. And at that time, everyone in the UK and everyone on my course, HFT, quant trading was really the big thing. It was like the big wave that was happening. So there was a lot of opportunity in that space. And throughout college, I'd sort of played poker. So I'd, you know, I dabbled as a professional poker player. And I was able to accumulate this sort of, you know, say $100,000 through playing poker. And at the time, as my friends would go work at companies like ChangeStreet or Citadel, I kind of did the maths. And I just thought, well, maybe if I traded my own capital, I'd probably come out ahead. I'd make more money than just going to work at ChangeStreet.swyx [00:02:20]: With 100k base as capital?William [00:02:22]: Yes, yes. That's not a lot. Well, it depends what strategies you're doing. And, you know, there is an advantage. There's an advantage to being small, right? Because there are, if you have a 10... Strategies that don't work in size. Exactly, exactly. So if you have a fund of $10 million, if you find a little anomaly in the market that you might be able to make 100k a year from, that's a 1% return on your 10 million fund. If your fund is 100k, that's 100% return, right? So being small, in some sense, was an advantage. So started off, and the, taught myself Python, and machine learning was like the big thing as well. Machine learning had really, it was the first, you know, big time machine learning was being used for image recognition, neural networks come out, you get dropout. And, you know, so this, this was the big thing that's going on at the time. So I probably spent my first three years out of Cambridge, just building neural networks, building random forests to try and predict asset prices, right, and then trade that using my own money. And that went well. And, you know, if you if you start something, and it goes well, you You try and hire more people. And the first people that came to mind was the talented people I went to college with. And so I hired some friends. And that went well and hired some more. And eventually, I kind of ran out of friends to hire. And so that was when I formed the company. And from that point on, we had our ups and we had our downs. And that was a whole long story and journey in itself. But after doing that for about eight or nine years, on my 30th birthday, which was four years ago now, I kind of took a step back to just evaluate my life, right? This is what one does when one turns 30. You know, I just heard it. I hear you. And, you know, I looked at my 20s and I loved it. It was a really special time. I was really lucky and fortunate to have worked with this amazing team, been successful, had a lot of hard times. And through the hard times, learned wisdom and then a lot of success and, you know, was able to enjoy it. And so the company was making about five million pounds a year. And it was just me and a team of, say, 15, like, Oxford and Cambridge educated mathematicians and physicists. It was like the real dream that you'd have if you wanted to start a quant trading firm. It was like...swyx [00:04:40]: Your own, all your own money?William [00:04:41]: Yeah, exactly. 
It was all the team's own money. We had no customers complaining to us about issues. There's no investors, you know, saying, you know, they don't like the risk that we're taking. We could. We could really run the thing exactly as we wanted it. It's like Susquehanna or like Rintec. Yeah, exactly. Yeah. And they're the companies that we would kind of look towards as we were building that thing out. But on my 30th birthday, I look and I say, OK, great. This thing is making as much money as kind of anyone would really need. And I thought, well, what's going to happen if we keep going in this direction? And it was clear that we would never have a kind of a big, big impact on the world. We can enrich ourselves. We can make really good money. Everyone on the team would be paid very, very well. Presumably, I can make enough money to buy a yacht or something. But this stuff wasn't that important to me. And so I felt a sort of obligation that if you have this much talent and if you have a talented team, especially as a founder, you want to be putting all that talent towards a good use. I looked at the time of like getting into crypto and I had a really strong view on crypto, which was that as far as a gambling device. This is like the most fun form of gambling invented in like ever super fun, I thought as a way to evade monetary regulations and banking restrictions. I think it's also absolutely amazing. So it has two like killer use cases, not so much banking the unbanked, but everything else, but everything else to do with like the blockchain and, and you know, web, was it web 3.0 or web, you know, that I, that didn't, it didn't really make much sense. And so instead of going into crypto, which I thought, even if I was successful, I'd end up in a lot of trouble. I thought maybe it'd be better to build something that governments wouldn't have a problem with. I knew that LLMs were like a thing. I think opening. I had said they hadn't released GPT-3 yet, but they'd said GPT-3 is so powerful. We can't release it to the world or something. Was it GPT-2? And then I started interacting with, I think Google had open source, some language models. They weren't necessarily LLMs, but they, but they were. But yeah, exactly. So I was able to play around with, but nowadays so many people have interacted with the chat GPT, they get it, but it's like the first time you, you can just talk to a computer and it talks back. It's kind of a special moment and you know, everyone who's done that goes like, wow, this is how it should be. Right. It should be like, rather than having to type on Google and search, you should just be able to ask Google a question. When I saw that I read the literature, I kind of came across the scaling laws and I think even four years ago. All the pieces of the puzzle were there, right? Google had done this amazing research and published, you know, a lot of it. Open AI was still open. And so they'd published a lot of their research. And so you really could be fully informed on, on the state of AI and where it was going. And so at that point I was confident enough, it was worth a shot. I think LLMs are going to be the next big thing. And so that's the thing I want to be building in, in that space. And I thought what's the most impactful product I can possibly build. And I thought it should be a platform. So I myself love platforms. I think they're fantastic because they open up an ecosystem where anyone can contribute to it. Right. 
So if you think of a platform like a YouTube, instead of it being like a Hollywood situation where you have to, if you want to make a TV show, you have to convince Disney to give you the money to produce it instead, anyone in the world can post any content they want to YouTube. And if people want to view it, the algorithm is going to promote it. Nowadays. You can look at creators like Mr. Beast or Joe Rogan. They would have never have had that opportunity unless it was for this platform. Other ones like Twitter's a great one, right? But I would consider Wikipedia to be a platform where instead of the Britannica encyclopedia, which is this, it's like a monolithic, you get all the, the researchers together, you get all the data together and you combine it in this, in this one monolithic source. Instead. You have this distributed thing. You can say anyone can host their content on Wikipedia. Anyone can contribute to it. And anyone can maybe their contribution is they delete stuff. When I was hearing like the kind of the Sam Altman and kind of the, the Muskian perspective of AI, it was a very kind of monolithic thing. It was all about AI is basically a single thing, which is intelligence. Yeah. Yeah. The more intelligent, the more compute, the more intelligent, and the more and better AI researchers, the more intelligent, right? They would speak about it as a kind of erased, like who can get the most data, the most compute and the most researchers. And that would end up with the most intelligent AI. But I didn't believe in any of that. I thought that's like the total, like I thought that perspective is the perspective of someone who's never actually done machine learning. Because with machine learning, first of all, you see that the performance of the models follows an S curve. So it's not like it just goes off to infinity, right? And the, the S curve, it kind of plateaus around human level performance. And you can look at all the, all the machine learning that was going on in the 2010s, everything kind of plateaued around the human level performance. And we can think about the self-driving car promises, you know, how Elon Musk kept saying the self-driving car is going to happen next year, it's going to happen next, next year. Or you can look at the image recognition, the speech recognition. You can look at. All of these things, there was almost nothing that went superhuman, except for something like AlphaGo. And we can speak about why AlphaGo was able to go like super superhuman. So I thought the most likely thing was going to be this, I thought it's not going to be a monolithic thing. That's like an encyclopedia Britannica. I thought it must be a distributed thing. And I actually liked to look at the world of finance for what I think a mature machine learning ecosystem would look like. So, yeah. So finance is a machine learning ecosystem because all of these quant trading firms are running machine learning algorithms, but they're running it on a centralized platform like a marketplace. And it's not the case that there's one giant quant trading company of all the data and all the quant researchers and all the algorithms and compute, but instead they all specialize. So one will specialize on high frequency training. Another will specialize on mid frequency. Another one will specialize on equity. Another one will specialize. And I thought that's the way the world works. That's how it is. And so there must exist a platform where a small team can produce an AI for a unique purpose. 
And they can iterate and build the best thing for that, right? And so that was the vision for Chai. So we wanted to build a platform for LLMs.Alessio [00:11:36]: That's kind of the maybe inside versus contrarian view that led you to start the company. Yeah. And then what was maybe the initial idea maze? Because if somebody told you that was the Hugging Face founding story, people might believe it. It's kind of like a similar ethos behind it. How did you land on the product feature today? And maybe what were some of the ideas that you discarded that initially you thought about?William [00:11:58]: So the first thing we built, it was fundamentally an API. So nowadays people would describe it as like agents, right? But anyone could write a Python script. They could submit it to an API. They could send it to the Chai backend and we would then host this code and execute it. So that's like the developer side of the platform. On their Python script, the interface was essentially text in and text out. An example would be the very first bot that I created. I think it was a Reddit news bot. And so it would first, it would pull the popular news. Then it would prompt whatever, like I just used some external API for like BERT or GPT-2 or whatever. Like it was a very, very small thing. And then the user could talk to it. So you could say to the bot, hi bot, what's the news today? And it would say, these are the top stories. And you could chat with it. Now four years later, that's like Perplexity or something, right? But back then the models were, first of all, like really, really dumb. You know, they had an IQ of like a four year old. And users, there really wasn't any demand or any PMF for interacting with the news. So then I was like, okay. Um. So let's make another one. And I made a bot, which was like, you could talk to it about a recipe. So you could say, I'm making eggs. Like I've got eggs in my fridge. What should I cook? And it'll say, you should make an omelet. Right. There was no PMF for that. No one used it. And so I just kept creating bots. And so every single night after work, I'd be like, okay, I like, we have AI, we have this platform. I can create any text-in, text-out sort of agent and put it on the platform. And so we just create stuff night after night. And then all the coders I knew, I would say, yeah, this is what we're going to do. And then I would say to them, look, there's this platform. You can create any like chat AI. You should put it on. And you know, everyone's like, well, chatbots are super lame. We want absolutely nothing to do with your chatbot app. No one who knew Python wanted to build on it. I'm like trying to build all these bots and no consumers want to talk to any of them. And then my sister who at the time was like just finishing college or something, I said to her, I was like, if you want to learn Python, you should just submit a bot for my platform. And she, she built a therapist bot for me. And I was like, okay, cool, a therapist bot. And then the next day I checked the performance of the app and I'm like, oh my God, we've got 20 active users. And they spent, they spent like an average of 20 minutes on the app. I was like, oh my God, what, what bot were they speaking to for an average of 20 minutes? And I looked and it was the therapist bot. And I went, oh, this is where the PMF is. There was no demand for, for recipe help. There was no demand for news.
There was no demand for dad jokes or pub quiz or fun facts or what they wanted was they wanted the therapist bot. the time I kind of reflected on that and I thought, well, if I want to consume news, the most fun thing, most fun way to consume news is like Twitter. It's not like the value of there being a back and forth, wasn't that high. Right. And I thought if I need help with a recipe, I actually just go like the New York times has a good recipe section, right? It's not actually that hard. And so I just thought the thing that AI is 10 X better at is a sort of a conversation right. That's not intrinsically informative, but it's more about an opportunity. You can say whatever you want. You're not going to get judged. If it's 3am, you don't have to wait for your friend to text back. It's like, it's immediate. They're going to reply immediately. You can say whatever you want. It's judgment-free and it's much more like a playground. It's much more like a fun experience. And you could see that if the AI gave a person a compliment, they would love it. It's much easier to get the AI to give you a compliment than a human. From that day on, I said, okay, I get it. Humans want to speak to like humans or human like entities and they want to have fun. And that was when I started to look less at platforms like Google. And I started to look more at platforms like Instagram. And I was trying to think about why do people use Instagram? And I could see that I think Chai was, was filling the same desire or the same drive. If you go on Instagram, typically you want to look at the faces of other humans, or you want to hear about other people's lives. So if it's like the rock is making himself pancakes on a cheese plate. You kind of feel a little bit like you're the rock's friend, or you're like having pancakes with him or something, right? But if you do it too much, you feel like you're sad and like a lonely person, but with AI, you can talk to it and tell it stories and tell you stories, and you can play with it for as long as you want. And you don't feel like you're like a sad, lonely person. You feel like you actually have a friend.Alessio [00:16:29]: And what, why is that? Do you have any insight on that from using it?William [00:16:33]: I think it's just the human psychology. I think it's just the idea that, with old school social media. You're just consuming passively, right? So you'll just swipe. If I'm watching TikTok, just like swipe and swipe and swipe. And even though I'm getting the dopamine of like watching an engaging video, there's this other thing that's building my head, which is like, I'm feeling lazier and lazier and lazier. And after a certain period of time, I'm like, man, I just wasted 40 minutes. I achieved nothing. But with AI, because you're interacting, you feel like you're, it's not like work, but you feel like you're participating and contributing to the thing. You don't feel like you're just. Consuming. So you don't have a sense of remorse basically. And you know, I think on the whole people, the way people talk about, try and interact with the AI, they speak about it in an incredibly positive sense. Like we get people who say they have eating disorders saying that the AI helps them with their eating disorders. People who say they're depressed, it helps them through like the rough patches. So I think there's something intrinsically healthy about interacting that TikTok and Instagram and YouTube doesn't quite tick. 
From that point on, it was about building more and more kind of like human centric AI for people to interact with. And I was like, okay, let's make a Kanye West bot, right? And then no one wanted to talk to the Kanye West bot. And I was like, ah, who's like a cool persona for teenagers to want to interact with. And I was like, I was trying to find the influencers and stuff like that, but no one cared. Like they didn't want to interact with the, yeah. And instead it was really just the special moment was when we said the realization that developers and software engineers aren't interested in building this sort of AI, but the consumers are right. And rather than me trying to guess every day, like what's the right bot to submit to the platform, why don't we just create the tools for the users to build it themselves? And so nowadays this is like the most obvious thing in the world, but when Chai first did it, it was not an obvious thing at all. Right. Right. So we took the API for let's just say it was, I think it was GPTJ, which was this 6 billion parameter open source transformer style LLM. We took GPTJ. We let users create the prompt. We let users select the image and we let users choose the name. And then that was the bot. And through that, they could shape the experience, right? So if they said this bot's going to be really mean, and it's going to be called like bully in the playground, right? That was like a whole category that I never would have guessed. Right. People love to fight. They love to have a disagreement, right? And then they would create, there'd be all these romantic archetypes that I didn't know existed. And so as the users could create the content that they wanted, that was when Chai was able to, to get this huge variety of content and rather than appealing to, you know, 1% of the population that I'd figured out what they wanted, you could appeal to a much, much broader thing. And so from that moment on, it was very, very crystal clear. It's like Chai, just as Instagram is this social media platform that lets people create images and upload images, videos and upload that, Chai was really about how can we let the users create this experience in AI and then share it and interact and search. So it's really, you know, I say it's like a platform for social AI.Alessio [00:20:00]: Where did the Chai name come from? Because you started the same path. I was like, is it character AI shortened? You started at the same time, so I was curious. The UK origin was like the second, the Chai.William [00:20:15]: We started way before character AI. And there's an interesting story that Chai's numbers were very, very strong, right? So I think in even 20, I think late 2022, was it late 2022 or maybe early 2023? Chai was like the number one AI app in the app store. So we would have something like 100,000 daily active users. And then one day we kind of saw there was this website. And we were like, oh, this website looks just like Chai. And it was the character AI website. And I think that nowadays it's, I think it's much more common knowledge that when they left Google with the funding, I think they knew what was the most trending, the number one app. And I think they sort of built that. Oh, you found the people.swyx [00:21:03]: You found the PMF for them.William [00:21:04]: We found the PMF for them. Exactly. Yeah. So I worked a year very, very hard. 
And then they, and then that was when I learned a lesson, which is that if you're VC backed and if, you know, so Chai, we'd kind of ran, we'd got to this point, I was the only person who'd invested. I'd invested maybe 2 million pounds in the business. And you know, from that, we were able to build this thing, get to say a hundred thousand daily active users. And then when character AI came along, the first version, we sort of laughed. We were like, oh man, this thing sucks. Like they don't know what they're building. They're building the wrong thing anyway, but then I saw, oh, they've raised a hundred million dollars. Oh, they've raised another hundred million dollars. And then our users started saying, oh guys, your AI sucks. Cause we were serving a 6 billion parameter model, right? How big was the model that character AI could afford to serve, right? So we would be spending, let's say we would spend a dollar per per user, right? Over the, the, you know, the entire lifetime.swyx [00:22:01]: A dollar per session, per chat, per month? No, no, no, no.William [00:22:04]: Let's say we'd get over the course of the year, we'd have a million users and we'd spend a million dollars on the AI throughout the year. Right. Like aggregated. Exactly. Exactly. Right. They could spend a hundred times that. So people would say, why is your AI much dumber than character AIs? And then I was like, oh, okay, I get it. This is like the Silicon Valley style, um, hyper scale business. And so, yeah, we moved to Silicon Valley and, uh, got some funding and iterated and built the flywheels. And, um, yeah, I, I'm very proud that we were able to compete with that. Right. So, and I think the reason we were able to do it was just customer obsession. And it's similar, I guess, to how deep seek have been able to produce such a compelling model when compared to someone like an open AI, right? So deep seek, you know, their latest, um, V2, yeah, they claim to have spent 5 million training it.swyx [00:22:57]: It may be a bit more, but, um, like, why are you making it? Why are you making such a big deal out of this? Yeah. There's an agenda there. Yeah. You brought up deep seek. So we have to ask you had a call with them.William [00:23:07]: We did. We did. We did. Um, let me think what to say about that. I think for one, they have an amazing story, right? So their background is again in finance.swyx [00:23:16]: They're the Chinese version of you. Exactly.William [00:23:18]: Well, there's a lot of similarities. Yes. Yes. I have a great affinity for companies which are like, um, founder led, customer obsessed and just try and build something great. And I think what deep seek have achieved. There's quite special is they've got this amazing inference engine. They've been able to reduce the size of the KV cash significantly. And then by being able to do that, they're able to significantly reduce their inference costs. And I think with kind of with AI, people get really focused on like the kind of the foundation model or like the model itself. And they sort of don't pay much attention to the inference. To give you an example with Chai, let's say a typical user session is 90 minutes, which is like, you know, is very, very long for comparison. Let's say the average session length on TikTok is 70 minutes. So people are spending a lot of time. And in that time they're able to send say 150 messages. That's a lot of completions, right? 
It's quite different from an open AI scenario where people might come in, they'll have a particular question in mind. And they'll ask like one question. And a few follow up questions, right? So because they're consuming, say 30 times as many requests for a chat, or a conversational experience, you've got to figure out how to get the right balance between the cost of that and the quality. And so, you know, I think with AI, it's always been the case that if you want a better experience, you can throw compute at the problem, right? So if you want a better model, you can just make it bigger. If you want it to remember better, give it a longer context. And now, what open AI is doing to great fanfare is with rejection sampling, you can generate many candidates, right? And then with some sort of reward model or some sort of scoring system, you can serve the most promising of these many candidates. And so that's kind of scaling up on the inference time compute side of things. And so for us, it doesn't make sense to think of AI as just the absolute performance. So whether it's the MMLU score or, you know, any of these benchmarks that people like to look at, if you just get that score, it doesn't really tell you anything. Because it's really like progress is made by improving the performance per dollar. And so I think that's an area where deep seek have been able to perform very, very well, surprisingly so. And so I'm very interested in what Llama 4 is going to look like. And if they're able to sort of match what deep seek have been able to achieve with this performance per dollar gain.Alessio [00:25:59]: Before we go into the inference, some of the deeper stuff, can you give people an overview of like some of the numbers? So I think last I checked, you have like 1.4 million daily active now. It's like over 22 million of revenue. So it's quite a business.William [00:26:12]: Yeah, I think we grew by a factor of, you know, users grew by a factor of three last year. Revenue over doubled. You know, it's very exciting. We're competing with some really big, really well funded companies. Character AI got this, I think it was almost a $3 billion valuation. And they have 5 million DAU is a number that I last heard. Talkie, which is a Chinese built app owned by a company called MiniMax. They're incredibly well funded. And these companies didn't grow by a factor of three last year. Right. And so when you've got this company and this team that's able to keep building something that gets users excited, and they want to tell their friend about it, and then they want to come and they want to stick on the platform. I think that's very special. And so last year was a great year for the team. And yeah, I think the numbers reflect the hard work that we put in. And then fundamentally, the quality of the app, the quality of the content, the quality of the AI is the quality of the experience that you have. You actually published your DAU growth chart, which is unusual. And I see some inflections. Like, it's not just a straight line. There's some things that actually inflect. Yes. What were the big ones? Cool. That's a great, great, great question. Let me think of a good answer. I'm basically looking to annotate this chart, which doesn't have annotations on it. Cool.
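A minimal sketch of the inference-time trick William gestures at above: generate several candidate replies and let a reward model pick the one to serve. The generation and scoring functions below are illustrative placeholders, not Chai's or OpenAI's actual stack.

```python
import random

def generate_candidates(prompt, n=4):
    # Placeholder for n independent samples from a chat LLM
    # (in practice: n calls to your inference server with temperature > 0).
    return [f"candidate reply {i} to: {prompt}" for i in range(n)]

def reward_score(prompt, reply):
    # Placeholder reward model: any scalar scorer trained on preference
    # signals (thumbs up/down, retries, how long the conversation ran).
    return random.random()

def best_of_n(prompt, n=4):
    candidates = generate_candidates(prompt, n)
    # Serve the candidate the reward model scores highest.
    return max(candidates, key=lambda reply: reward_score(prompt, reply))

if __name__ == "__main__":
    print(best_of_n("Hi bot, cheer me up?"))
```

The trade-off he describes follows directly: each extra candidate multiplies inference cost, so the scorer has to buy enough quality to justify it.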
The first thing I would say is this is, I think the most important thing to know about success is that success is born out of failures. Right? Through failures that we learn. You know, if you think something's a good idea, and you do and it works, great, but you didn't actually learn anything, because everything went exactly as you imagined. But if you have an idea, you think it's going to be good, you try it, and it fails. There's a gap between the reality and expectation. And that's an opportunity to learn. The flat periods, that's us learning. And then the up periods is that's us reaping the rewards of that. So I think the big, of the growth chart of just 2024, I think the first thing that really kind of put a dent in our growth was our backend. So we just reached this scale. So we'd, from day one, we'd built on top of Google's GCP, which is Google's cloud platform. And they were fantastic. We used them when we had one daily active user, and they worked pretty good all the way up till we had about 500,000. It was never the cheapest, but from an engineering perspective, man, that thing scaled insanely good. Like, not Vertex? Not Vertex. Like GKE, that kind of stuff? We use Firebase. So we use Firebase. I'm pretty sure we're the biggest user ever on Firebase. That's expensive. Yeah, we had calls with engineers, and they're like, we wouldn't recommend using this product beyond this point, and you're 3x over that. So we pushed Google to their absolute limits. You know, it was fantastic for us, because we could focus on the AI. We could focus on just adding as much value as possible. But then what happened was, after 500,000, just the thing, the way we were using it, and it would just, it wouldn't scale any further. And so we had a really, really painful, at least three-month period, as we kind of migrated between different services, figuring out, like, what requests do we want to keep on Firebase, and what ones do we want to move on to something else? And then, you know, making mistakes. And learning things the hard way. And then after about three months, we got that right. So that, we would then be able to scale to the 1.5 million DAU without any further issues from the GCP. But what happens is, if you have an outage, new users who go on your app experience a dysfunctional app, and then they're going to exit. And so your next day, the key metrics that the app stores track are going to be something like retention rates, money spent, and the star, like, the rating that they give you. In the app store. In the app store, yeah. Tyranny. So if you're ranked top 50 in entertainment, you're going to acquire a certain rate of users organically. If you go in and have a bad experience, it's going to tank where you're positioned in the algorithm. And then it can take a long time to kind of earn your way back up, at least if you wanted to do it organically. If you throw money at it, you can jump to the top. And I could talk about that. But broadly speaking, if we look at 2024, the first kink in the graph was outages due to hitting 500k DAU. The backend didn't want to scale past that. So then we just had to do the engineering and build through it. Okay, so we built through that, and then we get a little bit of growth. And so, okay, that's feeling a little bit good. I think the next thing, I think it's, I'm not going to lie, I have a feeling that when Character AI got... I was thinking.
I think so. I think... So the Character AI team fundamentally got acquired by Google. And I don't know what they changed in their business. I don't know if they dialed down that ad spend. Products don't change, right? Products just what it is. I don't think so. Yeah, I think the product is what it is. It's like maintenance mode. Yes. I think the issue that people, you know, some people may think this is an obvious fact, but running a business can be very competitive, right? Because other businesses can see what you're doing, and they can imitate you. And then there's this... There's this question of, if you've got one company that's spending $100,000 a day on advertising, and you've got another company that's spending zero, if you consider market share, and if you're considering new users which are entering the market, the guy that's spending $100,000 a day is going to be getting 90% of those new users. And so I have a suspicion that when the founders of Character AI left, they dialed down their spending on user acquisition. And I think that kind of gave oxygen to like the other apps. And so Chai was able to then start growing again in a really healthy fashion. I think that's kind of like the second thing. I think a third thing is we've really built a great data flywheel. Like the AI team sort of perfected their flywheel, I would say, in end of Q2. And I could speak about that at length. But fundamentally, the way I would describe it is when you're building anything in life, you need to be able to evaluate it. And through evaluation, you can iterate, we can look at benchmarks, and we can say the issues with benchmarks and why they may not generalize as well as one would hope in the challenges of working with them. But something that works incredibly well is getting feedback from humans. And so we built this thing where anyone can submit a model to our developer backend, and it gets put in front of 5000 users, and the users can rate it. And we can then have a really accurate ranking of like which model, or users finding more engaging or more entertaining. And it gets, you know, it's at this point now, where every day we're able to, I mean, we evaluate between 20 and 50 models, LLMs, every single day, right. So even though we've got only got a team of, say, five AI researchers, they're able to iterate a huge quantity of LLMs, right. So our team ships, let's just say minimum 100 LLMs a week is what we're able to iterate through. Now, before that moment in time, we might iterate through three a week, we might, you know, there was a time when even doing like five a month was a challenge, right? By being able to change the feedback loops to the point where it's not, let's launch these three models, let's do an A-B test, let's assign, let's do different cohorts, let's wait 30 days to see what the day 30 retention is, which is the kind of the, if you're doing an app, that's like A-B testing 101 would be, do a 30-day retention test, assign different treatments to different cohorts and come back in 30 days. So that's insanely slow. That's just, it's too slow. And so we were able to get that 30-day feedback loop all the way down to something like three hours. And when we did that, we could really, really, really perfect techniques like DPO, fine tuning, prompt engineering, blending, rejection sampling, training a reward model, right, really successfully, like boom, boom, boom, boom, boom. And so I think in Q3 and Q4, we got, the amount of AI improvements we got was like astounding. 
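A rough sketch of the shortened feedback loop described here, under stated assumptions: each candidate LLM gets a random slice of live traffic, an early engagement proxy such as messages per session stands in for 30-day retention, and models are ranked on that proxy after a few hours. The metric, traffic split, and names below are illustrative, not Chai's actual pipeline.

```python
import random
from dataclasses import dataclass

@dataclass
class CandidateModel:
    name: str
    sessions: int = 0
    messages: int = 0  # early engagement proxy instead of day-30 retention

    @property
    def avg_messages(self) -> float:
        return self.messages / self.sessions if self.sessions else 0.0

def route_session(candidates):
    # Assign each new session uniformly at random to one candidate model.
    return random.choice(candidates)

def simulate_feedback_window(candidates, n_sessions=5000):
    for _ in range(n_sessions):
        model = route_session(candidates)
        model.sessions += 1
        # Placeholder for real user behaviour logged over a few hours.
        model.messages += random.randint(1, 50)

def rank(candidates):
    return sorted(candidates, key=lambda m: m.avg_messages, reverse=True)

if __name__ == "__main__":
    pool = [CandidateModel(f"llm-{i}") for i in range(20)]
    simulate_feedback_window(pool)
    for model in rank(pool)[:3]:
        print(model.name, round(model.avg_messages, 1))
```

The point of the design is the turnaround time: because the ranking only needs a few hours of traffic per candidate, dozens of fine-tuning, DPO, or prompt variants can be tried per week instead of waiting on 30-day retention cohorts.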
It was getting to the point, I thought like how much more, how much more edge is there to be had here? But the team just could keep going and going and going. That was like number three for the inflection point.swyx [00:34:53]: There's a fourth?William [00:34:54]: The important thing about the third one is if you go on our Reddit or you talk to users of AI, there's like a clear date. It's like somewhere in October or something. The users, they flipped. Before October, the users... The users would say character AI is better than you, for the most part. Then from October onwards, they would say, wow, you guys are better than character AI. And that was like a really clear positive signal that we'd sort of done it. And I think people, you can't cheat consumers. You can't trick them. You can't b******t them. They know, right? If you're going to spend 90 minutes on a platform, and with apps, there's the barriers to switching is pretty low. Like you can try character AI for a day. If you get bored, you can try Chai. If you get bored of Chai, you can go back to character. So the users, the loyalty is not strong, right? What keeps them on the app is the experience. If you deliver a better experience, they're going to stay and they can tell. So the fourth one was we were fortunate enough to get this hire. We had hired one really talented engineer. And then they said, oh, at my last company, we had a head of growth. He was really, really good. And he was the head of growth for ByteDance for two years. Would you like to speak to him? And I was like, yes. Yes, I think I would. And so I spoke to him. And he just blew me away with what he knew about user acquisition. You know, it was like a 3D chessswyx [00:36:21]: sort of thing. You know, as much as, as I know about AI. Like ByteDance as in TikTok US. Yes.William [00:36:26]: Not ByteDance as other stuff. Yep. He was interviewing us as we were interviewing him. Right. And so pick up options. Yeah, exactly. And so he was kind of looking at our metrics. And he was like, I saw him get really excited when he said, guys, you've got a million daily active users and you've done no advertising. I said, correct. And he was like, that's unheard of. He's like, I've never heard of anyone doing that. And then he started looking at our metrics. And he was like, if you've got all of this organically, if you start spending money, this is going to be very exciting. I was like, let's give it a go. So then he came in, we've just started ramping up the user acquisition. So that looks like spending, you know, let's say we're spending, we started spending $20,000 a day, it looked very promising at 20,000. Right now we're spending $40,000 a day on user acquisition. That's still only half of what like character AI or Talkie may be spending. But from that, it's sort of, we were growing at a rate of maybe say, 2x a year. And that got us growing at a rate of 3x a year. So I'm growing, I'm evolving more and more to like a Silicon Valley style hyper growth, like, you know, you build something decent, and then you canswyx [00:37:33]: slap on a huge...
You did the important thing, you did the product first.William [00:37:36]: Of course, but then you can slap on like, like the rocket or the jet engine or something, which is just this cash in, you pour in as much cash, you buy a lot of ads, and your growth is faster.swyx [00:37:48]: Not to, you know, I'm just kind of curious what's working right now versus what surprisinglyWilliam [00:37:52]: doesn't work. Oh, there's a long, long list of surprising stuff that doesn't work. Yeah. The surprising thing, like the most surprising thing, what doesn't work is almost everything doesn't work. That's what's surprising. And I'll give you an example. So like a year and a half ago, I was working at a company, we were super excited by audio. I was like, audio is going to be the next killer feature, we have to get in the app. And I want to be the first. So everything Chai does, I want us to be the first. We may not be the company that's strongest at execution, but we can always be theswyx [00:38:22]: most innovative. Interesting. Right? So we can... You're pretty strong at execution.William [00:38:26]: We're much stronger, we're much stronger. A lot of the reason we're here is because we were first. If we launched today, it'd be so hard to get the traction. Because it's like to get the flywheel, to get the users, to build a product people are excited about. If you're first, people are naturally excited about it. But if you're fifth or 10th, man, you've got to beswyx [00:38:46]: insanely good at execution. So you were first with voice? We were first. We were first. I only knowWilliam [00:38:51]: when character launched voice. They launched it, I think they launched it at least nine months after us. Okay. Okay. But the team worked so hard for it. At the time we did it, latency is a huge problem. Cost is a huge problem. Getting the right quality of the voice is a huge problem. Right? Then there's this user interface and getting the right user experience. Because you don't just want it to start blurting out. Right? You want to kind of activate it. But then you don't have to keep pressing a button every single time. There's a lot that goes into getting a really smooth audio experience. So we went ahead, we invested the three months, we built it all. And then when we did the A-B test, there was like, no change in any of the numbers. And I was like, this can't be right, there must be a bug. And we spent like a week just checking everything, checking again, checking again. And it was like, the users just did not care. And it was something like only 10 or 15% of users even click the button to like, they wanted to engage the audio. And they would only use it for 10 or 15% of the time. So if you do the math, if it's just like something that one in seven people use it for one seventh of their time. You've changed like 2% of the experience. So even if that that 2% of the time is like insanely good, it doesn't translate much when you look at the retention, when you look at the engagement, and when you look at the monetization rates. So audio did not have a big impact. I'm pretty big on audio. But yeah, I like it too. But it's, you know, so a lot of the stuff which I do, I'm a big, you can have a theory. And you resist. Yeah. Exactly, exactly. So I think if you want to make audio work, it has to be a unique, compelling, exciting experience that they can't have anywhere else.swyx [00:40:37]: It could be your models, which just weren't good enough.William [00:40:39]: No, no, no, they were great. 
Oh, yeah, they were very good. it was like, it was kind of like just the, you know, if you listen to like an audible or Kindle, or something like, you just hear this voice. And it's like, you don't go like, wow, this is this is special, right? It's like a convenience thing. But the idea is that if you can, if Chai is the only platform, like, let's say you have a Mr. Beast, and YouTube is the only platform you can use to make audio work, then you can watch a Mr. Beast video. And it's the most engaging, fun video that you want to watch, you'll go to a YouTube. And so it's like for audio, you can't just put the audio on there. And people go, oh, yeah, it's like 2% better. Or like, 5% of users think it's 20% better, right? It has to be something that the majority of people, for the majority of the experience, go like, wow, this is a big deal. That's the features you need to be shipping. If it's not going to appeal to the majority of people, for the majority of the experience, and it's not a big deal, it's not going to move you. Cool. So you killed it. I don't see it anymore. Yep. So I love this. The longer, it's kind of cheesy, I guess, but the longer I've been working at Chai, and I think the team agrees with this, all the platitudes, at least I thought they were platitudes, that you would get from like the Steve Jobs, which is like, build something insanely great, right? Or be maniacally focused, or, you know, the most important thing is saying no to, not to work on. All of these sort of lessons, they just are like painfully true. They're painfully true. So now I'm just like, everything I say, I'm either quoting Steve Jobs or Zuckerberg. I'm like, guys, move fast and break free.swyx [00:42:10]: You've jumped the Apollo to cool it now.William [00:42:12]: Yeah, it's just so, everything they said is so, so true. The turtle neck. Yeah, yeah, yeah. Everything is so true.swyx [00:42:18]: This last question on my side, and I want to pass this to Alessio, is on just, just multi-modality in general. This actually comes from Justine Moore from A16Z, who's a friend of ours. And a lot of people are trying to do voice image video for AI companions. Yes. You just said voice didn't work. Yep. What would make you revisit?William [00:42:36]: So Steve Jobs, he was very, listen, he was very, very clear on this. There's a habit of engineers who, once they've got some cool technology, they want to find a way to package up the cool technology and sell it to consumers, right? That does not work. So you're free to try and build a startup where you've got your cool tech and you want to find someone to sell it to. That's not what we do at Chai. At Chai, we start with the consumer. What does the consumer want? What is their problem? And how do we solve it? So right now, the number one problems for the users, it's not the audio. That's not the number one problem. It's not the image generation either. That's not their problem either. The number one problem for users in AI is this. All the AI is being generated by middle-aged men in Silicon Valley, right? That's all the content. You're interacting with this AI. You're speaking to it for 90 minutes on average. It's being trained by middle-aged men. The guys out there, they're out there. They're talking to you. They're talking to you. They're like, oh, what should the AI say in this situation, right? What's funny, right? What's cool? What's boring? What's entertaining? That's not the way it should be. The way it should be is that the users should be creating the AI, right? 
And so the way I speak about it is this. Chai, we have this AI engine which sits atop a thin layer of UGC. So the thin layer of UGC is absolutely essential, right? It's just prompts. It's just an image. It's just a name. It's like we've done 1% of what we could do. So we need to keep thickening up that layer of UGC. It must be the case that the users can train the AI. And if reinforcement learning is powerful and important, they have to be able to do that. And so it's got to be the case that there exists, you know, I say to the team, just as Mr. Beast is able to spend 100 million a year or whatever it is on his production company, and he's got a team building the content, which then he shares on the YouTube platform. Until there's a team that's earning 100 million a year or spending 100 million on the content that they're producing for the Chai platform, we're not finished, right? So that's the problem. That's what we're excited to build. And getting too caught up in the tech, I think is a fool's errand. It does not work.Alessio [00:44:52]: As an aside, I saw the Beast Games thing on Amazon Prime. It's not doing well. And I'mswyx [00:44:56]: curious. It's kind of like, I mean, the audience rating is high. The Rotten Tomatoes sucks, but the audience rating is high.Alessio [00:45:02]: But it's not like in the top 10. I saw it dropped off of like the... Oh, okay. Yeah, that one I don't know. I'm curious, like, you know, it's kind of like similar content, but different platform. And then going back to like, some of what you were saying is like, you know, people come to ChaiWilliam [00:45:13]: expecting some type of content. Yeah, I think it's something that's interesting to discuss is like, is moats. And what is the moat? And so, you know, if you look at a platform like YouTube, the moat, I think, first is really in the ecosystem. And the ecosystem is comprised of you have the content creators, you have the users, the consumers, and then you have the algorithms. And so this, this creates a sort of a flywheel where the algorithms are able to be trained on the users, and the users data, the recommend systems can then feed information to the content creators. So Mr. Beast, he knows which thumbnail does the best. He knows the first 10 seconds of the video has to be this particular way. And so his content is super optimized for the YouTube platform. So that's why it doesn't do well on Amazon. If he wants to do well on Amazon, how many videos has he created on the YouTube platform? By thousands, 10s of 1000s, I guess, he needs to get those iterations in on the Amazon. So at Chai, I think it's all about how can we get the most compelling, rich user generated content, stick that on top of the AI engine, the recommender systems, in such that we get this beautiful data flywheel, more users, better recommendations, more creative, more content, more users.Alessio [00:46:34]: You mentioned the algorithm, you have this idea of the Chaiverse on Chai, and you have your own kind of like LMSYS-like ELO system. Yeah, what are things that your models optimize for, like your users optimize for, and maybe talk about how you build it, how people submit models?William [00:46:49]: So Chaiverse is what I would describe as a developer platform. More often when we're speaking about Chai, we're thinking about the Chai app.
And the Chai app is really this product for consumers. And so consumers can come on the Chai app, they can interact with our AI, and they can interact with other UGC. And it's really just these kind of bots. And it's a thin layer of UGC. Okay. Our mission is not to just have a very thin layer of UGC. Our mission is to have as much UGC as possible. So we must have, I don't want people at Chai training the AI. I want people, not middle aged men, building AI. I want everyone building the AI, as many people building the AI as possible. Okay, so what we built was we built Chaiverse. And Chaiverse is kind of, it's kind of like a prototype, is the way to think about it. And it started with this, this observation that, well, how many models get submitted into Hugging Face a day? It's hundreds, it's hundreds, right? So there's hundreds of LLMs submitted each day. Now consider that, what does it take to build an LLM? It takes a lot of work, actually. It's like someone devoted several hours of compute, several hours of their time, prepared a data set, launched it, ran it, evaluated it, submitted it, right? So there's a lot of, there's a lot of work that's going into that. So what we did was we said, well, why can't we host their models for them and serve them to users? And then what would that look like? The first issue is, well, how do you know if a model is good or not? Like, we don't want to serve users the crappy models, right? So what we would do is we would, I love the LMSYS style. I think it's really cool. It's really simple. It's a very intuitive thing, which is you simply present the users with two completions. You can say, look, this is from model A. This is from model B, which is better. And so if someone submits a model to Chaiverse, what we do is we spin up a GPU. We download the model. We're going to now host that model on this GPU. And we're going to start routing traffic to it. And we're going to send, we think it takes about 5,000 completions to get an accurate signal. That's roughly what LMSYS does. And from that, we're able to get an accurate ranking of which models people are finding entertaining and which models are not entertaining. If you look at the bottom 80%, they'll suck. You can just disregard them. They totally suck. Then when you get the top 20%, you know you've got a decent model, but you can break it down into more nuance. There might be one that's really descriptive. There might be one that's got a lot of personality to it. There might be one that's really illogical. Then the question is, well, what do you do with these top models? From that, you can do more sophisticated things. You can try and do like a routing thing where you say for a given user request, we're going to try and predict which of these end models that users enjoy the most. That turns out to be pretty expensive and not a huge source of like edge or improvement. Something that we love to do at Chai is blending, which is, you know, it's the simplest way to think about it is you're going to end up, and you're going to pretty quickly see you've got one model that's really smart, one model that's really funny. How do you get the user an experience that is both smart and funny?
Well, just 50% of the requests, you can serve them the smart model, 50% of the requests, you serve them the funny model. Just a random 50%? Just a random, yeah. And then... That's blending? That's blending. You can do more sophisticated things on top of that, as in all things in life, but the 80-20 solution, if you just do that, you get a pretty powerful effect out of the gate. Random number generator. I think it's like the robustness of randomness. Random is a very powerful optimization technique, and it's a very robust thing. So you can explore a lot of the space very efficiently. There's one thing that's really, really important to share, and this is the most exciting thing for me, is after you do the ranking, you get an ELO score, and you can track a user's first join date, the first date they submit a model to Chaiverse, they almost always get a terrible ELO, right? So let's say the first submission they get an ELO of 1,100 or 1,000 or something, and you can see that they iterate and they iterate and iterate, and it will be like, no improvement, no improvement, no improvement, and then boom. Do you give them any data, or do you have to come up with this themselves? We do, we do, we do, we do. We try and strike a balance between giving them data that's very useful, you've got to be compliant with GDPR, which is like, you have to work very hard to preserve the privacy of users of your app. So we try to give them as much signal as possible, to be helpful. The minimum is we're just going to give you a score, right? That's the minimum. But that alone is people can optimize a score pretty well, because they're able to come up with theories, submit it, does it work? No. A new theory, does it work? No. And then boom, as soon as they figure something out, they keep it, and then they iterate, and then boom, they figure something out, and they keep it.Alessio [00:51:46]: Last year, you had this post on your blog, crowdsourcing the leap to the 10 trillion parameter AGI, and you call it a mixture of experts, recommenders. Yep. Any insights?William [00:51:58]: Updated thoughts, 12 months later? I think the odds, the timeline for AGI has certainly been pushed out, right? Now, this is in, I'm a controversial person, I don't know, like, I just think... You don't believe in scaling laws, you think AGI is further away. I think it's an S-curve. I think everything's an S-curve. And I think that the models have proven to just be far worse at reasoning than people sort of thought. And I think whenever I hear people talk about LLMs as reasoning engines, I sort of cringe a bit. I don't think that's what they are. I think of them more as like a simulator. I think of them as like a, right? So they get trained to predict the next most likely token. It's like a physics simulation engine. So you get these like games where you can like construct a bridge, and you drop a car down, and then it predicts what should happen. And that's really what LLMs are doing. It's not so much that they're reasoning, it's more that they're just doing the most likely thing. So fundamentally, the ability for people to add in intelligence, I think is very limited. What most people would consider intelligence, I think the AI is not a crowdsourcing problem, right? Now with Wikipedia, Wikipedia crowdsources knowledge. It doesn't crowdsource intelligence. So it's a subtle distinction. AI is fantastic at knowledge. I think it's weak at intelligence.
And a lot, it's easy to conflate the two because if you ask it a question and it gives you, you know, if you said, who was the seventh president of the United States, and it gives you the correct answer, I'd say, well, I don't know the answer to that. And you can conflate that with intelligence. But really, that's a question of knowledge. And knowledge is really this thing about saying, how can I store all of this information? And then how can I retrieve something that's relevant? Okay, they're fantastic at that. They're fantastic at storing knowledge and retrieving the relevant knowledge. They're superior to humans in that regard. And so I think we need to come up for a new word. How does one describe AI should contain more knowledge than any individual human? It should be more accessible than any individual human. That's a very powerful thing. That's superswyx [00:54:07]: powerful. But what words do we use to describe that? We had a previous guest on Exa AI that does search. And he tried to coin super knowledge as the opposite of super intelligence.William [00:54:20]: Exactly. I think super knowledge is a more accurate word for it.swyx [00:54:24]: You can store more things than any human can.William [00:54:26]: And you can retrieve it better than any human can as well. And I think it's those two things combined that's special. I think that thing will exist. That thing can be built. And I think you can start with something that's entertaining and fun. And I think, I often think it's like, look, it's going to be a 20 year journey. And we're in like, year four, or it's like the web. And this is like 1998 or something. You know, you've got a long, long way to go before the Amazon.coms are like these huge, multi trillion dollar businesses that every single person uses every day. And so AI today is very simplistic. And it's fundamentally the way we're using it, the flywheels, and this ability for how can everyone contribute to it to really magnify the value that it brings. Right now, like, I think it's a bit sad. It's like, right now you have big labs, I'm going to pick on open AI. And they kind of go to like these human labelers. And they say, we're going to pay you to just label this like subset of questions that we want to get a really high quality data set, then we're going to get like our own computers that are really powerful. And that's kind of like the thing. For me, it's so much like Encyclopedia Britannica. It's like insane. All the people that were interested in blockchain, it's like, well, this is this is what needs to be decentralized, you need to decentralize that thing. Because if you distribute it, people can generate way more data in a distributed fashion, way more, right? You need the incentive. Yeah, of course. Yeah. But I mean, the, the, that's kind of the exciting thing about Wikipedia was it's this understanding, like the incentives, you don't need money to incentivize people. You don't need dog coins. No. Sometimes, sometimes people get the satisfaction fro
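To round out the technical thread of this interview, here is a toy version of the two Chaiverse mechanics William described earlier: an Elo update driven by head-to-head user preferences between submitted models, and the random 50/50 blending trick for serving more than one model in a single conversation. The K-factor, starting ratings, and model names are assumptions for illustration only, not Chai's actual system.

```python
import random

K = 32  # illustrative K-factor
ratings = {"smart-model": 1000.0, "funny-model": 1000.0, "new-submission": 1000.0}

def expected(a, b):
    # Standard Elo expected score of model a beating model b.
    return 1.0 / (1.0 + 10 ** ((ratings[b] - ratings[a]) / 400))

def record_preference(winner, loser):
    # Called whenever a user picks one model's completion over the other's.
    e_win = expected(winner, loser)
    ratings[winner] += K * (1 - e_win)
    ratings[loser] -= K * (1 - e_win)

def pick_model_for_request(blend):
    # "Blending": route each request to a random model from the blend, so a
    # single conversation mixes, say, a smart model and a funny model.
    return random.choice(blend)

if __name__ == "__main__":
    for _ in range(5000):  # roughly the 5,000 comparisons cited for a usable signal
        a, b = random.sample(list(ratings), 2)
        winner, loser = (a, b) if random.random() < 0.5 else (b, a)
        record_preference(winner, loser)
    print(ratings)
    print(pick_model_for_request(["smart-model", "funny-model"]))
```

In practice the preference signal would come from real users choosing between two completions, and developers would only see their aggregate score, which matches the minimal feedback loop described in the conversation.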

The Modern Acre | Ag Built Different
386: The First Startup Incubator Facility Dedicated to Ag Robotics

The Modern Acre | Ag Built Different

Play Episode Listen Later Jan 14, 2025 31:03


Danny Bernstein is the Managing Partner of Reservoir Ventures and the CEO of the Reservoir, an ecosystem of nonprofit and for-profit ventures tackling California's most urgent challenges and opportunities. Before founding the Reservoir, Danny spent 20 years in Silicon Valley leading business development, partnerships, and developer programs at Google and Microsoft. At Google, he worked across products like Search, Chrome, Firebase, and Google Identity after the acquisition of Meebo, a Web 2.0 startup that was sold to Google in 2012. At Microsoft, he led critical product lines for Microsoft Teams. — This episode is presented by MyLand. Learn more HERE. — Links The Reservoir - https://www.reservoir.co Danny on LinkedIn - https://www.linkedin.com/in/dannybernstein/ Join the Co-op - https://themodernacre.supercast.com Subscribe to the Newsletter - https://themodernacre.substack.com

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Bolt.new, Flow Engineering for Code Agents, and >$8m ARR in 2 months as a Claude Wrapper

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Dec 2, 2024 98:39


The full schedule for Latent Space LIVE! at NeurIPS has been announced, featuring Best of 2024 overview talks for the AI Startup Landscape, Computer Vision, Open Models, Transformers Killers, Synthetic Data, Agents, and Scaling, and speakers from Sarah Guo of Conviction, Roboflow, AI2/Meta, Recursal/Together, HuggingFace, OpenHands and SemiAnalysis. Join us for the IRL event/Livestream! Alessio will also be holding a meetup at AWS Re:Invent in Las Vegas this Wednesday. See our new Events page for dates of AI Engineer Summit, Singapore, and World's Fair in 2025. LAST CALL for questions for our big 2024 recap episode! Submit questions and messages on Speakpipe here for a chance to appear on the show!When we first observed that GPT Wrappers are Good, Actually, we did not even have Bolt on our radar. Since we recorded our Anthropic episode discussing building Agents with the new Claude 3.5 Sonnet, Bolt.new (by Stackblitz) has easily cleared the $8m ARR bar, repeating and accelerating its initial $4m feat.There are very many AI code generators and VS Code forks out there, but Bolt probably broke through initially because of its incredible zero shot low effort app generation:But as we explain in the pod, Bolt also emphasized deploy (Netlify)/ backend (Supabase)/ fullstack capabilities on top of Stackblitz's existing WebContainer full-WASM-powered-developer-environment-in-the-browser tech. Since then, the team has been shipping like mad (with weekly office hours), with bugfixing, full screen, multi-device, long context, diff based edits (using speculative decoding like we covered in Inference, Fast and Slow).All of this has captured the imagination of low/no code builders like Greg Isenberg and many others on YouTube/TikTok/Reddit/X/Linkedin etc:Just as with Fireworks, our relationship with Bolt/Stackblitz goes a bit deeper than normal - swyx advised the launch and got a front row seat to this epic journey, as well as demoed it with Realtime Voice at the recent OpenAI Dev Day. So we are very proud to be the first/closest to tell the full open story of Bolt/Stackblitz!Flow Engineering + Qodo/AlphaCodium UpdateIn year 2 of the pod we have been on a roll getting former guests to return as guest cohosts (Harrison Chase, Aman Sanger, Jon Frankle), and it was a pleasure to catch Itamar Friedman back on the pod, giving us an update on all things Qodo and Testing Agents from our last catchup a year and a half ago:Qodo (they renamed in September) went viral in early January this year with AlphaCodium (paper here, code here) beating DeepMind's AlphaCode with high efficiency:With a simple problem solving code agent:* The first step is to have the model reason about the problem. They describe it using bullet points and focus on the goal, inputs, outputs, rules, constraints, and any other relevant details.* Then, they make the model reason about the public tests and come up with an explanation of why the input leads to that particular output. * The model generates two to three potential solutions in text and ranks them in terms of correctness, simplicity, and robustness. * Then, it generates more diverse tests for the problem, covering cases not part of the original public tests. * Iteratively, pick a solution, generate the code, and run it on a few test cases. 
* If the tests fail, improve the code and repeat the process until the code passes every test.swyx has previously written similar thoughts on types vs tests for putting bounds on program behavior, but AlphaCodium extends this to AI generated tests and code.More recently, Itamar has also shown that AlphaCodium's techniques also extend well to the o1 models:Making Flow Engineering a useful technique to improve code model performance on every model. This is something we see AI Engineers uniquely well positioned to do compared to ML Engineers/Researchers.Full Video PodcastLike and subscribe!Show Notes* Itamar* Qodo* First episode* Eric* Bolt* StackBlitz* Thinkster* AlphaCodium* WebContainersChapters* 00:00:00 Introductions & Updates* 00:06:01 Generic vs. Specific AI Agents* 00:07:40 Maintaining vs Creating with AI* 00:17:46 Human vs Agent Computer Interfaces* 00:20:15 Why Docker doesn't work for Bolt* 00:24:23 Creating Testing and Code Review Loops* 00:28:07 Bolt's Task Breakdown Flow* 00:31:04 AI in Complex Enterprise Environments* 00:41:43 AlphaCodium* 00:44:39 Strategies for Breaking Down Complex Tasks* 00:45:22 Building in Open Source* 00:50:35 Choosing a product as a founder* 00:59:03 Reflections on Bolt Success* 01:06:07 Building a B2C GTM* 01:18:11 AI Capabilities and Pricing Tiers* 01:20:28 What makes Bolt unique* 01:23:07 Future Growth and Product Development* 01:29:06 Competitive Landscape in AI Engineering* 01:30:01 Advice to Founders and Embracing AI* 01:32:20 Having a baby and completing an Iron ManTranscriptAlessio [00:00:00]: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.Swyx [00:00:12]: Hey, and today we're still in our sort of makeshift in-between studio, but we're very delighted to have a former returning guest host, Itamar. Welcome back.Itamar [00:00:21]: Great to be here after a year or more. Yeah, a year and a half.Swyx [00:00:24]: You're one of our earliest guests on Agents. Now you're CEO co-founder of Kodo. Right. Which has just been renamed. You also raised a $40 million Series A, and we can get caught up on everything, but we're also delighted to have our new guest, Eric. Welcome.Eric [00:00:42]: Thank you. Excited to be here. Should I say Bolt or StackBlitz?Swyx [00:00:45]: Like, is it like its own company now or?Eric [00:00:47]: Yeah. Bolt's definitely bolt.new. That's the thing that we're probably the most known for, I imagine, at this point.Swyx [00:00:54]: Which is ridiculous to say because you were working at StackBlitz for so long.Eric [00:00:57]: Yeah. I mean, within a week, we were doing like double the amount of traffic. And StackBlitz had been online for seven years, and we were like, what? But anyways, yeah. So we're StackBlitz, the company behind bolt.new. If you've heard of bolt.new, that's our stuff. Yeah.Swyx [00:01:12]: Yeah.Itamar [00:01:13]: Excellent. I see, by the way, that the founder mode, you need to know to capture opportunities. So kudos on doing that, right? You're working on some technology, and then suddenly you can exploit that to a new world. Yeah.Eric [00:01:24]: Totally. And I think, well, not to jump, but 100%, I mean, a couple of months ago, we had the idea for Bolt earlier this year, but we haven't really shared this too much publicly. 
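A compressed sketch of the iterate-against-tests loop in the AlphaCodium flow outlined in the show notes above. The `llm_generate_code` and `llm_improve` calls are stand-ins for whatever model API you use, and the public tests are reduced to plain input/output pairs; this is not Qodo's actual implementation.

```python
def llm_generate_code(problem: str) -> str:
    # Stand-in for a model call that drafts a solution after reasoning
    # about the problem description and the public tests.
    return "def solve(x):\n    return x * 2\n"

def llm_improve(code: str, failures: list) -> str:
    # Stand-in for a model call that revises the code given failing tests.
    return code

def run_tests(code: str, tests: list) -> list:
    namespace = {}
    exec(code, namespace)  # fine for a toy sketch, never for untrusted code
    solve = namespace["solve"]
    return [(inp, expected) for inp, expected in tests if solve(inp) != expected]

def iterate_until_green(problem: str, tests: list, max_iters: int = 5) -> str:
    code = llm_generate_code(problem)
    for _ in range(max_iters):
        failures = run_tests(code, tests)
        if not failures:  # every public (and generated) test passes
            return code
        code = llm_improve(code, failures)
    return code

if __name__ == "__main__":
    public_tests = [(1, 2), (3, 6)]
    print(iterate_until_green("double the input", public_tests))
```

The value of the flow is in the loop, not any single generation: candidate solutions are cheap, and the tests, both the provided ones and the model-generated ones, act as the filter that decides when to stop.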
But we actually had tried to build it with some of those state-of-the-art models back in January, February, you can kind of imagine which, and they just weren't good enough to actually do the code generation where the code was accurate and it was fast and whatever have you without a ton of like rag, but then there was like issues with that. So we put it on the shelf and then we got kind of a sneak peek of some of the new models that have come out in the past couple of months now. And so once we saw that, once we actually saw the code gen from it, we were like, oh my God, like, okay, we can build a product around this. And so that was really the impetus of us building the thing. But with that, it was StackBlitz, the core StackBlitz product the past seven years has been an IDE for developers. So the entire user experience flow we've built up just didn't make sense. And so when we kind of went out to build Bolt, we just thought, you know, if we were inventing our product today, what would the interface look like given what is now possible with the AI code gen? And so there's definitely a lot of conversations we had internally, but you know, just kind of when we logically laid it out, we were like, yeah, I think it makes sense to just greenfield a new thing and let's see what happens. If it works great, then we'll figure it out. If it doesn't work great, then it'll get deleted at some point. So that's kind of how it actually came to be.Swyx [00:02:49]: I'll mention your background a little bit. You were also founder of Thinkster before you started StackBlitz. So both of you are second time founders. Both of you have sort of re-founded your company recently. Yours was more of a rename. I think a slightly different direction as well. And then we can talk about both. Maybe just chronologically, should we get caught up on where Kodo is first and then you know, just like what people should know since the last pod? Sure.Itamar [00:03:12]: The last pod was two months after we launched and we basically had the vision that we talked about. The idea that software development is about specification, test and code, etc. We are more on the testing part as in essence, we think that if you solve testing, you solve software development. The beautiful chart that we'll put up on screen. And testing is a really big field, like there are many dimensions, unit testing, the level of the component, how big it is, how large it is. And then there is like different type of testing, is it regression or smoke or whatever. So back then we only had like one ID extension with unit tests as in focus. One and a half year later, first ID extension supports more type of testing as context aware. We index local, local repos, but also 10,000s of repos for Fortune 500 companies. We have another agent, another tool that is called, the pure agent is the open source and the commercial one is CodoMerge. And then we have another open source called CoverAgent, which is not yet a commercial product coming very soon. It's very impressive. It could be that already people are approving automated pull requests that they don't even aware in really big open sources. So once we have enough of these, we will also launch another agent. So for the first one and a half year, what we did is grew in our offering and mostly on the side of, does this code actually works, testing, code review, et cetera. And we believe that's the critical milestone that needs to be achieved to actually have the AI engineer for enterprise software. 
And then like for the first year was everything bottom up, getting to 1 million installation. 2024, that was 2023, 2024 was starting to monetize, to feel like how it is to make the first buck. So we did the teams offering, it went well with a thousand of teams, et cetera. And then we started like just a few months ago to do enterprise with everything you need, which is a lot of things that discussed in the last post that was just released by Codelm. So that's how we call it at Codelm. Just opening the brackets, our company name was Codelm AI, and we renamed to Codo and we call our models Codelm. So back to my point, so we started Enterprise Motion and already have multiple Fortune 100 companies. And then with that, we raised a series of $40 million. And what's exciting about it is that enables us to develop more agents. That's our focus. I think it's very different. We're not coming very soon with an ID or something like that.Swyx [00:06:01]: You don't want to fork this code?Itamar [00:06:03]: Maybe we'll fork JetBrains or something just to be different.Swyx [00:06:08]: I noticed that, you know, I think the promise of general purpose agents has kind of died. Like everyone is doing kind of what you're doing. There's Codogen, Codomerge, and then there's a third one. What's the name of it?Itamar [00:06:17]: Yeah. Codocover. Cover. Which is like a commercial version of a cover agent. It's coming soon.Swyx [00:06:23]: Yeah. It's very similar with factory AI, also doing like droids. They all have special purpose doing things, but people don't really want general purpose agents. Right. The last time you were here, we talked about AutoGBT, the biggest thing of 2023. This year, not really relevant anymore. And I think it's mostly just because when you give me a general purpose agent, I don't know what to do with it.Eric [00:06:42]: Yeah.Itamar [00:06:43]: I totally agree with that. We're seeing it for a while and I think it will stay like that despite the computer use, et cetera, that supposedly can just replace us. You can just like prompt it to be, hey, now be a QA or be a QA person or a developer. I still think that there's a few reasons why you see like a dedicated agent. Again, I'm a bit more focused, like my head is more on complex software for big teams and enterprise, et cetera. And even think about permissions and what are the data sources and just the same way you manage permissions for users. Developers, you probably want to have dedicated guardrails and dedicated approvals for agents. I intentionally like touched a point on not many people think about. And of course, then what you can think of, like maybe there's different tools, tool use, et cetera. But just the first point by itself is a good reason why you want to have different agents.Alessio [00:07:40]: Just to compare that with Bot.new, you're almost focused on like the application is very complex and now you need better tools to kind of manage it and build on top of it. On Bot.new, it's almost like I was using it the other day. There's basically like, hey, look, I'm just trying to get started. You know, I'm not very opinionated on like how you're going to implement this. Like this is what I want to do. And you build a beautiful app with it. 
What people ask as the next step, you know, going back to like the general versus like specific, have you had people say, hey, you know, this is great to start, but then I want a specific Bot.new dot whatever else to do a more vertical integration and kind of like development or what's the, what do people say?Eric [00:08:18]: Yeah. I think, I think you kind of hit the, hit it head on, which is, you know, kind of the way that we've, we've kind of talked about internally is it's like people are using Bolt to go from like 0.0 to 1.0, like that's like kind of the biggest unlock that Bolt has versus most other things out there. I mean, I think that's kind of what's, what's very unique about Bolt. I think the, you know, the working on like existing enterprise applications is, I mean, it's crazy important because, you know, there's a, you look, when you look at the fortune 500, I mean, these code bases, some of these have been around for 20, 30 plus years. And so it's important to be going from, you know, 101.3 to 101.4, et cetera. I think for us, so what's been actually pretty interesting is we see there's kind of two different users for us that are coming in and it's very distinct. It's like people that are developers already. And then there's people that have never really written software and more if they have, it's been very, very minimal. And so in the first camp, what these developers are doing, like to go from zero to one, they're coming to Bolt and then they're ejecting the thing to get up or just downloading it and, you know, opening cursor, like whatever to, to, you know, keep iterating on the thing. And sometimes they'll bring it back to Bolt to like add in a huge piece of functionality or something. Right. But for the people that don't know how to code, they're actually just, they, they live in this thing. And that was one of the weird things when we launched is, you know, within a day of us being online, one of the most popular YouTube videos, and there's been a ton since, which was, you know, there's like, oh, Bolt is the cursor killer. And I originally saw the headlines and I was like, thanks for the views. I mean, I don't know. This doesn't make sense to me. That's not, that's not what we kind of thought.Swyx [00:09:44]: It's how YouTubers talk to each other. Well, everything kills everything else.Eric [00:09:47]: Totally. But what blew my mind was that there was any comparison because it's like cursor is a, is a local IDE product. But when, when we actually kind of dug into it and we, and we have people that are using our product saying this, I'm not using cursor. And I was like, what? And it turns out there are hundreds of thousands of people that we have seen that we're using cursor and we're trying to build apps with that where they're not traditional software does, but we're heavily leaning on the AI. And as you can imagine, it is very complicated, right? To do that with cursor. So when Bolt came out, they're like, wow, this thing's amazing because it kind of inverts the complexity where it's like, you know, it's not an IDE, it's, it's a, it's a chat-based sort of interface that we have. So that's kind of the split, which is rather interesting. We've had like the first startups now launch off of Bolt entirely where this, you know, tomorrow I'm doing a live stream with this guy named Paul, who he's built an entire CRM using this thing and you know, with backend, et cetera. 
And people have made their first money on the internet period, you know, launching this with Stripe or whatever have you. So that's, that's kind of the two main, the two main categories of folks that we see using Bolt though.Itamar [00:10:51]: I agree that I don't understand the comparison. It doesn't make sense to me. I think like we have like two type of families of tools. One is like we re-imagine the software development. I think Bolt is there and I think like a cursor is more like a evolution of what we already have. It's like taking the IDE and it's, it's amazing and it's okay, let's, let's adapt the IDE to an era where LLMs can do a lot for us. And Bolt is more like, okay, let's rethink everything totally. And I think we see a few tools there, like maybe Vercel, Veo and maybe Repl.it in that area. And then in the area of let's expedite, let's change, let's, let's progress with what we already have. You can see Cursor and Kodo, but we're different between ourselves, Cursor and Kodo, but definitely I think that comparison doesn't make sense.Alessio [00:11:42]: And just to set the context, this is not a Twitter demo. You've made 4 million of revenue in four weeks. So this is, this is actually working, you know, it's not a, what, what do you think that is? Like, there's been so many people demoing coding agents on Twitter and then it doesn't really work. And then you guys were just like, here you go, it's live, go use it, pay us for it. You know, is there anything in the development that was like interesting and maybe how that compares to building your own agents?Eric [00:12:08]: We had no idea, honestly, like we, we, we've been pretty blown away and, and things have just kind of continued to grow faster since then. We're like, oh, today is week six. So I, I kind of came back to the point you just made, right, where it's, you, you kind of outlined, it's like, there's kind of this new market of like kind of rethinking the software development and then there's heavily augmenting existing developers. I think that, you know, both of which are, you know, AI code gen being extremely good, it's allowed existing developers, it's allowing existing developers to camera out software far faster than they could have ever before, right? It's like the ultimate power tool for an existing developer. But this code gen stuff is now so good. And then, and we saw this over the past, you know, from the beginning of the year when we tried to first build, it's actually lowered the barrier to people that, that aren't traditionally software engineers. But the kind of the key thing is if you kind of think about it from, imagine you've never written software before, right? My co-founder and I, he and I grew up down the street from each other in Chicago. We learned how to code when we were 13 together and we've been building stuff ever since. And this is back in like the mid 2000s or whatever, you know, there was nothing for free to learn from online on the internet and how to code. For our 13th birthdays, we asked our parents for, you know, O'Reilly books cause you couldn't get this at the library, right? And so instead of like an Xbox, we got, you know, programming books. But the hardest part for everyone learning to code is getting an environment set up locally, you know? 
And so when we built StackBlitz, like kind of the key thesis, like seven years ago, the insight we had was that, Hey, it seems like the browser has a lot of new APIs like WebAssembly and service workers, et cetera, where you could actually write an operating system that ran inside the browser that could boot in milliseconds. And you, you know, basically there's this missing capability of the web. Like the web should be able to build apps for the web, right? You should be able to build the web on the web. Every other platform has that, Visual Studio for Windows, Xcode for Mac. The web has no built in primitive for this. And so just like our built in kind of like nerd instinct on this was like, that seems like a huge hole and it's, you know, it will be very valuable or like, you know, very valuable problem to solve. So if you want to set up that environments, you know, this is what we spent the past seven years doing. And the reality is existing developers have running locally. They already know how to set up that environment. So the problem isn't as acute for them. When we put Bolt online, we took that technology called WebContainer and married it with these, you know, state of the art frontier models. And the people that have the most pain with getting stuff set up locally is people that don't code. I think that's been, you know, really the big explosive reason is no one else has been trying to make dev environments work inside of a browser tab, you know, for the past if since ever, other than basically our company, largely because there wasn't an immediate demand or need. So I think we kind of find ourselves at the right place at the right time. And again, for this market of people that don't know how to write software, you would kind of expect that you should be able to do this without downloading something to your computer in the same way that, hey, I don't have to download Photoshop now to make designs because there's Figma. I don't have to download Word because there's, you know, Google Docs. They're kind of looking at this as that sort of thing, right? Which was kind of the, you know, our impetus and kind of vision from the get-go. But you know, the code gen, the AI code gen stuff that's come out has just been, you know, an order of magnitude multiplier on how magic that is, right? So that's kind of my best distillation of like, what is going on here, you know?Alessio [00:15:21]: And you can deploy too, right?Eric [00:15:22]: Yeah.Alessio [00:15:23]: Yeah.Eric [00:15:24]: And so that's, what's really cool is it's, you know, we have deployment built in with Netlify and this is actually, I think, Sean, you actually built this at Netlify when you were there. Yeah. It's one of the most brilliant integrations actually, because, you know, effectively the API that Sean built, maybe you can speak to it, but like as a provider, we can just effectively give files to Netlify without the user even logging in and they have a live website. And if they want to keep, hold onto it, they can click a link and claim it to their Netlify account. But it basically is just this really magic experience because when you come to Bolt, you say, I want a website. Like my mom, 70, 71 years old, made her first website, you know, on the internet two weeks ago, right? It was about her nursing days.Swyx [00:16:03]: Oh, that's fantastic though. It wouldn't have been made.Eric [00:16:06]: A hundred percent. 
Cause even in, you know, when we've had a lot of people building personal, like deeply personal stuff, like in the first week we launched this, the sales guy from the East Coast, you know, replied to a tweet of mine and he said, thank you so much for building this to your team. His daughter has a medical condition and so for her to travel, she has to like line up donors or something, you know, so ahead of time. And so he actually used Bolt to make a website to do that, to actually go and send it to folks in the region she was going to travel to ahead of time. I was really touched by it, but I also thought like, why, you know, why didn't he use like Wix or Squarespace? Right? I mean, this is, this is a solved problem, quote unquote, right? And then when I thought, I actually use Squarespace for my, for my, uh, the wedding website for my wife and I, like back in 2021, so I'm familiar, you know, it was, it was faster. I know how to code. I was like, this is faster. Right. And I thought back and I was like, there's a whole interface you have to learn how to use. And it's actually not that simple. There's like a million things you can configure in that thing. When you come to Bolt, there's a, there's a text box. You just say, I need a, I need a wedding website. Here's the date. Here's where it is. And here's a photo of me and my wife, put it somewhere relevant. It's actually the simplest way. And that's what my, when my mom came, she said, uh, I'm Pat Simons. I was a nurse in the seventies, you know, and like, here's the things I did and a website came out. So coming back to why is this such a, I think, why are we seeing this sort of growth? It's, this is the simplest interface I think maybe ever created to actually build it, a deploy a website. And then that website, my mom made, she's like, okay, this looks great. And there's, there's one button, you just click it, deploy, and it's live and you can buy a domain name, attach it to it. And you know, it's as simple as it gets, it's getting even simpler with some of the stuff we're working on. But anyways, so that's, it's, it's, uh, it's been really interesting to see some of the usage like that.Swyx [00:17:46]: I can offer my perspective. So I, you know, I probably should have disclosed a little bit that, uh, I'm a, uh, stack list investor.Alessio [00:17:53]: Canceled the episode. I know, I know. Don't play it now. Pause.Eric actually reached out to ShowMeBolt before the launch. And we, you know, we talked a lot about, like, the framing of, of what we're going to talk about how we marketed the thing, but also, like, what we're So that's what Bolt was going to need, like a whole sort of infrastructure.swyx: Netlify, I was a maintainer but I won't take claim for the anonymous upload. That's actually the origin story of Netlify. We can have Matt Billman talk about it, but that was [00:18:00] how Netlify started. You could drag and drop your zip file or folder from your desktop onto a website, it would have a live URL with no sign in.swyx: And so that was the origin story of Netlify. And it just persists to today. And it's just like it's really nice, interesting that both Bolt and CognitionDevIn and a bunch of other sort of agent type startups, they all use Netlify to deploy because of this one feature. 
They don't really care about the other features.swyx: But, but just because it's easy for computers to use and talk to it, like if you build an interface for computers specifically, that it's easy for them to Navigate, then they will be used in agents. And I think that's a learning that a lot of developer tools companies are having. That's my bolt launch story and now if I say all that stuff.swyx: And I just wanted to come back to, like, the Webcontainers things, right? Like, I think you put a lot of weight on the technical modes. I think you also are just like, very good at product. So you've, you've like, built a better agent than a lot of people, the rest of us, including myself, who have tried to build these things, and we didn't get as far as you did.swyx: Don't shortchange yourself on products. But I think specifically [00:19:00] on, on infra, on like the sandboxing, like this is a thing that people really want. Alessio has Bax E2B, which we'll have on at some point, talking about like the sort of the server full side. But yours is, you know, inside of the browser, serverless.swyx: It doesn't cost you anything to serve one person versus a million people. It doesn't, doesn't cost you anything. I think that's interesting. I think in theory, we should be able to like run tests because you can run the full backend. Like, you can run Git, you can run Node, you can run maybe Python someday.swyx: We talked about this. But ideally, you should be able to have a fully gentic loop, running code, seeing the errors, correcting code, and just kind of self healing, right? Like, I mean, isn't that the dream?Eric: Totally.swyx: Yeah,Eric: totally. At least in bold, we've got, we've got a good amount of that today. I mean, there's a lot more for us to do, but one of the nice things, because like in web container, you know, there's a lot of kind of stuff you go Google like, you know, turn docker container into wasm.Eric: You'll find a lot of stuff out there that will do that. The problem is it's very big, it's slow, and that ruins the experience. And so what we ended up doing is just writing an operating system from [00:20:00] scratch that was just purpose built to, you know, run in a browser tab. And the reason being is, you know, Docker 2 awesome things will give you an image that's like out 60 to 100 megabits, you know, maybe more, you know, and our, our OS, you know, kind of clocks in, I think, I think we're in like a, maybe, maybe a megabyte or less or something like that.Eric: I mean, it's, it's, you know, really, really, you know, stripped down.swyx: This is basically the task involved is I understand that it's. Mapping every single, single Linux call to some kind of web, web assembly implementation,Eric: but more or less, and, and then there's a lot of things actually, like when you're looking at a dev environment, there's a lot of things that you don't need that a traditional OS is gonna have, right?Eric: Like, you know audio drivers or you like, there's just like, there's just tons of things. Oh, yeah. Right. Yeah. That goes . Yeah. You can just kind, you can, you can kind of tos them. Or alternatively, what you can do is you can actually be the nice thing. 
And this is, this kind of comes back to the origins of browsers, which is, you know, they're, they're at the beginning of the web and, you know, the late nineties, there was two very different kind of visions for the web where Alan Kay vehemently [00:21:00] disagree with the idea that should be document based, which is, you know, Tim Berners Lee, you know, that, and that's kind of what ended up winning, winning was this document based kind of browsing documents on the web thing.Eric: Alan Kay, he's got this like very famous quote where he said, you know, you want web browsers to be mini operating systems. They should download little mini binaries and execute with like a little mini virtualized operating system in there. And what's kind of interesting about the history, not to geek out on this aspect, what's kind of interesting about the history is both of those folks ended up being right.Eric: Documents were actually the pragmatic way that the web worked. Was, you know, became the most ubiquitous platform in the world to the degree now that this is why WebAssembly has been invented is that we're doing, we need to do more low level things in a browser, same thing with WebGPU, et cetera. And so all these APIs, you know, to build an operating system came to the browser.Eric: And that was actually the realization we had in 2017 was, holy heck, like you can actually, you know, service workers, which were designed for allowing your app to work offline. That was the kind of the key one where it was like, wait a second, you can actually now run. Web servers within a [00:22:00] browser, like you can run a server that you open up.Eric: That's wild. Like full Node. js. Full Node. js. Like that capability. Like, I can have a URL that's programmatically controlled. By a web application itself, boom. Like the web can build the web. The primitive is there. Everyone at the time, like we talked to people that like worked on, you know Chrome and V8 and they were like, uhhhh.Eric: You know, like I don't know. But it's one of those things you just kind of have to go do it to find out. So we spent a couple of years, you know, working on it and yeah. And, and, and got to work in back in 2021 is when we kind of put the first like data of web container online. Butswyx: in partnership with Google, right?swyx: Like Google actually had to help you get over the finish line with stuff.Eric: A hundred percent, because well, you know, over the years of when we were doing the R and D on the thing. Kind of the biggest challenge, the two ways that you can kind of test how powerful and capable a platform are, the two types of applications are one, video games, right, because they're just very compute intensive, a lot of calculations that have to happen, right?Eric: The second one are IDEs, because you're talking about actually virtualizing the actual [00:23:00] runtime environment you are in to actually build apps on top of it, which requires sophisticated capabilities, a lot of access to data. You know, a good amount of compute power, right, to effectively, you know, building app in app sort of thing.Eric: So those, those are the stress tests. So if your platform is missing stuff, those are the things where you find out. Those are, those are the people building games and IDEs. They're the ones filing bugs on operating system level stuff. And for us, browser level stuff.Eric [00:23:47]: yeah, what ended up happening is we were just hammering, you know, the Chromium bug tracker, and they're like, who are these guys? Yeah. 
And, and they were amazing because I mean, just making Chrome DevTools be able to debug, I mean, it's, it's not, it wasn't originally built right for debugging an operating system, right? They've been phenomenal working with us and just kind of really pushing the limits, but that it's a rising tide that's kind of lifted all boats because now there's a lot of different types of applications that you can debug with Chrome Dev Tools that are running a browser that runs more reliably because just the stress testing that, that we and, you know, games that are coming to the web are kind of pushing as well, but.Itamar [00:24:23]: That's awesome. About the testing, I think like most, let's say coding assistant from different kinds will need this loop of testing. And even I would add code review to some, to some extent that you mentioned. How is testing different from code review? Code review could be, for example, PR review, like a code review that is done at the point of when you want to merge branches. But I would say that code review, for example, checks best practices, maintainability, and so on. It's not just like CI, but more than CI. And testing is like a more like checking functionality, et cetera. So it's different. We call, by the way, all of these together code integrity, but that's a different story. Just to go back to the, to the testing and specifically. Yeah. It's, it's, it's since the first slide. Yeah. We're consistent. So if we go back to the testing, I think like, it's not surprising that for us testing is important and for Bolt it's testing important, but I want to shed some light on a different perspective of it. Like let's think about autonomous driving. Those startups that are doing autonomous driving for highway and autonomous driving for the city. And I think like we saw the autonomous of the highway much faster and reaching to a level, I don't know, four or so much faster than those in the city. Now, in both cases, you need testing and quote unquote testing, you know, verifying validation that you're doing the right thing on the road and you're reading and et cetera. But it's probably like so different in the city that it could be like actually different technology. And I claim that we're seeing something similar here. So when you're building the next Wix, and if I was them, I was like looking at you and being a bit scared. That's what you're disrupting, what you just said. Then basically, I would say that, for example, the UX UI is freaking important. And because you're you're more aiming for the end user. In this case, maybe it's an end user that doesn't know how to develop for developers. It's also important. But let alone those that do not know to develop, they need a slick UI UX. And I think like that's one reason, for example, I think Cursor have like really good technology. I don't know the underlying what's under the hood, but at least what they're saying. But I think also their UX UI is great. It's a lot because they did their own ID. While if you're aiming for the city AI, suddenly like there's a lot of testing and code review technology that it's not necessarily like that important. For example, let's talk about integration tests. Probably like a lot of what you're building involved at the moment is isolated applications. Maybe the vision or the end game is maybe like having one solution for everything. It could be that eventually the highway companies will go into the city and the other way around. But at the beginning, there is a difference. 
And integration tests are a good example. I guess they're a bit less important. And when you think about enterprise software, they're really important. So to recap, like I think like the idea of looping and verifying your test and verifying your code in different ways, testing or code review, et cetera, seems to be important in the highway AI and the city AI, but in different ways and different like critical for the city, even more and more variety. Actually, I was looking to ask you like what kind of loops you guys are doing. For example, when I'm using Bolt and I'm enjoying it a lot, then I do see like sometimes you're trying to catch the errors and fix them. And also, I noticed that you're breaking down tasks into smaller ones and then et cetera, which is already a common notion for a year ago. But it seems like you're doing it really well. So if you're willing to share anything about it.Eric [00:28:07]: Yeah, yeah. I realized I never actually hit the punchline of what I was saying before. I mentioned the point about us kind of writing an operating system from scratch because what ended up being important about that is that to your point, it's actually a very, like compared to like a, you know, if you're like running cursor on anyone's machine, you kind of don't know what you're dealing with, with the OS you're running on. There could be an error happens. It could be like a million different things, right? There could be some config. There could be, it could be God knows what, right? The thing with WebConnect is because we wrote the entire thing from scratch. It's actually a unified image basically. And we can instrument it at any level that we think is going to be useful, which is exactly what we did when we started building Bolt is we instrumented stuff at like the process level, at the runtime level, you know, et cetera, et cetera, et cetera. Stuff that would just be not impossible to do on local, but to do that in a way that works across any operating system, whatever is, I mean, would just be insanely, you know, insanely difficult to do right and reliably. And that's what you saw when you've used Bolt is that when an error actually will occur, whether it's in the build process or the actual web application itself is failing or anything kind of in between, you can actually capture those errors. And today it's a very primitive way of how we've implemented it largely because the product just didn't exist 90 days ago. So we're like, we got some work ahead of us and we got to hire some more a little bit, but basically we present and we say, Hey, this is, here's kind of the things that went wrong. There's a fix it button and then a ignore button, and then you can just hit fix it. And then we take all that telemetry through our agent, you run it through our agent and say, kind of, here's the state of the application. Here's kind of the errors that we got from Node.js or the browser or whatever, and like dah, dah, dah, dah. And it can take a crack at actually solving it. And it's actually pretty darn good at being able to do that. That's kind of been a, you know, closing the loop and having it be a reliable kind of base has seemed to be a pretty big upgrade over doing stuff locally, just because I think that's a pretty key ingredient of it. And yeah, I think breaking things down into smaller tasks, like that's, that's kind of a key part of our agent. I think like Claude did a really good job with artifacts. 
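A rough sketch of the capture-and-repair loop Eric describes here, where build and runtime errors are collected and handed to an agent along with a snapshot of the project. Every name below is an illustrative stand-in, not Bolt's actual internals.

```typescript
// Hypothetical illustration of an error-capture -> agent-fix loop.
// Nothing here is Bolt's real API; the helpers are stand-ins.

interface CapturedError {
  source: "build" | "runtime" | "server";
  message: string;
  stack?: string;
}

interface ProjectState {
  files: Record<string, string>; // path -> contents
}

declare function collectErrors(): Promise<CapturedError[]>;
declare function snapshotProject(): Promise<ProjectState>;
declare function askAgentForPatch(
  state: ProjectState,
  errors: CapturedError[],
): Promise<Record<string, string>>; // path -> new contents
declare function applyPatch(patch: Record<string, string>): Promise<void>;

// Triggered when the user clicks a "fix it" style button.
async function fixIt(): Promise<void> {
  const errors = await collectErrors();
  if (errors.length === 0) return;

  const state = await snapshotProject();
  const patch = await askAgentForPatch(state, errors);
  await applyPatch(patch);
  // The environment re-runs; any remaining errors can trigger another pass.
}
```

The key point in Eric's description is that owning the whole environment makes the error telemetry uniform, so a single loop like this can cover build, runtime, and server failures.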
I think, you know, us and kind of everyone else has, has kind of taken their approach of like actually breaking out certain tasks in a certain order into, you know, kind of a concrete way. And, and so actually the core of Bolt, I know we actually made open source. So you can actually go and check out like the system prompts and et cetera, and you can run it locally and whatever have you. So anyone that's interested in this stuff, I'd highly recommend taking a look at. There's not a lot of like stuff that's like open source in this realm. It's, that was one of the fun things that we've we thought would be cool to do. And people, people seem to like it. I mean, there's a lot of forks and people adding different models and stuff. So it's been cool to see.Swyx [00:30:41]: Yeah. I'm happy to add, I added real-time voice for my opening day demo and it was really fun to hack with. So thank you for doing that. Yeah. Thank you. I'm going to steal your code.Eric [00:30:52]: Because I want that.Swyx [00:30:52]: It's funny because I built on top of the fork of Bolt.new that already has the multi LLM thing. And so you just told me you're going to merge that in. So then you're going to merge two layers of forks down into this thing. So it'll be fun.Eric [00:31:03]: Heck yeah.Alessio [00:31:04]: Just to touch on like the environment, Itamar, you maybe go into the most complicated environments that even the people that work there don't know how to run. How much of an impact does that have on your performance? Like, you know, it's most of the work you're doing actually figuring out environment and like the libraries, because I'm sure they're using outdated version of languages, they're using outdated libraries, they're using forks that have not been on the public internet before. How much of the work that you're doing is like there versus like at the LLM level?Itamar [00:31:32]: One of the reasons I was asking about, you know, what are the steps to break things down, because it really matters. Like, what's the tech stack? How complicated the software is? It's hard to figure it out when you're dealing with the real world, any environment of enterprise as a city, when I'm like, while maybe sometimes like, I think you do enable like in Bolt, like to install stuff, but it's quite a like controlled environment. And that's a good thing to do, because then you narrow down and it's easier to make things work. So definitely, there are two dimensions, I think, actually spaces. One is the fact just like installing our software without yet like doing anything, making it work, just installing it because we work with enterprise and Fortune 500, etc. Many of them want on prem solution.Swyx [00:32:22]: So you have how many deployment options?Itamar [00:32:24]: Basically, we had, we did a metric metrics, say 96 options, because, you know, they're different dimensions. Like, for example, one dimension, we connect to your code management system to your Git. So are you having like GitHub, GitLab? Subversion? Is it like on cloud or deployed on prem? Just an example. Which model agree to use its APIs or ours? Like we have our Is it TestGPT? Yeah, when we started with TestGPT, it was a huge mistake name. It was cool back then, but I don't think it's a good idea to name a model after someone else's model. Anyway, that's my opinion. 
So we gotSwyx [00:33:02]: I'm interested in these learnings, like things that you change your mind on.Itamar [00:33:06]: Eventually, when you're building a company, you're building a brand and you want to create your own brand. By the way, when I thought about Bolt.new, I also thought about if it's not a problem, because when I think about Bolt, I do think about like a couple of companies that are already called this way.Swyx [00:33:19]: Curse companies. You could call it Codium just to...Itamar [00:33:24]: Okay, thank you. Touche. Touche.Eric [00:33:27]: Yeah, you got to imagine the board meeting before we launched Bolt, one of our investors, you can imagine they're like, are you sure? Because from the investment side, it's kind of a famous, very notorious Bolt. And they're like, are you sure you want to go with that name? Oh, yeah. Yeah, absolutely.Itamar [00:33:43]: At this point, we have actually four models. There is a model for autocomplete. There's a model for the chat. There is a model dedicated for more for code review. And there is a model that is for code embedding. Actually, you might notice that there isn't a good code embedding model out there. Can you name one? Like dedicated for code?Swyx [00:34:04]: There's code indexing, and then you can do sort of like the hide for code. And then you can embed the descriptions of the code.Itamar [00:34:12]: Yeah, but you do see a lot of type of models that are dedicated for embedding and for different spaces, different fields, etc. And I'm not aware. And I know that if you go to the bedrock, try to find like there's a few code embedding models, but none of them are specialized for code.Swyx [00:34:31]: Is there a benchmark that you would tell us to pay attention to?Itamar [00:34:34]: Yeah, so it's coming. Wait for that. Anyway, we have our models. And just to go back to the 96 option of deployment. So I'm closing the brackets for us. So one is like dimensional, like what Git deployment you have, like what models do you agree to use? Dotter could be like if it's air-gapped completely, or you want VPC, and then you have Azure, GCP, and AWS, which is different. Do you use Kubernetes or do not? Because we want to exploit that. There are companies that do not do that, etc. I guess you know what I mean. So that's one thing. And considering that we are dealing with one of all four enterprises, we needed to deal with that. So you asked me about how complicated it is to solve that complex code. I said, it's just a deployment part. And then now to the software, we see a lot of different challenges. For example, some companies, they did actually a good job to build a lot of microservices. Let's not get to if it's good or not, but let's first assume that it is a good thing. A lot of microservices, each one of them has their own repo. And now you have tens of thousands of repos. And you as a developer want to develop something. And I remember me coming to a corporate for the first time. I don't know where to look at, like where to find things. So just doing a good indexing for that is like a challenge. And moreover, the regular indexing, the one that you can find, we wrote a few blogs on that. By the way, we also have some open source, different than yours, but actually three and growing. Then it doesn't work. You need to let the tech leads and the companies influence your indexing. For example, Mark with different repos with different colors. This is a high quality repo. This is a lower quality repo. This is a repo that we want to deprecate. 
This is a repo we want to grow, etc. And let that be part of your indexing. And only then things actually work for enterprise and they don't get to a fatigue of, oh, this is awesome. Oh, but I'm starting, it's annoying me. I think Copilot is an amazing tool, but I'm quoting others, meaning GitHub Copilot, that they see not so good retention of GitHub Copilot and enterprise. Ooh, spicy. Yeah. I saw snapshots of people and we have customers that are Copilot users as well. And also I saw research, some of them is public by the way, between 38 to 50% retention for users using Copilot and enterprise. So it's not so good. By the way, I don't think it's that bad, but it's not so good. So I think that's a reason because, yeah, it helps you auto-complete, but then, and especially if you're working on your repo alone, but if it's need that context of remote repos that you're code-based, that's hard. So to make things work, there's a lot of work on that, like giving the controllability for the tech leads, for the developer platform or developer experience department in the organization to influence how things are working. A short example, because if you have like really old legacy code, probably some of it is not so good anymore. If you just fine tune on these code base, then there is a bias to repeat those mistakes or old practices, etc. So you need, for example, as I mentioned, to influence that. For example, in Coda, you can have a markdown of best practices by the tech leads and Coda will include that and relate to that and will not offer suggestions that are not according to the best practices, just as an example. So that's just a short list of things that you need to do in order to deal with, like you mentioned, the 100.1 to 100.2 version of software. I just want to say what you're doing is extremelyEric [00:38:32]: impressive because it's very difficult. I mean, the business of Stackplus, kind of before bulk came online, we sold a version of our IDE that went on-prem. So I understand what you're saying about the difficulty of getting stuff just working on-prem. Holy heck. I mean, that is extremely hard. I guess the question I have for you is, I mean, we were just doing that with kind of Kubernetes-based stuff, but the spread of Fortune 500 companies that you're working with, how are they doing the inference for this? Are you kind of plugging into Azure's OpenAI stuff and AWS's Bedrock, you know, Cloud stuff? Or are they just like running stuff on GPUs? Like, what is that? How are these folks approaching that? Because, man, what we saw on the enterprise side, I mean, I got to imagine that that's a huge challenge. Everything you said and more, like,Itamar [00:39:15]: for example, like someone could be, and I don't think any of these is bad. Like, they made their decision. Like, for example, some people, they're, I want only AWS and VPC on AWS, no matter what. And then they, some of them, like there is a subset, I will say, I'm willing to take models only for from Bedrock and not ours. And we have a problem because there is no good code embedding model on Bedrock. And that's part of what we're doing now with AWS to solve that. We solve it in a different way. But if you are willing to run on AWS VPC, but run your run models on GPUs or inferentia, like the new version of the more coming out, then our models can run on that. But everything you said is right. Like, we see like on-prem deployment where they have their own GPUs. We see Azure where you're using OpenAI Azure. 
We see cases where you're running on GCP and they want OpenAI. Like this cross, like a case, although there is Gemini or even Sonnet, I think is available on GCP, just an example. So all the options, that's part of the challenge. I admit that we thought about it, but it was even more complicated. And it took us a few months to actually, that metrics that I mentioned, to start clicking each one of the blocks there. A few months is impressive. I mean,Eric [00:40:35]: honestly, just that's okay. Every one of these enterprises is, their networking is different. Just everything's different. Every single one is different. I see you understand. Yeah. So that just cannot be understated. That it is, that's extremely impressive. Hats off.Itamar [00:40:50]: It could be, by the way, like, for example, oh, we're only AWS, but our GitHub enterprise is on-prem. Oh, we forgot. So we need like a private link or whatever, like every time like that. It's not, and you do need to think about it if you want to work with an enterprise. And it's important. Like I understand like their, I respect their point of view.Swyx [00:41:10]: And this primarily impacts your architecture, your tech choices. Like you have to, you can't choose some vendors because...Itamar [00:41:15]: Yeah, definitely. To be frank, it makes us hard for a startup because it means that we want, we want everyone to enjoy all the variety of models. By the way, it was hard for us with our technology. I want to open a bracket, like a window. I guess you're familiar with our Alpha Codium, which is an open source.Eric [00:41:33]: We got to go over that. Yeah. So I'll do that quickly.Itamar [00:41:36]: Yeah. A pin in that. Yeah. Actually, we didn't have it in the last episode. So, so, okay.Swyx [00:41:41]: Okay. We'll come back to that later, but let's talk about...Itamar [00:41:43]: Yeah. So, so just like shortly, and then we can double click on Alpha Codium. But Alpha Codium is a open source tool. You can go and try it and lets you compete on CodeForce. This is a website and a competition and actually reach a master level level, like 95% with a click of a button. You don't need to do anything. And part of what we did there is taking a problem and breaking it to different, like smaller blocks. And then the models are doing a much better job. Like we all know it by now that taking small tasks and solving them, by the way, even O1, which is supposed to be able to do system two thinking like Greg from OpenAI like hinted, is doing better on these kinds of problems. But still, it's very useful to break it down for O1, despite O1 being able to think by itself. And that's what we presented like just a month ago, OpenAI released that now they are doing 93 percentile with O1 IOI left and International Olympiad of Formation. Sorry, I forgot. Exactly. I told you I forgot. And we took their O1 preview with Alpha Codium and did better. Like it just shows like, and there is a big difference between the preview and the IOI. It shows like that these models are not still system two thinkers, and there is a big difference. So maybe they're not complete system two. Yeah, they need some guidance. I call them system 1.5. We can, we can have it. I thought about it. Like, you know, I care about this philosophy stuff. And I think like we didn't see it even close to a system two thinking. I can elaborate later. But closing the brackets, like we take Alpha Codium and as our principle of thinking, we take tasks and break them down to smaller tasks. 
And then we want to exploit the best model to solve them. So I want to enable anyone to enjoy O1 and SONET and Gemini 1.5, etc. But at the same time, I need to develop my own models as well, because some of the Fortune 500 want to have all air gapped or whatever. So that's a challenge. Now you need to support so many models. And to some extent, I would say that the flow engineering, the breaking down to two different blocks is a necessity for us. Why? Because when you take a big block, a big problem, you need a very different prompt for each one of the models to actually work. But when you take a big problem and break it into small tasks, we can talk how we do that, then the prompt matters less. What I want to say, like all this, like as a startup trying to do different deployment, getting all the juice that you can get from models, etc. is a big problem. And one need to think about it. And one of our mitigation is that process of taking tasks and breaking them down. That's why I'm really interested to know how you guys are doing it. And part of what we do is also open source. So you can see.Swyx [00:44:39]: There's a lot in there. But yeah, flow over prompt. I do believe that that does make sense. I feel like there's a lot that both of you can sort of exchange notes on breaking down problems. And I just want you guys to just go for it. This is fun to watch.Eric [00:44:55]: Yeah. I mean, what's super interesting is the context you're working in is, because for us too with Bolt, we've started thinking because our kind of existing business line was going behind the firewall, right? We were like, how do we do this? Adding the inference aspect on, we're like, okay, how does... Because I mean, there's not a lot of prior art, right? I mean, this is all new. This is all new. So I definitely am going to have a lot of questions for you.Itamar [00:45:17]: I'm here. We're very open, by the way. We have a paper on a blog or like whatever.Swyx [00:45:22]: The Alphacodeum, GitHub, and we'll put all this in the show notes.Itamar [00:45:25]: Yeah. And even the new results of O1, we published it.Eric [00:45:29]: I love that. And I also just, I think spiritually, I like your approach of being transparent. Because I think there's a lot of hype-ium around AI stuff. And a lot of it is, it's just like, you have these companies that are just kind of keep their stuff closed source and then just max hype it, but then it's kind of nothing. And I think it kind of gives a bad rep to the incredible stuff that's actually happening here. And so I think it's stuff like what you're doing where, I mean, true merit and you're cracking open actual code for others to learn from and use. That strikes me as the right approach. And it's great to hear that you're making such incredible progress.Itamar [00:46:02]: I have something to share about the open source. Most of our tools are, we have an open source version and then a premium pro version. But it's not an easy decision to do that. I actually wanted to ask you about your strategy, but I think in your case, there is, in my opinion, relatively a good strategy where a lot of parts of open source, but then you have the deployment and the environment, which is not right if I get it correctly. And then there's a clear, almost hugging face model. Yeah, you can do that, but why should you try to deploy it yourself, deploy it with us? But in our case, and I'm not sure you're not going to hit also some competitors, and I guess you are. 
I wanted to ask you, for example, on some of them. In our case, one day we looked on one of our competitors that is doing code review. We're a platform. We have the code review, the testing, et cetera, spread over the ID to get. And in each agent, we have a few startups or a big incumbents that are doing only that. So we noticed one of our competitors having not only a very similar UI of our open source, but actually even our typo. And you sit there and you're kind of like, yeah, we're not that good. We don't use enough Grammarly or whatever. And we had a couple of these and we saw it there. And then it's a challenge. And I want to ask you, Bald is doing so well, and then you open source it. So I think I know what my answer was. I gave it before, but still interestingEric [00:47:29]: to hear what you think. GeoHot said back, I don't know who he was up to at this exact moment, but I think on comma AI, all that stuff's open source. And someone had asked him, why is this open source? And he's like, if you're not actually confident that you can go and crush it and build the best thing, then yeah, you should probably keep your stuff closed source. He said something akin to that. I'm probably kind of butchering it, but I thought it was kind of a really good point. And that's not to say that you should just open source everything, because for obvious reasons, there's kind of strategic things you have to kind of take in mind. But I actually think a pretty liberal approach, as liberal as you kind of can be, it can really make a lot of sense. Because that is so validating that one of your competitors is taking your stuff and they're like, yeah, let's just kind of tweak the styles. I mean, clearly, right? I think it's kind of healthy because it keeps, I'm sure back at HQ that day when you saw that, you're like, oh, all right, well, we have to grind even harder to make sure we stay ahead. And so I think it's actually a very useful, motivating thing for the teams. Because you might feel this period of comfort. I think a lot of companies will have this period of comfort where they're not feeling the competition and one day they get disrupted. So kind of putting stuff out there and letting people push it forces you to face reality soon, right? And actually feel that incrementally so you can kind of adjust course. And that's for us, the open source version of Bolt has had a lot of features people have been begging us for, like persisting chat messages and checkpoints and stuff. Within the first week, that stuff was landed in the open source versions. And they're like, why can't you ship this? It's in the open, so people have forked it. And we're like, we're trying to keep our servers and GPUs online. But it's been great because the folks in the community did a great job, kept us on our toes. And we've got to know most of these folks too at this point that have been building these things. And so it actually was very instructive. Like, okay, well, if we're going to go kind of land this, there's some UX patterns we can kind of look at and the code is open source to this stuff. What's great about these, what's not. So anyways, NetNet, I think it's awesome. I think from a competitive point of view for us, I think in particular, what's interesting is the core technology of WebContainer going. And I think that right now, there's really nothing that's kind of on par with that. And we also, we have a business of, because WebContainer runs in your browser, but to make it work, you have to install stuff from NPM. 
You have to make cores bypass requests, like connected databases, which all require server-side proxying or acceleration. And so we actually sell WebContainer as a service. One of the core reasons we open-sourced kind of the core components of Bolt when we launched was that we think that there's going to be a lot more of these AI, in-your-browser AI co-gen experiences, kind of like what Anthropic did with Artifacts and Clod. By the way, Artifacts uses WebContainers. Not yet. No, yeah. Should I strike that? I think that they've got their own thing at the moment, but there's been a lot of interest in WebContainers from folks doing things in that sort of realm and in the AI labs and startups and everything in between. So I think there'll be, I imagine, over the coming months, there'll be lots of things being announced to folks kind of adopting it. But yeah, I think effectively...Swyx [00:50:35]: Okay, I'll say this. If you're a large model lab and you want to build sandbox environments inside of your chat app, you should call Eric.Itamar [00:50:43]: But wait, wait, wait, wait, wait, wait. I have a question about that. I think OpenAI, they felt that people are not using their model as they would want to. So they built ChatGPT. But I would say that ChatGPT now defines OpenAI. I know they're doing a lot of business from their APIs, but still, is this how you think? Isn't Bolt.new your business now? Why don't you focus on that instead of the...Swyx [00:51:16]: What's your advice as a founder?Eric [00:51:18]: You're right. And so going into it, we, candidly, we were like, Bolt.new, this thing is super cool. We think people are stoked. We think people will be stoked. But we were like, maybe that's allowed. Best case scenario, after month one, we'd be mind blown if we added a couple hundred K of error or something. And we were like, but we think there's probably going to be an immediate huge business. Because there was some early poll on folks wanting to put WebContainer into their product offerings, kind of similar to what Bolt is doing or whatever. We were actually prepared for the inverse outcome here. But I mean, well, I guess we've seen poll on both. But I mean, what's happened with Bolt, and you're right, it's actually the same strategy as like OpenAI or Anthropic, where we have our ChatGPT to OpenAI's APIs is Bolt to WebContainer. And so we've kind of taken that same approach. And we're seeing, I guess, some of the similar results, except right now, the revenue side is extremely lopsided to Bolt.Itamar [00:52:16]: I think if you ask me what's my advice, I think you have three options. One is to focus on Bolt. The other is to focus on the WebContainer. The third is to raise one billion dollars and do them both. I'm serious. I think otherwise, you need to choose. And if you raise enough money, and I think it's big bucks, because you're going to be chased by competitors. And I think it will be challenging to do both. And maybe you can. I don't know. We do see these numbers right now, raising above $100 million, even without havingEric [00:52:49]: a product. You can see these. It's excellent advice. And I think what's been amazing, but also kind of challenging is we're trying to forecast, okay, well, where are these things going? I mean, in the initial weeks, I think us and all the investors in the company that we're sharing this with, it was like, this is cool. Okay, we added 500k. Wow, that's crazy. Wow, we're at a million now. 
Most things, you have this kind of the tech crunch launch of initiation and then the thing of sorrow. And if there's going to be a downtrend, it's just not coming yet. Now that we're kind of looking ahead, we're six weeks in. So now we're getting enough confidence in our convictions to go, okay, this se

The Sprues and Brews Warhammer Podcast
Firebase: Dark Fantastic Mills

The Sprues and Brews Warhammer Podcast

Play Episode Listen Later Nov 20, 2024 89:54


Matt, Jay and Andy are back with special guests Gary and Steve from Dark Fantastic Mills! Those scenery wizards have joined the guys to chat about their latest Kickstarter Campaign which is running until Friday 22nd November (so get backing ASAP!). You can find the link to the project below: https://www.kickstarter.com/projects/darkfantasticmills/firebase-trench-zone Keeping it terrain, the gents also chat about their Top 3 terrain pieces, as well as reading out the community choices. Hobby updates and the latest Warhammer news return, as well as Matt's second go hosting our 'Guess Sprue' game. Enjoy! Sprues & Brews Music Created by Dave Sheard Website: DaveSheard.com Twitter: twitter.com/dave_sheard sci fi atmosphere ph Sny 1b14 9.wav by ERH — https://freesound.org/s/163532/ — License: Attribution NonCommercial 4.0

The Next Wave - Your Chief A.I. Officer
Build An App with a Backend Using Ai in 20 min (Cursor Ai, Replit, Firebase, Wispr Flow)

The Next Wave - Your Chief A.I. Officer

Play Episode Listen Later Nov 12, 2024 39:34


Episode 32: How can you build an app with a backend using AI in just 20 minutes? Matt Wolfe (https://x.com/mreflow) and Nathan Lands (https://x.com/NathanLands) sit down with AI enthusiast Riley Brown (https://x.com/rileybrown_ai) to explore this exciting and challenging process. In this episode, Riley brings his unique perspective and experience, from a non-coder to a developer leveraging AI tools. The discussion covers Riley's journey, the tools he recommends for beginners, like Cursor and Replit, and the integration with Firebase for seamless app development. They venture into creating a simple web app, discuss the evolution of app capabilities, and contemplate innovative features and platforms driven by AI. Whether you're a novice or an experienced developer, this episode offers a wealth of insights and practical advice. Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd — Show Notes: (00:00) Riley Brown shares app-building methods, templates. (04:15) Using Claude artifacts for code generation amazed me. (08:35) Start with Cursor, avoid multiple tool distractions. (09:34) Codebase setup using SSH for syncing changes. (12:55) AI integrates and updates code in steps. (17:49) App to log and track AI skill development. (20:04) Tools: Cursor, Firebase, Replit for project management. (25:12) Discusses free use of Replit, Firebase, Cursor. (27:32) App for threading voice notes and AI formatting. (30:58) Appreciating design effort; seeking AI improvement. (33:31) Building community to create apps efficiently. (35:20) Follow Riley Brown on X, subscribe YouTube. — Mentions: Riley Brown: https://community.softwarecomposer.com/c/templates/replit-templates https://replit.com/@an732001/Riley-and-Ansh-Full-Stack-Nextjs-Template-version-1?v=1#README.md Software Composer: https://www.softwarecomposer.com/ Cursor: https://www.cursor.so/ Replit: https://replit.com/ Firebase: https://firebase.google.com/ Midjourney: https://www.midjourney.com/ Claude: https://www.anthropic.com/index/claude Wispr Flow: https://www.flowvoice.ai/ — Check Out Matt's Stuff: • Future Tools - https://futuretools.beehiiv.com/ • Blog - https://www.mattwolfe.com/ • YouTube- https://www.youtube.com/@mreflow — Check Out Nathan's Stuff: Newsletter: https://news.lore.com/ Blog - https://lore.com/ The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano
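Since the episode centers on pairing AI-generated code (via Cursor and Replit) with a Firebase backend, a minimal version of that wiring with the Firebase web SDK looks roughly like the snippet below; the config values and the "notes" collection are placeholders, not anything taken from the episode.

```typescript
// Minimal Firebase web setup: initialize the app and write one document.
// The config values and collection name are placeholders.
import { initializeApp } from "firebase/app";
import { getFirestore, collection, addDoc } from "firebase/firestore";

const app = initializeApp({
  apiKey: "YOUR_API_KEY",
  authDomain: "your-project.firebaseapp.com",
  projectId: "your-project",
});

const db = getFirestore(app);

export async function saveNote(text: string) {
  // Adds a document to the "notes" collection with a server-generated ID.
  const ref = await addDoc(collection(db, "notes"), {
    text,
    createdAt: Date.now(),
  });
  return ref.id;
}
```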

The Next Wave - Your Chief A.I. Officer
How To Build Your First AI Business From Scratch (Steps & Tools)

The Next Wave - Your Chief A.I. Officer

Play Episode Listen Later Oct 22, 2024 43:56


Episode 29: Are you ready to dive into the world of AI-driven newsletter creation and content strategy? Matt Wolfe (https://x.com/mreflow) and Nathan Lands (https://x.com/NathanLands) discuss the tools, techniques, and insider secrets to building a successful AI-powered business from scratch. In this episode, they explore AI's role in streamlining newsletter creation, bundling media properties for better monetization, and maintaining the crucial human touch for quality and engagement. Plus, they share their personal experiences and strategies for leveraging AI tools like Perplexity, Claude, Mixo, and many more to validate business ideas and enhance content production. Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd — Show Notes: (00:00) Leverage AI tools, don't start AI businesses. (03:43) Mixo creates instant landing pages using prompts. (09:23) Firebase, Replit, and AI simplify business startup. (12:54) Amazon sellers use sentiment analysis to improve products. (14:14) Focus on human-centric content creation amid AI. (18:17) Mix of memorization and automated video editing. (22:12) AI-generated avatars as news anchors increasing. (24:48) AI simplifies newsletter creation with writing, editing. (28:50) Newsletters need unique voices for long-term success. (32:01) Timing was crucial for YouTube success. (34:49) Automated tool summary solution using Perplexity. (38:56) Covered strategies, tools, and ideas for businesses. — Mentions: Mixo: https://www.mixo.io/ Perplexity: https://www.perplexity.ai/ Claude: https://claude.ai/ Futureloop: https://www.diamandis.com/futureloop Timebolt: https://www.timebolt.io/ Firebase: https://firebase.google.com/ Cursor: https://www.cursor.com/ Webflow: https://webflow.com/ — Check Out Matt's Stuff: • Future Tools - https://futuretools.beehiiv.com/ • Blog - https://www.mattwolfe.com/ • YouTube- https://www.youtube.com/@mreflow — Check Out Nathan's Stuff: Newsletter: https://news.lore.com/ Blog - https://lore.com/ The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano

Junkfood Cinema
The Siege of Firebase Gloria

Junkfood Cinema

Play Episode Listen Later Oct 1, 2024 77:34


The grand finale of BTSeptember finds Brian & Cargill on a tour of duty through hell as they try and survive The Siege of Firebase Gloria. Support us on Patreon! 

SCRIPTease
085 | Talentpilot – Tomáš Zrubecký, CEO

SCRIPTease

Play Episode Listen Later Aug 1, 2024 61:28


Tom decided to bring artificial intelligence to the world of recruitment and HR management. He founded Talentpilot, began collaborating with OpenAI, the Faculty of Arts at Charles University (FF UK), and the Czech Academy of Sciences, and built a science- and data-driven model. Thanks to it, his clients can build better, more compatible, more stable, and more satisfied teams.

Diaspora.nz
S2 | E3 — Paul Copplestone (Co-founder & CEO at Supabase) on why open source is an unfair advantage; raising $116M to build the tech stack for AI startups/indie developers; living in Southeast Asia.

Diaspora.nz

Play Episode Listen Later Jul 18, 2024 32:55


Paul Copplestone is the co-founder and CEO at Supabase, the “open source Firebase alternative for building web and mobile apps.” If you were wondering what that means or why you should prioritise a listen— in Paul's words: “if you're going to build your next startup, you'd probably choose us … and we'll provide all the tools you need to get started: a Postgres database, authentication system, file storage… the works.”Today, Supabase is prolific. One of the most commonly called out products by makers on Product Hunt; one of the most redeemed Y Combinator “perks” with nearly a third of the most recent YC batch using it; they've secured a place as back-end infrastructure of choice for many founders setting out to build AI-centric applications.Paul has come a long way from his family farm near Kaikoura. Before Supabase, he co-founded South East Asian-based home-services startups ServisHero and Nimbus For Work, and participated in Entrepreneur First, Singapore. Today, he & co-founder Ant have raised $116M and lead a globally distributed company with 80 employees over 30+ countries. With their ambition and vision, it's clear they're just getting started.In today's episode, we discuss:* Paul's journey from NZ to Malaysia and now Singapore.* Building and scaling Supabase as a globally distributed team.* The impact of AI on software development, and what to use if you're getting started today.* Underrated benefits of open source for recruiting, growth, and how to think about product development.Links:Supabase:* Supabase website: https://supabase.com/* Supabase on Twitter/X (great follow) https://x.com/supabase* $80M Series B announcement: https://techcrunch.com/2022/05/10/supabase-raises-80m-series-b-for-its-open-source-firebase-alternative/Paul:* LinkedIn: https://www.linkedin.com/in/paulcopplestone/* Twitter/X: https://x.com/kiwicopple* Blog: https://paul.copplest.one/blog/ (so much good stuff in here)* GitHub: https://github.com/kiwicoppleTimestamps:(00:00) Intro(01:18) Paul's origin story(04:05) Founding ServisHero in Malaysia(06:23) Joining the Entrepreneur First program(08:58) Why founders should think about setting up their HQ in Singapore(09:57) Founding Supabase(11:36) The benefits of open source(13:43) Insight into Supabase customers and how AI is changing the game(15:33) How open source helps with recruiting(18:18) Taking a “product led growth” approach to enterprise customers.(21:12) When/how Supabase will “cross the chasm” as it matures into a enterprise customer base.(22:19) How AI is changing devtools(24:09) Paul's angel investing thesis(26:33) Thinking about companies like countries(28:59) Paul's favourite blog posts from his personal archive(31:06) How we can be helpful to Paul!Subscribe at diaspora.nz to receive new episodes every Friday! This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.diaspora.nz
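As a companion to the quote above, here is a minimal sketch (not taken from the episode) of the pieces Paul lists (a Postgres database, authentication, file storage), using the supabase-js v2 client; the project URL, anon key, table name ("todos") and bucket name ("avatars") are placeholder assumptions.

  import { createClient } from "@supabase/supabase-js";

  // Placeholder project URL and anon key; real values come from your Supabase dashboard.
  const supabase = createClient("https://your-project.supabase.co", "public-anon-key");

  async function demo() {
    // Authentication: sign a user up with email and password.
    await supabase.auth.signUp({ email: "me@example.com", password: "a-long-passphrase" });

    // Postgres database: query a table through the auto-generated API.
    const { data: todos, error } = await supabase.from("todos").select("*").limit(10);
    if (error) throw error;

    // File storage: upload a file into a bucket.
    await supabase.storage.from("avatars").upload("me.png", new Blob(["hello"]));

    return todos;
  }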

Purrfect.dev
4.16 - Google I/O Flutter Updates and Firebase

Purrfect.dev

Play Episode Listen Later Jul 5, 2024 49:20


Catch the latest updates on Flutter and Firebase from Google I/O 2024. Hear Roman Jacquez's insights and explore the exciting new features. Share your thoughts! https://codingcat.dev/podcast/google-i-o-flutter-updates-and-firebase 00:00 Introduction 00:18 Meet Roman 00:34 Google I/O Highlights 01:13 Flutter Moments 01:35 Roman's Background 02:25 Career Journey 05:55 Philips Health System 10:49 Fl |wb 12:25 What's New in Flutter 20:46 Firebase Data Connect 27:59 Firebase Features 36:17 Multimodal Integration 41:32 Flame 2D Engine 47:31 Conclusion --- Support this podcast: https://podcasters.spotify.com/pod/show/codingcatdev/support

Purrfect.dev
4.15 - Using Firebase with Communication APIs

Purrfect.dev

Play Episode Listen Later Jul 3, 2024 59:48


Jump into "Using Firebase with Communication APIs" with Amanda and learn how to leverage Firebase and Vonage APIs for building secure, interactive web experiences. Comment and share! https://codingcat.dev/podcast/using-firebase-with-communication-apis 00:00 Introduction 00:20 Meet Amanda 01:48 Amanda's Background 04:10 Dual Language Education 05:16 In-Person Meetups 10:28 Developer Advocacy 16:01 Firebase & Vonage Demo 27:40 Setting up Firebase 34:52 Deploying Functions 46:39 Additional Resources 58:57 Conclusion --- Send in a voice message: https://podcasters.spotify.com/pod/show/codingcatdev/message Support this podcast: https://podcasters.spotify.com/pod/show/codingcatdev/support

Purrfect.dev
4.14 - What's A Firebase Developer Advocate

Purrfect.dev

Play Episode Listen Later Jul 1, 2024 33:19


https://codingcat.dev/podcast/what-is-a-firebase-developer-advocate --- Send in a voice message: https://podcasters.spotify.com/pod/show/codingcatdev/message Support this podcast: https://podcasters.spotify.com/pod/show/codingcatdev/support

Syntax - Tasty Web Development Treats
788: Supabase: Open Source Firebase for Fullstack JS Apps

Syntax - Tasty Web Development Treats

Play Episode Listen Later Jun 28, 2024 53:45


Scott and CJ chat with Paul Copplestone, CEO and co-founder of Supabase, about the journey of building an open source alternative to Firebase. Learn about the tech stack, the story behind their excellent documentation, and how Supabase balances business goals with open-source values. Show Notes 00:00 Welcome to Syntax! 00:30 Who is Paul Copplestone? 01:17 Why ‘Supa' and not ‘Super'? 02:26 How did Supabase start? 04:29 How long from inception to joining Y Combinator? 05:10 Was it always intended to be open source? Why Open Source. 07:22 How many users chose to self-host? 07:49 Open source mindset. 08:42 Simplicity in design. 10:32 How do you take Supabase one step beyond the competition? 12:35 How do you decide which libraries are officially supported vs community maintained? 15:17 You don't need a client library! 16:48 Edge functions for server-side functionality. 18:51 The genesis of pgvector. 20:59 The product strategy. 22:25 What's the story behind Supabase's awesome docs? 25:26 The tech behind Supabase. 25:39 What is the UI built on? 27:33 Consolidation follows kaizen. 28:54 What else is involved in the stack? 31:47 Authentication. 32:35 Storage engine. 33:13 For self-hosting. 35:46 How do you balance business goals with open source? 42:01 What's next for Supabase? 44:15 Supabase's GA + new features. Top 10 Launches from Supabase GA Week. 48:24 Who runs the X account? 50:39 Sick Picks + Shameless Plugs. Sick Picks Paul: Apple Vision Pro. Shameless Plugs Paul: PostgreSQL. Hit us up on Socials! Syntax: X Instagram Tiktok LinkedIn Threads Wes: X Instagram Tiktok LinkedIn Threads Scott: X Instagram Tiktok LinkedIn Threads CJ: X Instagram YouTube TwitchTV Randy: X Instagram YouTube Threads

Purrfect.dev
4.13 - Firebase Security Rules: Effortless control over your app's data.

Purrfect.dev

Play Episode Listen Later Jun 3, 2024 52:46


Firebase Security Rules are a powerful feature that allows you to control access to your app's data in Firebase Realtime Database, Cloud Firestore, and Cloud Storage. --- Send in a voice message: https://podcasters.spotify.com/pod/show/codingcatdev/message Support this podcast: https://podcasters.spotify.com/pod/show/codingcatdev/support
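As a rough illustration of what that control looks like in practice (an assumption-laden sketch, not taken from the episode): a common pattern is a per-user rule in Cloud Firestore, paired with client code that only succeeds when the rule's condition holds. The "users" collection and the config values below are placeholders.

  // A typical per-user Firestore rule (shown here as a comment) might read:
  //   match /users/{userId} {
  //     allow read, write: if request.auth != null && request.auth.uid == userId;
  //   }
  // The client call below only succeeds because the document id matches the signed-in uid.
  import { initializeApp } from "firebase/app";
  import { getAuth, signInAnonymously } from "firebase/auth";
  import { getFirestore, doc, setDoc } from "firebase/firestore";

  const app = initializeApp({ apiKey: "placeholder", projectId: "demo-project" });
  const auth = getAuth(app);
  const db = getFirestore(app);

  async function saveProfile(displayName: string) {
    const { user } = await signInAnonymously(auth); // the rule requires an authenticated user
    await setDoc(doc(db, "users", user.uid), { displayName }); // a path owned by that user
  }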

Les Cast Codeurs Podcast
LCC 312 - Dans la ferme de Mathurin IA IA IO !

Les Cast Codeurs Podcast

Play Episode Listen Later May 21, 2024 113:38


In this long… episode, Emmanuel, Guillaume and Arnaud discuss the news: Chicory (a WASM runtime in Java), Jakarta Data, Quarkus 3.10, Spring AI, Hibernate 6.5, but also a few back-to-basics topics (timezones, rate limiting, …). Big focus on the announcements from Google I/O 2024 and in the AI ecosystem in general, with news from OpenAI, Claude, Grok and others. Various tools are also covered, such as Git, IntelliJ, ASDF, BLD, S3. And finally, topics on Keycloak high availability, reindexing without downtime, the challenges of alternative implementations, vigilant mode in GitHub, Redis and its license changes, and Microsoft's and AWS's investments in France as part of the #ChooseFrance program. Feel free to submit your questions at https://lescastcodeurs.com/ama and we will answer them in upcoming episodes. Recorded May 17, 2024. Episode download: LesCastCodeurs-Episode-312.mp3 News Languages A WASM runtime in Java https://github.com/dylibso/chicory A brand-new project, still far from maturity, but interesting to follow for running WebAssembly code inside a Java application; that said, the project isn't just two weeks old either :) Running WASM plugins in the JVM (e.g. plugins) You can take heap dumps on OutOfMemoryException in native compilation https://quarkus.io/blog/heapdump-oome-native/ since JDK 21, with an example using Quarkus and the Epsilon GC 100 exercises to get started with Rust https://rust-exercises.com/ Libraries Hibernate 6.5 is out https://in.relation.to/2024/04/25/orm-650/ full caching for entities and their collections (the default is shallow) Java records for @IdClass Filters can be auto-enabled by default (instead of enabling them on each session); filters are handy for things like soft deletes Key-based pagination to avoid gaps in results when entities are modified in parallel with a paginated search. 
It relies on a unique, ordered key such as an ISBN. A tech preview of Jakarta Data. Speaking of Jakarta Data, two articles on the subject https://in.relation.to/2024/04/01/jakarta-data-1/ https://in.relation.to/2024/04/18/jakarta-data-1/ a repository concept tied not to an entity but to a logical relationship between queries; it interacts via a stateless session and is a CDI bean; code is generated, of course; 4 CRUD operators plus queries; save is an upsert; type-safe in the sense that method names do not carry the search logic: annotations and parameter names do, and it is type safe via an annotation processor, or a string in @Query which is also type safe via the processor; further discussion of type safety and pagination. Quarkus 3.10 with a few new features https://quarkus.io/blog/quarkus-3-10-0-released/ Flyway 10 arrives with native support; Hibernate Search supports the standalone POJO mapper, notably for Elasticsearch (not just ORM); modified quarkus.package properties are automatically replaced by quarkus update. And Quarkus 3.9 did its big reactive renaming https://quarkus.io/blog/quarkus-3-9-1-released/ clarifying that reactive extensions do not impose reactive APIs: only their core is implemented reactively, or they optionally offer reactive APIs. People wrongly thought that reactive extensions imposed the programming model; here again, quarkus update to the rescue. An article on the structured output API for Spring AI https://spring.io/blog/2024/05/09/spring-ai-structured-output a descriptive article on when this API is used and the details of its usage. How to set a TimeZone in Spring Boot and what it impacts in terms of components https://www.baeldung.com/spring-boot-set-default-timezone basic but always useful; task or app, programmatically, on certain Spring lifecycles. Infrastructure An article and the Devoxx France video on Keycloak high availability and how it is implemented https://www.keycloak.org/2024/05/keycloak-at-devoxx-france-2024-recap identity infrastructure is key infrastructure, so handling high availability is critical; the article points to a Devoxx France video and the Keycloak docs on how it is all implemented. Cloud How to ruin yourself with S3 buckets https://medium.com/@maciej.pocwierz/how-an-empty-s3-bucket-can-make-your-aws-bill-explode-934a383cb8b1 Amazon charges for unauthorized requests; knowing a bucket's name is enough to make its owner pay; Amazon is working on providing a solution / fix. 
He stumbled by chance onto a bucket name used "for pretend" by a popular open source tool. Adding a suffix to your buckets can reduce the risk, but not eliminate it; a fix has since been shipped by Amazon https://aws.amazon.com/about-aws/whats-new/2024/05/amazon-s3-no-charge-http-error-codes/ Data and Artificial Intelligence Guillaume sums up Google I/O https://x.com/techcrunch/status/1790504691945898300?s=61&t=WImtt07yTQMhhoNPN6lYEw AI Overviews: no more need to visit the sites. Google I/O 2024 Google I/O 2024 summarized in a 10-minute video https://www.youtube.com/watch?v=WsEQjeZoEng and in 100 bullet points https://blog.google/technology/ai/google-io-2024-100-announcements/ Message from Sundar Pichai https://blog.google/inside-google/message-ceo/google-io-2024-keynote-sundar-pichai/#creating-the-future Project Astra, a universal assistant on your smartphone that you can have a normal conversation with and show your surroundings to through the camera https://www.theverge.com/2024/5/14/24156296/google-ai-gemini-astra-assistant-live-io New Gemini 1.5 Flash model, almost as capable as the new Gemini 1.5 Pro but much faster (first tokens within a second) and also cheaper https://blog.google/technology/developers/gemini-gemma-developer-updates-may-2024/ Gemini 1.5 Pro and Gemini 1.5 Flash are available with a one-million-token context window, but there is a waiting list to test a 2-million-token window https://aistudio.google.com/app/waitlist/97595554 https://cloud.google.com/earlyaccess/cloud-ai?e=48754805&hl=en PaliGemma, a new open vision model in the Gemma family (for Q&A and captioning), and a preview of Gemma 2, with a 27-billion-parameter version https://developers.googleblog.com/en/gemma-family-and-toolkit-expansion-io-2024/ Gemini available in IDEs: Android Studio, IDX, Firebase, Colab, VSCode, Cloud and IntelliJ. Gemini AI Studio finally available in Europe. Gemini supports parallel function calling and frame extraction from videos. Trillium, the 6th generation of TPUs (Tensor Processing Units), the ML-specific processors in Google Cloud, 5 times more powerful than the previous generation and 67% more energy efficient https://cloud.google.com/blog/products/compute/introducing-trillium-6th-gen-tpus The NotebookLM project adds an Audio Overview feature that lets you discuss your corpus of documents through a voice conversation https://notebooklm.google.com/ You can apply "grounding" with Google Search to the Gemini API, so that the Gemini model can look up additional information in Google Search https://cloud.google.com/blog/products/ai-machine-learning/vertex-ai-io-announcements Announcement of Imagen 3, the upcoming version of the Imagen image generation model, which improves quality and has very good support for text inside images (availability targeted for the summer) https://blog.google/technology/ai/google-generative-ai-veo-imagen-3/#Imagen-3 https://deepmind.google/technologies/imagen-3/ DeepMind announces Veo, a very convincing new video generation model that can produce 60-second 1080p videos, and by combining several successive prompts it can generate longer videos that chain together https://deepmind.google/technologies/veo/ VideoFX, ImageFX and MusicFX, Google AI experiments integrating Imagen 3 and Veo (not yet available in Europe) 
https://blog.google/technology/ai/google-labs-video-fx-generative-ai/ Gemini Advanced https://blog.google/products/gemini/google-gemini-update-may-2024/#context-window Users of Gemini Advanced (the web application) get Gemini 1.5 Pro with the 1-million-token context window, the ability to load documents from Google Drive, and soon the ability to generate charts. Gemini Advanced also adds the ability to generate travel itineraries (with Google Flights integration, etc.). Gemini Live feature for having a natural voice conversation with Gemini https://blog.google/products/gemini/google-gemini-update-may-2024/#gemini-live Gems: plugins for Gemini Advanced for creating your own personalized assistants https://blog.google/products/gemini/google-gemini-update-may-2024/#personalize-gems Ask Photos: you can ask Google Photos more complex questions such as "what is my license plate number", and Photos works out which of all the car photos is most likely yours and extracts the plate number https://blog.google/products/photos/ask-photos-google-io-2024/ Even in Google Messages you will be able to chat with Gemini. Google Search https://blog.google/products/search/generative-ai-google-search-may-2024/ Addition of a built-in search-specific Gemini model that lets Google Search answer questions from the search bar with multi-step reasoning, capable of planning, in multimodal mode (text, image, video, audio). Meal and travel planning, supported in Gemini, will also come to Search. Gemini 1.5 Pro is available in the side panel of Gmail, Docs, Sheets, Drive https://blog.google/products/workspace/google-gemini-workspace-may-2024-updates/ SynthID will even work for text https://deepmind.google/discover/blog/watermarking-ai-generated-text-and-video-with-synthid/ Gemini Nano soon available in upcoming versions of Chrome, to use the LLM directly in the browser. Android Second beta of Android 15 https://android-developers.googleblog.com/2024/05/the-second-beta-of-android-15.html Private Space to keep apps secure with an extra level of authentication. Google is collaborating with Samsung and Qualcomm on augmented reality in Android https://developers.googleblog.com/en/google-ar-at-io-2024-new-geospatial-ar-features-and-more/ Project Gameface comes to Android (to control Android with your eyes and facial expressions, for accessibility) https://developers.googleblog.com/en/project-gameface-launches-on-android/ Gemini Nano will go multimodal, not just text. Circle to Search extended to 100 million additional phones supporting Nano and will allow asking questions, for example to help kids with homework https://blog.google/products/android/google-ai-android-update-io-2024/#circle-to-search Detect phone scams on device with Gemini Nano. Talkback, the accessibility application in Android, will take advantage of Gemini Nano's multimodality. Coming soon: image generation that you can embed in your emails and messages. Wear OS https://android-developers.googleblog.com/2024/05/whats-new-in-wear-os-io-24.html Work on energy savings to make watches last longer before the next recharge. For example, 20% less consumption when running a marathon! 
More data types for physical activities. Project IDX accessible without a waiting list https://developers.googleblog.com/en/start-building-with-project-idx-today/ Firebase announces 3 new products https://developers.googleblog.com/en/whats-new-in-firebase-io-24/ Data Connect, a backend-as-a-service with PostgreSQL https://firebase.google.com/products/data-connect App Hosting, hosting for Next and Angular applications https://firebase.google.com/products/app-hosting Genkit, a GenAI framework for app developers https://firebase.google.com/products/genkit Dart 3.4 with support for Wasm as a compilation target https://medium.com/dartlang/dart-3-4-bd8d23b4462a OpenAI launches its new model: gpt-4o http://openai.com/index/hello-gpt-4o/ https://x.com/openaidevs/status/1790083108831899854?s=46&t=GLj1NFxZoCFCjw2oYpiJpw Audio, vision and text recognition in real time; faster and 50% cheaper than its predecessor 4-turbo. https://claude.ai/ is available in Europe. Claude, the model created by Anthropic: Claude is an AI assistant based on a large language model trained according to strict ethical principles. It places great importance on honesty, impartiality and respect for human beings. Its reasoning rests on a deep understanding of concepts rather than on simple statistical associations. It actively seeks to correct possible biases or errors. Claude is versatile and can adapt to different communication styles and levels of complexity depending on the context. It masters many academic and scientific fields. It is capable of introspection about its own thought processes and limitations. Privacy and confidentiality are priorities for it. Claude keeps learning and improving through interactions with humans. Its goal is to be a reliable, ethical and benevolent assistant. Does anyone know how they make it reason rather than just being a statistical LLM? How do they prove that? Is it separate code? Grok, the X/Twitter/Musk model, is also available in Europe https://x.com/x/status/1790917272355172401?s=46&t=GLj1NFxZoCFCjw2oYpiJpw one unique thing is that it uses tweets as references for what it says. For example, ask for the best Java Champions and it is based on recent tweets, probably some kind of RAG or some kind of fine tuning on the latest tweets, I don't know. The algorithm behind diffusion models, explained https://x.com/emmanuelbernard/status/1787565568020619650 two articles, one general and readable, the other more abstruse but with some interesting details on downsizing. Steps: add noise to images (learning), then apply the opposite process, the reverse diffusion process. You predict the noise to remove, remove it, and repeat the process. And all of this is influenced by the prompt. 
Reindexing the Quarkus documentation data without downtime, in Quarkus of course https://quarkus.io/blog/search-indexing-rollover/ uses Hibernate Search with Elasticsearch / OpenSearch; the article explains one of the approaches for reindexing without downtime, via an index alias. Tooling An article about the little-known build tool bld, which lets you write your builds simply in a Java class https://sombriks.com/blog/0070-build-with-bld-and-why-it-matters/ IntelliJ 2024.1 is out https://blog.jetbrains.com/idea/2024/05/what-s-new-in-intellij-idea-ultimate-2024-1/ full-line completion (deep learning), improved AI Assistant, improved Spring Boot support for bean completion and diagram generation, simplified dev containers support, improved Quarkus support with notably a Dev UI icon and test configuration, OpenRewrite support, WireMock server, and plenty of other things. In public beta, Homebrew can verify the provenance of packages (bottles) https://blog.trailofbits.com/2024/05/14/a-peek-into-build-provenance-for-homebrew/ Based on sigstore's "build provenance" system https://docs.sigstore.dev/verifying/attestation/#validate-in-toto-attestations which relies on in-toto attestations https://in-toto.io/ Update git to version 2.45.1 to fix security vulnerabilities https://github.blog/2024-05-14-securing-git-addressing-5-new-vulnerabilities/ CVE-2024-32002 (Critical, Windows & macOS): Git repos with submodules can trick Git into executing a hook (script element) from the .git/ directory during a clone operation, allowing Remote Code Execution. CVE-2024-32004 (Important, multi-user machines): An attacker can craft a local repo that executes arbitrary code when cloned. CVE-2024-32465 (Important, all configurations): Cloning from .zip files containing Git repos can bypass protections and potentially execute malicious hooks. CVE-2024-32020 (Low, multi-user machines): Local clones on the same disk can allow untrusted users to modify hard-linked files in the cloned repo's object database. CVE-2024-32021 (Low, multi-user machines): Cloning a local repo with symlinks can lead to the creation of hard links to arbitrary files in the objects/ directory. 
Architecture Visualizing rate limiting algorithms https://smudge.ai/blog/ratelimit-algorithms Methodologies The alternative implementation problem https://pointersgonewild.com/2024/04/20/the-alternative-implementation-problem/ An article by a developer who has built Just-in-Time compilers for several languages. He noticed that developing an alternative implementation of a language (for example) has never really met with success; people prefer the original to an alternative that depends on, and struggles to keep up with, the original implementation. In his case, for the JIT, he worked on a JIT integrated directly into CRuby (rather than building his own alternative implementation like TruffleRuby), and his JIT is now integrated directly into it. It is easier to join and integrate into the project than to be an alternative that you have to convince people to adopt. Vigilant mode in GitHub https://x.com/emmanuelbernard/status/1790026210619068435 this is the follow-up to the blog post on commit signing I wrote a while back https://emmanuelbernard.com/blog/2023/11/27/git-signing-ssh/ Now GitHub adds more and more information when signatures do not match or are missing. Law, society and organization A perspective on Redis and the license changes, from an AWS OpenSearch devrel https://www.infoworld.com/article/3715247/the-end-of-vendor-backed-open-source.html companies look at the legal impact of source-available licenses even for internal use; it breaks the ecosystem of specializations built on top of the product (logz.io on top of Elastic, started before the license change). Among the top 10 Redis contributors are AWS, Alibaba and Huawei, with only 3 from Redis, so it is not Redis contributing everything. Most Redis Labs engineers do not work on Redis OSS but on cloud and enterprise. Maybe the end of single-vendor OSS; only the cloud providers can provide OSS without affecting their cost structure. The author is a former AWS employee, in fact. 
Now independent. Microsoft will invest 4 billion in France (datacenters and AI) https://news.microsoft.com/fr-fr/2024/05/13/microsoft-announces-the-largest-investment-to-date-in-france-to-accelerate-the-adoption-of-ai-skilling-and-innovation/ They are not the only ones within the #ChooseFrance program https://www.info.gouv.fr/actualite/choose-france-un-record-de-15-milliards-deuros-dinvestissements-etrangers But this does not come without questions about the future of our industry, with the US now offshoring their Silicon Valley https://www.cybernetica.fr/la-france-laboratoire-de-la-silicon-valley-2-0/ Tools of the episode ASDF, a multi-runtime version manager https://asdf-vm.com Arnaud had recommended it, but I was sticking with rvm; after some troubles I switched to asdf, which works, but for the JDK I use SDKMAN, which seems more advanced for Java folks. Conferences The Devoxx France videos are online https://www.youtube.com/playlist?list=PLTbQvx84FrARars1vXos7mlPdvYJmsEoK The list of conferences comes from Developers Conferences Agenda/List by Aurélie Vache and contributors: 16-17 May 2024: Newcrafts Paris - Paris (France) 22 May 2024: OpenInfra Day France - Palaiseau (France) 22-25 May 2024: Viva Tech - Paris (France) 24 May 2024: AFUP Day Nancy - Nancy (France) 24 May 2024: AFUP Day Poitiers - Poitiers (France) 24 May 2024: AFUP Day Lille - Lille (France) 24 May 2024: AFUP Day Lyon - Lyon (France) 28-29 May 2024: Symfony Live Paris - Paris (France) 1 June 2024: PolyCloud - Montpellier (France) 6 June 2024: WAX 2024 - Aix-en-Provence (France) 6-7 June 2024: DevFest Lille - Lille (France) 6-7 June 2024: Alpes Craft - Grenoble (France) 7 June 2024: Fork it! Community - Rouen (France) 11 June 2024: Cloud Toulouse - Toulouse (France) 11-12 June 2024: OW2con - Paris (France) 11-12 June 2024: PGDay Lille - Lille (France) 12-14 June 2024: Rencontres R - Vannes (France) 13-14 June 2024: Agile Tour Toulouse - Toulouse (France) 14 June 2024: DevQuest - Niort (France) 18 June 2024: Mobilis In Mobile 2024 - Nantes (France) 18 June 2024: BSides Strasbourg 2024 - Strasbourg (France) 18 June 2024: Tech & Wine 2024 - Lyon (France) 19-20 June 2024: AI_dev: Open Source GenAI & ML Summit Europe - Paris (France) 19-21 June 2024: Devoxx Poland - Krakow (Poland) 26-28 June 2024: Breizhcamp 2024 - Rennes (France) 27 June 2024: DotJS - Paris (France) 27-28 June 2024: Agi Lille - Lille (France) 4-5 July 2024: Sunny Tech - Montpellier (France) 8-10 July 2024: Riviera DEV - Sophia Antipolis (France) 6 September 2024: JUG Summer Camp - La Rochelle (France) 6-7 September 2024: Agile Pays Basque - Bidart (France) 17 September 2024: We Love Speed - Nantes (France) 19-20 September 2024: API Platform Conference - Lille (France) & Online 25-26 September 2024: PyData Paris - Paris (France) 26 September 2024: Agile Tour Sophia-Antipolis 2024 - Biot (France) 2-4 October 2024: Devoxx Morocco - Marrakech (Morocco) 7-11 October 2024: Devoxx Belgium - Antwerp (Belgium) 10 October 2024: Cloud Nord - Lille (France) 10-11 October 2024: Volcamp - Clermont-Ferrand (France) 10-11 October 2024: Forum PHP - Marne-la-Vallée (France) 11-12 October 2024: SecSea2k24 - La Ciotat (France) 16 October 2024: DotPy - Paris (France) 17-18 October 2024: DevFest Nantes - Nantes (France) 17-18 October 2024: DotAI - Paris (France) 30-31 October 2024: Agile Tour Nantais 2024 - Nantes (France) 30-31 October 2024: Agile Tour Bordeaux 2024 - Bordeaux 
(France) 31 October 2024-3 November 2024: PyCon.FR - Strasbourg (France) 6 November 2024: Master Dev De France - Paris (France) 7 November 2024: DevFest Toulouse - Toulouse (France) 8 November 2024: BDX I/O - Bordeaux (France) 13-14 November 2024: Agile Tour Rennes 2024 - Rennes (France) 21 November 2024: DevFest Strasbourg - Strasbourg (France) 28 November 2024: Who Run The Tech ? - Rennes (France) 3-5 December 2024: APIdays Paris - Paris (France) 4-5 December 2024: Open Source Experience - Paris (France) 22-25 January 2025: SnowCamp 2025 - Grenoble (France) 16-18 April 2025: Devoxx France - Paris (France) Contact us To react to this episode, come discuss it in the Google group https://groups.google.com/group/lescastcodeurs Reach us on Twitter https://twitter.com/lescastcodeurs Record a crowdcast or submit a crowdquestion Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs All episodes and all the info at https://lescastcodeurs.com/

COMPRESSEDfm
175 | Designing Infrastructure for Product Engineers

COMPRESSEDfm

Play Episode Listen Later May 14, 2024 53:34


In this episode, James Quick and Amy Dutton chat with James Cowling, co-founder of Convex, about designing infrastructure for product engineers. James explains the innovative features of Convex, including its JavaScript-based queries and real-time data subscriptions, and compares it to Firebase. They also discuss the challenges of edge computing, the importance of user state, and the role of AI in modern development.
Show Notes: [00:00:00] - Introduction to the Episode [00:01:00] - James Cowling's Background and Convex Overview (Convex) [00:01:52] - Deep Dive into Convex [00:05:29] - User State and Application Development [00:07:05] - Challenges of Edge Computing [00:09:53] - Automatic Caching and Real-Time Updates [00:13:22] - AI and Backend Integration [00:17:01] - Leveraging AI in Applications [00:21:11] - Convex's Infrastructure and Technology [00:25:28] - Comparisons with Other Platforms [00:30:03] - Server Rendering and Data Storage [00:33:19] - Physical Challenges in Data Centers [00:37:04] - Cost Efficiency and Cloud Platforms [00:40:56] - Final Thoughts on Infrastructure [00:43:30] - Picks and Plugs Introduction [00:44:11] - James Cowling's Picks and Plugs [00:45:41] - Amy Dutton's Picks and Plugs [00:48:28] - James Quick's Picks and Plugs

DevTalles
161- Supabase - Una alternativa a Firebase

DevTalles

Play Episode Listen Later Apr 28, 2024 39:26


In this episode I want us to talk about Supabase and walk through a small exercise involving magic links (authentication via a link sent to your email), profile updates, and file uploads. --- Support this podcast: https://podcasters.spotify.com/pod/show/fernando-her85/support
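To make the exercise concrete, a hedged sketch of the three steps with supabase-js v2 follows (the redirect URL, the "profiles" table and the "avatars" bucket are assumptions for illustration, not details from the episode).

  import { createClient } from "@supabase/supabase-js";

  const supabase = createClient("https://your-project.supabase.co", "public-anon-key");

  async function magicLinkExercise() {
    // 1. Magic link: Supabase emails a one-time sign-in link; no password is involved.
    await supabase.auth.signInWithOtp({
      email: "user@example.com",
      options: { emailRedirectTo: "https://example.com/welcome" },
    });

    // 2. Profile update: once the user has clicked the link and has a session, update their row.
    const { data: { user } } = await supabase.auth.getUser();
    if (user) {
      await supabase.from("profiles").update({ username: "fernando" }).eq("id", user.id);

      // 3. File upload: push an avatar into a storage bucket.
      await supabase.storage.from("avatars").upload(`${user.id}/avatar.png`, new Blob(["..."]), { upsert: true });
    }
  }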

The Mobile User Acquisition Show

In this episode, we delve into the intricacies of Google UAC strategies on iOS post Apple's introduction of App Tracking Transparency (ATT). We love this episode as this is from an ex Googler who gives us the inside track into the dynamics at play within Google that shaped a lot of the changes that happened post ATT. Ashley Black describes details about Google's initial response to ATT, and talks about modeled conversions. We also uncover the challenges and limitations Google faces with iOS ad inventory, particularly the inability to prompt for ATT in browsers and the significant enhancements brought about by SKAN4, including web-to-app tracking. Tune in to gain a deeper understanding of how Google maneuvers through changes like those that happened with ATT on iOS - and, more importantly, how you can adapt to a post-ATT world on iOS with UAC.KEY HIGHLIGHTS

Data Engineering Podcast
Making Email Better With AI At Shortwave

Data Engineering Podcast

Play Episode Listen Later Apr 21, 2024 53:43


Summary Generative AI has rapidly transformed everything in the technology sector. When Andrew Lee started work on Shortwave he was focused on making email more productive. When AI started gaining adoption he realized that he had even more potential for a transformative experience. In this episode he shares the technical challenges that he and his team have overcome in integrating AI into their product, as well as the benefits and features that it provides to their customers. Announcements Hello and welcome to the Data Engineering Podcast, the show about modern data management Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to dataengineeringpodcast.com/dagster (https://www.dataengineeringpodcast.com/dagster) today to get started. Your first 30 days are free! Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst (https://www.dataengineeringpodcast.com/starburst) and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino. Your host is Tobias Macey and today I'm interviewing Andrew Lee about his work on Shortwave, an AI powered email client Interview Introduction How did you get involved in the area of data management? Can you describe what Shortwave is and the story behind it? What is the core problem that you are addressing with Shortwave? Email has been a central part of communication and business productivity for decades now. What are the overall themes that continue to be problematic? What are the strengths that email maintains as a protocol and ecosystem? From a product perspective, what are the data challenges that are posed by email? Can you describe how you have architected the Shortwave platform? How have the design and goals of the product changed since you started it? What are the ways that the advent and evolution of language models have influenced your product roadmap? How do you manage the personalization of the AI functionality in your system for each user/team? For users and teams who are using Shortwave, how does it change their workflow and communication patterns? Can you describe how I would use Shortwave for managing the workflow of evaluating, planning, and promoting my podcast episodes? What are the most interesting, innovative, or unexpected ways that you have seen Shortwave used? 
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Shortwave? When is Shortwave the wrong choice? What do you have planned for the future of Shortwave? Contact Info LinkedIn (https://www.linkedin.com/in/startupandrew/) Blog (https://startupandrew.com/) Parting Question From your perspective, what is the biggest gap in the tooling or technology for data management today? Closing Announcements Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning. Visit the site (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com (mailto:hosts@dataengineeringpodcast.com)) with your story. Links Shortwave (https://www.shortwave.com/) Firebase (https://firebase.google.com/) Google Inbox (https://en.wikipedia.org/wiki/Inbox_by_Gmail) Hey (https://www.hey.com/) Ezra Klein Hey Article (https://www.nytimes.com/2024/04/07/opinion/gmail-email-digital-shame.html) Superhuman (https://superhuman.com/) Pinecone (https://www.pinecone.io/) Podcast Episode (https://www.dataengineeringpodcast.com/pinecone-vector-database-similarity-search-episode-189/) Elastic (https://www.elastic.co/) Hybrid Search (https://weaviate.io/blog/hybrid-search-explained) Semantic Search (https://en.wikipedia.org/wiki/Semantic_search) Mistral (https://mistral.ai/) GPT 3.5 (https://platform.openai.com/docs/models/gpt-3-5-turbo) IMAP (https://en.wikipedia.org/wiki/Internet_Message_Access_Protocol) The intro and outro music is from The Hug (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / CC BY-SA (http://creativecommons.org/licenses/by-sa/3.0/)

App Masters - App Marketing & App Store Optimization with Steve P. Young

Dive into the intricacies of campaign optimization as we explore strategies to attract the most valuable users. Discover how prioritizing high-value users impacts your cost per install (CPI) and unlocks the potential for top-tier paying users. The Setup Phase: Mobilizing Machine Learning with Mobile App Campaigns Learn the cost-effective approach of kickstarting machine learning with mobile app campaigns. Understand the significance of importing events from your mobile measurement partner (MMP) or Firebase into platforms like Google Ads to pinpoint user-triggered events and strategically detect drop-offs. Evolution to App Event Optimization Campaigns Explore the transition to advanced app event optimization campaigns, focusing on specific high-engagement events. Delve into the analysis of campaign performance against profitability goals and the importance of streamlining event selection for clarity and effectiveness. Simplicity is Key: Streamlining Event Selection Uncover the power of simplicity in event selection, with a spotlight on essential events like purchases. Avoid the pitfalls of unnecessary complexity to maintain campaign clarity and enhance profitability. Instagram Ads Effectiveness: Target Audience Insights Get insights into the effectiveness of Instagram ads by considering the age demographics of your target audience. Understand why Facebook might outperform Instagram for an older audience and how static banners influence algorithmic targeting. In Conclusion: Navigating Complexity for Optimal Performance Wrap up the discussion with a holistic view of navigating campaign optimization. From mobile app campaigns to sophisticated event-based strategies, learn how to refine your campaigns and achieve optimal performance. You can also watch the video: https://youtu.be/9G_lK37G6Ys Work with us to grow your apps faster & cheaper: ⁠http://www.appmasters.com/⁠ SPONSORS Tired of overpaying for App Store Optimization? Get unlimited ASO and app marketing support to increase your keyword rankings, downloads, and revenue. Learn more at ⁠ASO Masters⁠. *************** Follow us: YouTube: ⁠AppMasters.com/YouTube⁠ Instagram: ⁠@stevepyoung⁠ Twitter: ⁠@stevepyoung⁠ TikTok: ⁠@stevepyoung⁠ Facebook: ⁠App Masters⁠ *************** --- Send in a voice message: https://podcasters.spotify.com/pod/show/app-marketing-podcast/message
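As a purely illustrative sketch of the "essential events like purchases" point (not taken from the episode; the config values and item fields are made up), this is how a web app might log a purchase with the Firebase SDK so the event can later be imported into Google Ads as a conversion:

  import { initializeApp } from "firebase/app";
  import { getAnalytics, logEvent } from "firebase/analytics";

  // Placeholder config; a real app would use its own Firebase project settings.
  const app = initializeApp({ apiKey: "placeholder", projectId: "demo-project", appId: "placeholder", measurementId: "G-PLACEHOLDER" });
  const analytics = getAnalytics(app);

  // "purchase" is one of Analytics' recommended events; keeping the tracked set this small
  // mirrors the advice above to optimize around a single high-value action.
  logEvent(analytics, "purchase", {
    currency: "USD",
    value: 9.99,
    items: [{ item_id: "pro_upgrade", item_name: "Pro upgrade" }],
  });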

The Gradient Podcast
Andrew Lee: How AI will Shape the Future of Email

The Gradient Podcast

Play Episode Listen Later Apr 4, 2024 63:40


In episode 118 of The Gradient Podcast, Daniel Bashir speaks to Andrew Lee.Andrew is co-founder and CEO of Shortwave, a company dedicated to building a better product experience for email, particularly by leveraging AI. He previously co-founded and was CTO at Firebase.Have suggestions for future podcast guests (or other feedback)? Let us know here or reach Daniel at editor@thegradient.pubSubscribe to The Gradient Podcast:  Apple Podcasts  | Spotify | Pocket Casts | RSSFollow The Gradient on TwitterOutline:* (00:00) Intro* (01:43) Andrew's previous work, Firebase* (04:48) Benefits of lacking experience in building Firebase* (08:55) On “abstract reasoning” vs empirical capabilities* (10:30) Shortwave's AI system as a black box* (11:55) Motivations for Shortwave* (17:10) Why is Google not innovating on email?* (21:53) Shortwave's overarching product vision and pivots* (27:40) Shortwave AI features* (33:20) AI features for email and security concerns* (35:45) Shortwave's AI Email Assistant + architecture* (43:40) Issues with chaining LLM calls together* (45:25) Understanding implicit context in utterances, modularization without loss of context* (48:56) Performance for AI assistant, batching and pipelining* (55:10) Prompt length* (57:00) On shipping fast* (1:00:15) AI improvements that Andrew is following* (1:03:10) OutroLinks:* Andrew's blog and Twitter* Shortwave* Introducing Ghostwriter* Everything we shipped for AI Launch Week* A deep dive into the world's smartest email AI Get full access to The Gradient at thegradientpub.substack.com/subscribe

VC10X - Venture Capital Podcast
Founder10X - How to build world-class products? - Adam Nash, Founder & CEO Daffy

VC10X - Venture Capital Podcast

Play Episode Listen Later Apr 4, 2024 56:46


Adam Nash is the former CEO of Wealthfront and current Founder and CEO of Daffy. Prior to Wealthfront & Daffy, Adam has a strong operational experience holding executive positions at LinkedIn, Ebay, Dropbox and Apple. Adam is also an angel investor in Firebase (sold to Google), Opendoor (Ticker: OPEN), Figma, Gusto, and several other startups. To top it all off Adam is an Adjunct Professor at Stanford University where he teaches a class on personal finance. This episode is divided into 2 halves, the 1st half focusing on Adam's insights on angel investing in some highly successful companies, while the 2nd half focuses on Adam's founder journey and insights on how to build a world-class product. We also talk about his investment into Figma, key metrics to track for early-stage startups, how to create customer delight, and lessons from his stint at top companies like eBay, Dropbox, LinkedIn, and Apple. This episode is a wealth of wisdom for all founders on how to build world-class products & companies. Timestamps: 00:00 - Intro clip 00:53 - Episode intro 02:02 - Sponsored by Podcast10x - podcasting agency for VCs & startups 03:43 - Background story of Adam Nash 05:46 - How Adam started angel investing in startups? 09:13 - Investing in Figma at the early stage 14:30 - Leveraging the network & operator experience to get access to quality deals 16:26 - How Adam evaluates a deal? 20:38 - Why he founded Daffy? 24:23 - How to build world-class products? 29:00 - Virality factor in products 37:24 - Customer delight factor in products 42:00 - How to get world-class people to work with you? 46:46 - Key metrics to track for early stage startups 51:44 - Lessons from working at Linkedin, eBay, Dropbox & Apple 55:27 - How to connect with Adam and learn more about Daffy? Links: ⭐ Sponsor: End-to-end podcasting agency for VCs & startups - https://podcast10x.com Follow Adam on LinkedIn - https://www.linkedin.com/in/adamnash Follow Adam on X - https://x.com/adamnash

The CyberWire
Safeguarding American data from foreign hands.

The CyberWire

Play Episode Listen Later Mar 21, 2024 42:44


The House Unanimously Passes a Bill to Halt Sale of American Data to Foreign Foes. The U.S. Sanctions Russian Individuals and Entities for a Global Disinformation Campaign. China warns of cyber threats from foreign hacking groups. A logistics firm isolates its Canadian division after a cyber attack. Ivanti warns of another critical vulnerability. Researchers find hundreds of vulnerable Firebase instances. Microsoft phases out weaker encryption. Formula One fans fight phishing in the fast lane. Glassdoor is accused of adding real names to profiles without user consent. Our guest is Adam Meyers, SVP of Counter Adversary Operations at CrowdStrike, discussing how adversaries are attacking cloud environments and why it's an increasingly popular attack surface. And Pwn2Own winners take home their second Tesla.  Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign-up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn. CyberWire Guest Guest Adam Meyers, SVP of Counter Adversary Operations at CrowdStrike, discussing how adversaries are attacking cloud environments and why it's an increasingly popular attack surface – especially as more companies implement AI. For more information, check out CrowdStrike's 2024 Global Threat Report.  Selected Reading House unanimously passes bill to block data brokers from selling Americans' info to foreign adversaries (The Record) Treasury Sanctions Actors Supporting Kremlin-Directed Malign Influence Efforts (US Treasury Department) China warns foreign hackers are infiltrating ‘hundreds' of business and government networks (SCMP) International freight tech firm isolates Canada operations after cyberattack (The Record) Ivanti urges customers to fix critical RCE flaw in Standalone Sentry solution (Security Affairs) 19 million plaintext passwords exposed by incorrectly configured Firebase instances (Malwarebytes) Microsoft deprecates 1024-bit Windows RSA keys — now would be a good time to get machine identity management in order (ITPro) Users ditch Glassdoor, stunned by site adding real names without consent (Ars Technica) Famous Spa GP F1 race comms hijacked by phishing scammers (Cyber Daily) Security Researchers Win Second Tesla At Pwn2Own (Infosecurity Magazine) Share your feedback. We want to ensure that you are getting the most out of the podcast. Please take a few minutes to share your thoughts with us by completing our brief listener survey as we continually work to improve the show.  Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here's our media kit. Contact us at cyberwire@n2k.com to request more info. The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © 2023 N2K Networks, Inc.

Constant Variables
131: How to use dynamic links (and what to do if you have Firebase)

Constant Variables

Play Episode Listen Later Mar 21, 2024 13:09


Dynamic links are intrinsic to marketing campaigns, and Google's discontinuation of its popular Firebase Dynamic Links tool has businesses concerned about the status of their URLs.   JMG CTO, Tom Whitten, joins Michael Roth to address the implications of Firebase no longer supporting dynamic links come August 2025 and why business owners should start migrating their links to another service provider right away.   **Show Links**   Find full show notes at https://constantvariables.co     Chat with JMG about dynamic links | https://jmg.mn/chat    Tom Whitten on LinkedIn | https://www.linkedin.com/in/tom-whitten-5857a52b0/    Michael Roth on LinkedIn | https://www.linkedin.com/in/michael-roth-508772183/  Learn more about The Jed Mahonis Group | https://jmg.mn  

Cyber Security Today
Cyber Security Today, March 20, 2024 - Misconfigured Firebase instances are leaking passwords, a China-related threat actor is hacking governments and more

Cyber Security Today

Play Episode Listen Later Mar 20, 2024 7:22


This episode reports on new backdoors, a new paper giving advice to OT network operators and more

Transatlantic Cable Podcast
The Transatlantic Cable Podcast #339

Transatlantic Cable Podcast

Play Episode Listen Later Mar 20, 2024 23:14


Episode 339 of the Transatlantic Cable podcast kicks off with news that several employees at TikTok were caught covertly spying on Forbes journalists. From there, the team talk about a new cooperation between governments to better tackle spyware and news that the FTC is looking at the upcoming Reddit IPO and AI training data. To close out the podcast, the team discuss news that ‘at least 900' websites built using Google's Firebase cloud database may be leaking sensitive user data. If you liked what you heard, please consider subscribing. TikTok Spied On Forbes Journalists Finland, Germany, Ireland, Japan, Poland, South Korea added to US-led spyware agreement FTC investigating Reddit plan to sell user content for AI model training 900+ websites expose millions of passwords via Firebase  

All JavaScript Podcasts by Devchat.tv
Navigating Web Development Challenges - JSJ 624

All JavaScript Podcasts by Devchat.tv

Play Episode Listen Later Mar 18, 2024 71:33


Shay Davidson is a full-stack web, mobile, and game developer. He is currently leading the front end at Lemonade. The discussion revolves around the use of Supabase as a free database and how it compares to Firebase in terms of developer experience. They dive into building applications with Next.js and React 18, utilizing React Server Components to interact with the Supabase API. They share their experiences, frustrations, and insights regarding caching mechanisms, server actions, and the challenges of adapting to new technologies in the React ecosystem. The episode also delves into the React Server Components controversy, the importance of learning and experimenting with new technologies, the use of AI for creative purposes, and the potential dangers of deepfakes.
Sponsors: Chuck's Resume Template, Developer Book Club, Become a Top 1% Dev with a Top End Devs Membership
Socials: LinkedIn: Shay Davidson
Picks: AJ - Dune: Part Two (2024); Dan - Arnold Schwarzenegger Sings About Rainbows (AI); Dan - Finance worker pays out $25 million after video call with deepfake CFO; Shai - Rendezvous with Rama
Support this podcast at — https://redcircle.com/javascript-jabber/donations
Privacy & Opt-Out: https://redcircle.com/privacy
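A minimal sketch of the pattern mentioned above (a React Server Component reading from Supabase), assuming a Next.js app-router setup; the "posts" table and the environment variable names are illustrative, not taken from the episode.

  import { createClient } from "@supabase/supabase-js";

  // Runs on the server, so the query happens before any HTML is sent to the browser.
  export default async function PostsPage() {
    const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_ANON_KEY!);
    const { data: posts } = await supabase.from("posts").select("title").limit(5);

    return (
      <ul>
        {posts?.map((post) => (
          <li key={post.title}>{post.title}</li>
        ))}
      </ul>
    );
  }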

How About Tomorrow?
Dax's Vision Pro Review, Cloud Pricing, Rebuilding Plex, and Moving to an AI Utopia

How About Tomorrow?

Play Episode Listen Later Feb 19, 2024 83:47 Transcription Available


Adam's brain gets thrown off course, software rot vs serverless life, subscription payments vs one time payment myths, framework devs vs app devs, Adam figures out what Supabase is, Dax reviews an Apple Vision Pro and has thoughts, all while we're headed towards an AI utopia. Want to carry on the conversation? Join us in Discord.
Links: Tailwind · Once · Plex · GitHub - benvinegar/counterscale: Scalable web analytics you run yourself on Cloudflare · Astro · Bulk Cloud Email Service - Amazon Simple Email Service - AWS · Connect, Protect and Build Everywhere | Cloudflare · Vercel: Build and deploy the best Web experiences with The Frontend Cloud – Vercel · Supabase | The Open Source Firebase Alternative · Firebase | Google's Mobile and Web App Development Platform · Fabrice Bellard · Jonny Kim · The Free Software Media System | Jellyfin · Zuckerberg says Quest 3 is ‘the better product' vs. Apple's Vision Pro - The Verge · Mr. Robot · Gemini - chat to supercharge your ideas · React Miami
(00:00) - Chris will definitely cut this out (00:40) - Getting thrown off your week by issues (03:39) - Software rot vs serverless life (07:59) - Subscriptions vs boxed version (16:49) - Framework devs vs app devs (23:27) - Why can't cloud providers bake in dollar limits? (27:27) - Marker 7 (33:42) - What do you do if you're getting ddos'd? (38:40) - Cloudflare's origin story (39:49) - What's Supabase? What's Firebase? (42:42) - Reengineering Plex (44:48) - Take a visit to Fabrice Bellard's Wikipedia (50:43) - Dax's review of Apple Vision Pro (01:08:41) - Heading towards an AI utopia (01:19:35) - Adam's weight is not a static number and is always fluctuating

Software Engineering Daily
Supabase Security with Inian Parameshwaran

Software Engineering Daily

Play Episode Listen Later Dec 20, 2023 57:46


Supabase is an open source backend-as-a-service platform and competes directly with Google's Firebase. A key distinction between them is that Firebase is a document store, while Supabase uses Postgres, which is a SQL-based database management system. Software Engineering Daily last covered Supabase in 2020, when its founder Paul Copplestone came on the show. The post Supabase Security with Inian Parameshwaran appeared first on Software Engineering Daily.
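To make that document-store versus Postgres distinction concrete, here is a hedged sketch of the same "find active users" lookup against each service. The `users` collection/table, its fields, and the client configuration values are invented for illustration and are not from the episode.

```typescript
// Firestore (Firebase): query a collection of schemaless documents.
import { initializeApp } from "firebase/app";
import { getFirestore, collection, query, where, getDocs } from "firebase/firestore";

// Supabase: the client queries a Postgres table, effectively issuing SQL.
import { createClient } from "@supabase/supabase-js";

async function findActiveUsersFirebase() {
  const app = initializeApp({ projectId: "demo-project" }); // placeholder config
  const db = getFirestore(app);
  // Document-store style: filter documents in the "users" collection.
  const snapshot = await getDocs(
    query(collection(db, "users"), where("active", "==", true))
  );
  return snapshot.docs.map((doc) => doc.data());
}

async function findActiveUsersSupabase() {
  const supabase = createClient("https://example.supabase.co", "anon-key"); // placeholders
  // Relational style: select columns from the "users" table with a WHERE clause.
  const { data, error } = await supabase
    .from("users")
    .select("id, email")
    .eq("active", true);
  if (error) throw error;
  return data;
}
```

The practical difference is the data model: schemaless documents on one side, relational tables with SQL, constraints, and joins on the other.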

Giant Robots Smashing Into Other Giant Robots
503: Epic Web and Remix with Kent C. Dodds

Giant Robots Smashing Into Other Giant Robots

Play Episode Listen Later Dec 7, 2023 67:15


Kent C. Dodds, a JavaScript engineer and teacher known for Epic Web Dev and the Remix web framework, reflects on his journey in tech, including his tenure at PayPal and his transition to full-time teaching. Kent's passion for teaching is a constant theme throughout. He transitioned from corporate roles to full-time education, capitalizing on his ability to explain complex concepts in an accessible manner. This transition was marked by the creation of successful online courses like "Testing JavaScript and Epic React," which have significantly influenced the web development community. An interesting aspect of Kent's career is his involvement with Remix, including his decision to leave Shopify (which acquired Remix) to return to teaching, which led to the development of his latest project, Epic Web Dev, an extensive and innovative web development course. This interview provides a comprehensive view of Kent C. Dodds's life and career, showcasing his professional achievements in web development and teaching, his personal life as a family man, and his unique upbringing in a large family. Epic Web (https://www.epicweb.dev/) Remix (https://remix.run/) Follow Kent C. Dodds on LinkedIn (https://www.linkedin.com/in/kentcdodds/) or X (https://twitter.com/kentcdodds). Visit his website at kentcdodds.com (https://kentcdodds.com/). Follow thoughtbot on X (https://twitter.com/thoughtbot) or LinkedIn (https://www.linkedin.com/company/150727/). Become a Sponsor (https://thoughtbot.com/sponsorship) of Giant Robots! Transcript: WILL: This is the Giant Robots Smashing Into Other Giant Robots podcast, where we explore the design, development, and business of great products. I'm your host, Will Larry. And with me today is Kent C. Dodds. Kent is a JavaScript engineer and teacher. He has recently released a massive workshop called epicweb.dev. And he is the father of four kids. Kent, thank you for joining me. KENT: Thank you so much for having me. It's an honor to be here. WILL: Yeah. And it's an honor for me to have you. I am a huge fan. I think you're the one that taught me how to write tests and the importance of it. So, I'm excited to talk to you and just pick your brain and learn more about you. KENT: Oh, thank you. WILL: Yeah. So, I just want to start off just: who is Kent? What do you like to do? Tell us about your family, your hobbies, and things like that. KENT: Yeah, sure. So, you mentioned I'm the father of four kids. That is true. We are actually expecting our fifth child any day now. So, we are really excited to have our growing family. And when I'm not developing software or material for people to learn how to develop software, I'm spending time with my family. I do have some other hobbies and things, but I try to share those with my family as much as I can. So, it's starting to snow around here in Utah. And so, the mountains are starting to get white, and I look forward to going up there with my family to go skiing and snowboarding this season. During the summertime, I spend a lot of time on my one-wheel just riding around town and bring my kids with me when I can to ride bikes and stuff, too. So, that's sort of the personal side of my life. And then, professionally, I have been in this industry developing for the web professionally for over a decade. Yeah, web development has just worked out super well for me. I kind of focused in on JavaScript primarily. And when I graduated with a master's degree in Information Systems at Brigham Young University, I started working in the industry. 
I bounced around to a couple of different companies, most of them you don't know, but you'd probably be familiar with PayPal. I was there for a couple of years and then decided to go full-time on teaching, which I had been doing as, like, a part-time thing, or, like, on the side all those years. And yeah, when teaching was able to sustain my family's needs, then I just switched full-time. So, that was a couple of years ago that I did that. I think like, 2018 is when I did that. I took a 10-month break to help Remix get off the ground, the Remix web framework. They got acquired by Shopify. And so, I went back to full-time teaching, not that I don't like Shopify, but I felt like my work was done, and I could go back to teaching. So, that's what I'm doing now, full-time teacher. WILL: Wow. Yes, I definitely have questions around that. KENT: [laughs] Okay. WILL: So many. But I want to start back...you were saying you have four kids. What are their ages? KENT: Yeah, my oldest is 11, youngest right now is 6, and then we'll have our fifth one. So, all four of the kids are pretty close in age. And then my wife and I thought we were done. And then last December, we kind of decided, you know what? I don't think we're done. I kind of think we want to do another. So, here we go. We've got a larger gap between my youngest and the next child than we have between my oldest and the youngest child. WILL: [chuckles] KENT: So, we're, like, starting a new family, or [laughs] something. WILL: Yeah [laughs]. I just want to congratulate you on your fifth child. That's amazing. KENT: Thank you. WILL: Yeah. How are you feeling about that gap? KENT: Yeah, we were pretty intentional about having our kids close together because when you do that, they have built-in friends that are always around. And as they grow older, you can do the same sorts of things with them. So, like, earlier this year, we went to Disneyland, and they all had a great time. They're all at the good age for that. And so, they actually will remember things and everything. Yeah, we were pretty certain that four is a good number for us and everything. But yeah, we just started getting this nagging feeling we wanted another one. So, like, the fact that there's a big gap was definitely not in the plan. But I know a lot of people have big gaps in their families, and it's just fine. So, we're going to be okay; just it's going to change the dynamic and change some plans for us. But we're just super excited to have this next one. WILL: I totally understand what you mean by having them close together. So, I have three little ones, and my oldest and my youngest share the same exact birthday, so they're exactly three years apart. KENT: Oh, wow. Yeah, that's actually...that's fun. My current youngest and his next oldest brother are exactly two years apart. They share the same birthday, too [laughs]. WILL: Wow. You're the first one I've heard that their kids share a birthday. KENT: Yeah, I've got a sister who shares a birthday with her son. And I think we've got a couple of birthdays that are shared, but I also have 11 brothers and sisters [laughs]. And so, I have got a big family, lots of opportunity for shared birthdays in my family. WILL: Yeah, I was actually going to ask you about that. How was it? I think you're the 11th. So, you're the youngest of 11? KENT: I'm the second youngest. So, there are 12 of us total. I'm number 11. WILL: Okay, how was that growing up with that many siblings? KENT: I loved it. 
Being one of the youngest I didn't really...my experience was very different from my older siblings. Where my older siblings probably ended up doing a fair bit of babysitting and helping around the house in that way, I was the one being babysat. And so, like, by the time I got to be, like, a preteen, or whatever, lots of my siblings had already moved out. I was already an uncle by the time I was six. I vaguely remember all 12 of us being together, but most of my growing up was just every other year; I'd have another sibling move out of the house, which was kind of sad. But they'd always come back and visit. And now I just have an awesome relationship with every one of my family members. And I have something, like, 55 nieces and nephews or more. Yeah, getting all of us together every couple of years for reunions is really a special experience. It's a lot of fun. WILL: Yeah. My mom, she had 12 brothers and sisters. KENT: Whoa. WILL: And I honestly miss it because we used to get together all the time. I used to live a lot closer. Most of them are in Louisiana or around that area, and now I'm in South Florida, so I don't get to see them as often. But yeah, I used to love getting together. I had so many cousins, and we got in so much trouble...and it was -- KENT: [laughs] WILL: We loved it [laughs]. KENT: Yeah, that's wonderful. I love that. WILL: Yeah. Well, I want to start here, like, how did you get your start? Because I know...I was doing some research, and I saw that, at one point, you were an AV tech. You were a computer technician. You even did maintenance. Like, what was the early start of your career like, and how did you get into web dev? KENT: I've always been very interested in computers, my interest was largely video games. So, when I was younger, I had a friend who was a computer programmer or, like, would program stuff. We had visions of...I don't know if you're familiar with RuneScape, but it's this game that he used to play, and I would play a little bit. It was just a massive online multiplayer game. And so, we had visions of building one of those and having it just running in the background, making us money, as if that's how that works [laughter]. But he tried to teach me programming, and I just could not get it at all. And so I realized at some point that playing video games all the time wasn't the most productive use of my time on computers, and if I wanted my parents to allow me to be on computers, I needed to demonstrate that I could be productive in learning, and making things, and stuff. So, I started blogging and making videos and just, like, music videos. My friend, who was the programmer, he was into anime, or anime, as people incorrectly pronounce it. And [laughs] there was this website called amv.com or .org or something. It's Anime Music Videos. And so, we would watch these music videos. And I'd say, "I want to make a music video with Naruto." And so, I would make a bunch of music videos from the Naruto videos I downloaded, and that was a lot of fun. I also ran around with a camera to do that. And then, with the blog, I wrote a blog about Google and the stuff that Google was, like, doing because I just thought it was a fascinating company. I always wanted to work at Google. In the process of, like, writing the blog, I got exposed to CSS and HTML, but I really didn't do a whole lot of programming. I also did a little bit of Google Docs. Spreadsheets had some JavaScript macros-type things that you could do. 
So, I did a little bit of that, but I never really got too far into programming. Then I go to college, I'm thinking, you know what? I think I want to be a video editor. I really enjoy that. And so, my brother, who at the time was working at Micron, he did quality assurance on the memory they were making. So, he would build test automation, software and hardware for testing the memory they build. And so, he recommended that I go into electrical engineering. Because what he would say is, "If you understand computers at that foundational level, you can do anything with computers." And I'd say, "Well, I like computers. And if I go into video editing, I'm going to need to understand computers, too. So yeah, sure, let's let's do that." I was also kind of interested in 3D animation and stuff like that, too. Like, I wasn't very good at it, but I was kind of interested in that, too. So, I thought, like, having a really good foundation on computers would be a good thing for me. Well, I was only at school for a semester when I took a break to go on a mission for my church [inaudible 09:42] mission. And when I got back and started getting back into things, I took a math refresher course. That was, like, a half a credit. It wasn't really a big thing, but I did terrible in it. I did so bad. And it was about that time that I realized, you know what? I've been thinking my whole life that I'm good at math. And just thinking back, I have no idea why or any justification for why I thought I was good at math because in high school, I always struggled with it. I spent so much time with it. And in fact, my senior year, I somehow ended up with a free period of nothing else to do. I don't know how this happened. But, I used that free period to go to an extra edition of my calculus class. So, I was going to twice as much calculus working, like, crazy hard and thinking that I was good at this, and I superduper was not [laughter]. And so, after getting back from my mission and taking that refresher course, I was like, you know what? Math is a really important part of engineering, and I'm not good at it at all, obviously. And so, I've got to pivot to something else. Well, before my mission, as part of the engineering major, you needed to take some programming classes. So, there was a Java programming class that I took and a computer systems class that included a lot of programming. The computer systems was very low level, so we were doing zeros and ones. And I wrote a program in zeros and ones. All that it did was it would take input from the keyboard, and then spit that back out to you as output. That was what it did. But still, you know, many lines of zeros and ones and just, like, still, I can't believe I did that [laughter]. And then we upgraded from that to Assembly, and what a godsend that was [laughs], how wonderful Assembly was after working in machine code. But then we upgraded from that to C, and that's as far as that class went. And then, yeah, my Java class, we did a bunch of stuff. And I just remember thinking or really struggling to find any practicality to what we were doing. Like, in the Java class, we were implementing the link to list data structure. And I was like, I do not care about this. This does not make any sense. Why should I care? We were doing these transistor diagrams in the computer systems class. And why do I care about that? I do not care about this at all. Like, this is not an interesting thing for me. So, I was convinced computer programming was definitely not what I wanted to do. 
So, when I'm switching from electrical engineering, I'm thinking, well, what do I do? And my dad convinced me to try accounting. That was his profession. He was a certified public accountant. And so, I said, "Okay, I'll try that." I liked the first class, and so I switched my major to go into the business school for accounting. I needed to take the next accounting class, and I hated that so much. It was just dull and boring. And I'm so glad that I got out of that because [laughs] I can't imagine doing anything like that. WILL: [laughs] KENT: But as part of switching over to business school, I discovered information systems. What's really cool about that is that we were doing Excel spreadsheets and building web pages. But it was all, like, with a practical application of business and, like, solving business problems. And then, I was like, oh, okay, so I can do stuff with computers in a practical setting, and that's what got me really interested. So, I switched, finally, to information systems–made it into that program. And I was still not convinced I wanted to do programming. I just wanted to work with computers. What ended up happening is the same time I got into the information systems program, I got married to my wife, and then I got this part-time job at a company called the More Good Foundation. It's a non-profit organization. And one of my jobs was to rip DVDs and upload those videos to YouTube, and then also download videos from one site and upload those to YouTube as well. And so, I was doing a lot of stuff with YouTube and video stuff. And as part of my information systems class, I was taking another Java class. At that same time, I was like, you know, what I'm doing at work is super boring. Like, can you imagine your job is to put in a [inaudible 13:45] and then click a couple of buttons? And, like, it was so boring and error-prone, too. Like, okay, now I've got to type this out and, you know, I got to make sure it's the same, try and copy-paste as much as I can. And it was not fun. And so, I thought, well, I'm pretty sure there are pieces of this that I could automate. And so, with the knowledge that I was getting in my information systems programming class, that was another Java class, I decided to write a program that automated a bunch of my stuff. And so, I asked my boss, like, "Can I automate this with writing software?" And I'm so glad that they said I could. WILL: [laughs] KENT: Because by the end of it, I had built software that allowed me to do way more than I ever could have before. I ended up uploading thousands of videos to their YouTube channels, which would have taken years to do. And they ended up actually being so happy with me. They had me present to the board of directors when they were asking for more money [laughs] and stuff. And it was really awesome. But still, I was not interested in being a programmer. Programming, to me, was just a means to an end. WILL: Oh, wow. KENT: Yeah, I guess there was just something in me that was like, I am not a programmer. So, anyway, further into the program of information systems, I interned as a business intelligence engineer over that next summer, and I ended up staying on there. And while I was supposed to be a business intelligence engineer, I did learn a lot about SQL, and star schema, and denormalized databases to optimize for read speed and everything. I learned a lot about that. 
But I just kept finding myself in positions where I would use my programming experience to automate things that were problematic for us in the business realm. And this was all still Java. It was there that I finally realized, you know what? I think I actually do want to be a programmer. I actually really do enjoy this. And I like that it's practical, and it makes sense for me, so… WILL: What year was that? KENT: That would have been 2012. Then I got a new job where my job was actually to be a programmer at a company called Domo, where they do business intelligence, actually. So, it got my foot in the door a little bit since I was a business intelligence engineer already. I got hired on, actually, as a QA engineer doing automated testing, but I never really got into that. And they shifted me over pretty quick into helping with the web app. And that is when I discovered JavaScript, and the whole, like, everything flooded out from there. I was like, wow, I thought I liked programming, but I had no idea how fun it could be. Because I felt like the chains had been broken. I no longer have to write Java. I can write JavaScript, and this was just so much better. WILL: [laughs] KENT: And so, yeah, I was there for a year and a half before I finally graduated. And I took a little break to work at USAA for a summer internship. And when I came back, I had another year and then converted to full-time. And so, yeah, there's my more detail than you were probably looking for, story of how I got into programming [laughs]. WILL: No, I actually love it because like I said, I've used your software, your teachings, all that. And it's amazing to hear the story of how you got there. Because I feel like a lot of times, we just see the end result, but we don't know the struggle that you went through of even trying to find your way through what your purpose was, what you're trying to do. Because, at one point, you said you were trying to do accounting, then you were trying to do something else. So, it's amazing to see, like, when it clicked for you when you got into JavaScript, so that's amazing. KENT: Yeah, it is kind of funny to think, like, some people have the story of, like, I knew I wanted to be a programmer from the very beginning, and it's just kind of funny for me to think back and, like, I was pretty certain I didn't want to be a programmer. WILL: [laughs] KENT: Like, not only did I, like, lots of people will say, "I never really thought about it, and then I saw it, and it was great." But I had thought about it. And I saw it, and I thought it was awful [laughter]. And so, yeah, I'm really glad that it worked out the way it did, though, because programming has just been a really fun thing. Like, I feel so blessed to be doing something that I actually enjoy doing. Like so many of our ancestors, they would go to work because they cared about their family and they just wanted to feed their family. I'm so grateful to them for doing that. I am so lucky that I get to go to work to take care of my family, but also, I just love doing it. WILL: Yeah, I feel the same way, so yeah, totally agree. After you found out about JavaScript, when did you figure out that you want to teach JavaScript? What was that transition like? KENT: I've been teaching for my whole life. It's ingrained in my religion. Even as a kid, you know, I'd prepare a talk, a five-minute talk, and stand up in front of 30 of my peers. And even when you're an early teenager, you get into speaking in front of the entire congregation. 
It took a while before I got good enough at something, enough hubris to think that people would care about what I have to say -- WILL: [laughs] KENT: Outside of my religion where, like, they're sitting there, and I've been asked to speak, and so they're going to listen to me. And so, when I started getting pretty good at programming, I decided, hey, I want to teach this stuff that I'm learning. And so, when I was still at school and working at Domo, the business intelligence company, one of our co-workers, Dave Geddes, he put together a workshop to teach AngularJS because we were migrating from Backbone to Angular. And I asked him if I could use his workshop material to teach my classmates. This was, like, soon after ng-conf, the first ng-conf, which my co-workers at Domo actually put on. So, I wasn't involved in the organization, but I was very much present when it was being organized. I attended there and developed a relationship with Firebase with the people there. I was actually...they had a developer evangelist program, which they called Torchbearers or something. And actually, that was my idea to call them Torchbearers. I think they wanted to call us torches, and I'm like, that just doesn't make sense. WILL: [laughs] KENT: I developed a relationship with them. And I asked them, "Hey, I want to teach my classmates AngularJS. Would you be interested in sponsoring some pizza and stuff?" And they said, "Yeah, we'll send you stickers, and hot sauce, and [laughs] a bunch of..." Like, they sent us, like, headphones [laughs] and stuff. So, I was like, sweet. I taught my classmates AngularJS in a workshop, brought a bunch of pizza, and it was, you know, just an extracurricular thing. And actually, the recording is still on my YouTube channel, so if you want to go look at one of my early YouTube videos. I was very into publishing video online. So, if you are diligent, you'll be able to find some of my very early [laughter] videos from my teenage years. But anyway, so, yes, I've been teaching since the very beginning. As soon as I graduated from college, I started speaking at meetups. I'd never been to a meetup before, and I just saw, oh, they want a speaker. I can talk about something. WILL: Wow. KENT: And not realizing that, like, meetups are literally always looking for speakers. This wasn't some special occasion. WILL: [laughs] KENT: And one of the meetups I spoke at was recorded and put on YouTube. And the guy who started Egghead io, John Lindquist, he is local here in Utah. And he saw that I spoke at that meetup, but he wasn't able to attend. So, he watched the recording, and he thought it was pretty good. He thought I would do a good job turning that into a video course. And that first video course paid my mortgage. WILL: Wow. KENT: And I was blown away. This thing that I had been doing just kind of for fun speaking at meetups, and I realized, oh, I can actually, like, make some legit good money out of this. From there, I just started making more courses on the side after I put the kids to bed. My wife is like, "Hey, I love you, but I want you to stay away for now because I've just been with these tiny babies all day. WILL: [laughs] KENT: And I just need some alone time." WILL: Yes. KENT: And so, I was like, okay. WILL: [laughs] KENT: I'll just go and work on some courses. And so, I spent a lot of time for the next couple of years doing course material on the side. I reached out to Frontend Masters and just told them, "Hey, I've been doing courses for Egghead." 
I actually met Marc Grabanski at a conference a couple of years before. And so, we established a little bit of relationship. And I just said, "Hey, I want to come and teach there." So, I taught at Frontend Masters. I started putting on my own workshops at conferences. In fact, just a few months after graduating, I got accepted to speak at a conference. And only after I was accepted did I realize it was in Sweden [laughter]. I didn't think to look where in the world this conference was. So, that was my first international trip, actually, and I ended up speaking there. I gave, actually, two talks. One of them was a three-hour talk. WILL: Whoa. KENT: Which was, yeah, that was wild. WILL: [laughs] KENT: And then, yeah, I gave a two-day workshop for them. And then, I flew straight from there to Amsterdam to give another talk and also do a live in-person podcast, which I'd been running called ngAir, an Angular podcast. It just kept on building from there until finally, I created testingjavascript.com. And that was when I realized, oh, okay, so this isn't just a thing I can use to pay my mortgage, and that's nice. This is, like, a thing I can do full-time. Because I made more with Testing JavaScript than I made from my PayPal salary. WILL: Oh wow. KENT: I was like, oh, I don't need both of these things. I would rather work half as much one full-time job; that's what I want, one full-time job and make enough to take care of my family. And I prefer teaching. So, that's when I left PayPal was when I released Testing JavaScript. WILL: Wow. So, for me, I think so many times the imposter syndrome comes up whenever I want to teach or do things at the level you're saying you're doing. Because I love teaching. I love mentoring. I remember when I came into development, it was hard. I had to find the right person to help me mentor. So now, I almost made a vow to myself that if someone wants to learn and they're willing to put in the energy, I'm going to sit down however long it takes to help them because I remember how hard it was for me whenever I was doing it. So, you said in 2014, you were only a couple years doing development. How did you overcome impostor syndrome to stand in front of people, teach, go around the world, and give talks and podcasts? Like, how did you do that portion? KENT: Part of it is a certain level of hubris like I said. Like, you just have to be willing to believe that somebody's going to care. You know, the other part of it is, it's a secret to getting really, really good at something. They sometimes will say, like, those who can't do teach. That's total baloney because it requires a lot of being able to do to get you in a position where you can teach effectively. But the process of teaching makes you better at the process of doing as well. It's how you solidify your experience as a whatever. So, if you're a cook, you're really good at that; you will get better by teaching other people how to cook. There's an element of selfishness in what I do. I just want to get really, really good at this, and so I'm going to teach people so that I can. So yeah, I think there's got to be also, like, a little bit of thick skin, too, because people are going to maybe not like what you have to share or think that you're posing or whatever. Learn how to let that slide off you a little bit. But another thing is, like, as far as that's concerned, just being really honest about what your skill set is. 
So, if somebody asks me a question about GraphQL, I'm going to tell them, "Well, I did use GraphQL at PayPal, but I was pretty limited. And so, I don't have a lot of experience with that," and then I'll answer their question. And so, like, communicating your limitations of knowledge effectively and being okay being judged by people because they're going to judge you. It just is the way it is. So, you just have to learn how to cope well with that. There are definitely some times where I felt like I was in over my head on some subjects or I was involved in a conversation I had no business being there. I actually felt that a lot when I was sent as PayPal's delegate to the TC39 meetings. Wow, what am I doing here? I've only been in the industry for, like, two or three years at [laughter] that point. It takes a certain level of confidence in your own abilities. But also, like, being realistic about your inexperience as well, I think, is important too. WILL: Yeah, I know that you had a lot of success, and I want to cover that next. But were there any failures when you were doing those teaching moments? KENT: Years ago, Babel was still a new thing that everybody was using to compile their JavaScript with new syntax features down to JavaScript that the browser could run. There was ES Modules that was introduced, and lots of us were doing global window object stuff. And then we moved to, like, defining your dependencies with r.js or RequireJS. And then, there was CommonJS, and Universal Module Definition, and that sort of thing. So, ECMAScript modules were very exciting. Like, people were really interested in that. And so, Babel added support to it. It would compile from the module syntax down to whatever you wanted: CommonJS or...well, I'm pretty sure it could compile to RequireJS, but I compiled it to CommonJS. And so, there was a...yeah, I would say it's a bug in Babel at that time, where it would allow you to write your ES modules in a way that was not actually spec-compliant. It was incorrect. So, I would say export default some object, and then in another module, I would say import. And then, I'd select properties off of the object that I exported, that default I exported. That was allowed by Babel, but it is superduper, not how ECMAScript modules work. Well, the problem is that I taught, like, a ton of people how to use ECMAScript modules this way. And when I realized that I was mistaken, it was just, like, a knife to the heart because I was, like, I taught so many people this wrong thing. And so, I wrote a blog post about it. I gave a big, long talk titled “More Than You Want to Know About ECMAScript Modules,” where I talk about that with many other things as well. And so, yeah, just trying to do my part to make up for the mistake that I made. So yes, I definitely have had mistakes like that. There's also, like, the aspect that technology moves at a rapid pace. And so, I have old things that I would show people how to do, which they still work just as well as they worked back then. But I wouldn't recommend doing it that way because we have better ways now. For some people, the old way to do it is the only way they can do it based on the constraints they have and the tools that they're using and stuff. And so, it's not, like, it's not valuable at all. But it is a struggle to make sure that people understand that, like, this is the way that you do it if you have to do it this way, but, like, we've got better ways. WILL: I'm glad you shared that because it helps. 
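To illustrate the Babel-era mistake Kent describes above, here is a sketch with invented module and function names (not code from the episode). The first file only has a default export:

```typescript
// math.ts — only a default export; there are NO named exports here.
const add = (a: number, b: number) => a + b;
const subtract = (a: number, b: number) => a - b;
export default { add, subtract };
```

And the consuming module shows the pattern old Babel-to-CommonJS interop tolerated, next to the spec-compliant alternative:

```typescript
// app.ts — the mistake: treating properties of the default export
// as if they were named exports. Babel's interop resolved this,
// but a spec-compliant ES module loader rejects it.
import { add } from "./math"; // not spec-compliant: "./math" has no named export "add"

// Spec-compliant way: import the default object and read properties off it...
import math from "./math";
console.log(math.add(1, 2));
// ...or change math.ts to `export const add = ...` and keep the named import.
```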
And I love how you say it: when I make a mistake, I own up to it and let everyone know, "Hey, I made a mistake. Let's correct it and move on." So, I really like that. KENT: Yeah, 100%. MID-ROLL AD: Are your engineers spending too much time on DevOps and maintenance issues when you need them on new features? We know maintaining your own servers can be costly and that it's easy for spending creep to sneak in when your team isn't looking. By delegating server management, maintenance, and security to thoughtbot and our network of service partners, you can get 24x7 support from our team of experts, all for less than the cost of one in-house engineer. Save time and money with our DevOps and Maintenance service. Find out more at: tbot.io/devops. WILL: I want to go back to what you were saying. When you left PayPal, you released Testing JavaScript. How did you come up with the idea to write a Testing JavaScript course? And, two, how long did it take to take off and be successful? KENT: That was a pretty special thing, honestly. In 2018, I had put together a bunch of workshops related to testing. There was this conference called Assert(js) that invited me to come, taught them. In the year prior, I went to Midwest JS and taught how to test React. I had this material about testing. I'd gotten into testing just because of open-source stuff. I didn't want to have to manually go through all my stuff again every time I wanted to check for breakages and stuff, so that got me into testing. And whatever I'm into is what I'm going to teach. So, I started teaching that testing. And then my friend, Ryan Florence, put together...he separated from Michael Jackson with React Training, and built his own thing called Workshop.me. He asked me to join up with him. And he would, like, put together these workshops for me, and I would just...my job was just to show up and teach. And so, I did that. I have a picture, actually, in this blog post, The 2010s Decade in Review, of me in front of 60 people at a two-day workshop at Trulia in San Francisco. WILL: Oh, wow. KENT: And this is where I was teaching my testing workshop. Well, what's interesting about that photo is that two weeks before that, I had gotten really frustrated with the tool that everybody uses or used at the time for testing React, and that was Enzyme. And so I was preparing this workshop or working on it. I had already delivered it a number of times, but I was working on it, improving it, as I always do [laughs] when I'm preparing. WILL: [laughs] KENT: I can never give the same workshop twice, I guess. And I was just so frustrated that Enzyme was so difficult to work with. And, like, I was going to prepare this document that said, "Here are all the things you should never do with Enzyme. Like, Enzyme encourages you to do these things; you should not do these things. And let me explain why." And I just hated that I needed a document like that. And so, I tweeted, "I'm seriously starting to think that I should make my own very small testing lib and drop Enzyme entirely. Most of Enzyme's features are not at all useful and many damaging to my test bases. I'd rather have something smaller that encourages better practices." And so, I tweeted that March 15th, 2018. I did that. I did exactly that. What I often do in my workshops is I try to build the abstraction that we're going to use so that you can use it better. 
So, I was, like, building Enzyme, and I realized the jump between what I had built, the little utilities that I had built as part of the workshop, from that to Enzyme was just a huge leap. And so, I thought, you know what? These utilities that I have built to teach Enzyme are actually really good. What if I just turned that into a testing utility? And that became Testing Library, which, fast forward to today, is the number one testing library for React. And it's recommended for testing React, and Vue, and Angular. The ideas that are in Testing Library got adopted by Playwright. If you're writing tests for anything in the browser, you are very likely using something that was either originally developed by me or inspired by the work that I did. And it all came from that testing workshop that I was working on. So, with that, I had not only that testing workshop; I had a number of other workshops around testing. And so I approached Joel Hooks from Egghead.io. I say, "Hey, I'm getting ready to record a bunch of Egghead courses. I've got, like, six or seven courses I want to do." And he'd seen my work before, you know, I was a very productive course creator. And he said, "Hey, how about we, you know, we've been thinking about doing this special thing. How about we make a website just dedicated to your courses?" And I said, "That sounds great." I was a little bit apprehensive because I knew that putting stuff on Egghead meant that I had, like, a built-in audience and everything that was on Egghead, so this would be really the first time of me just branching out with video material on my own. Because, otherwise, if it wasn't Egghead, it was Frontend Masters, and there was the built-in audience there. But yeah, we decided to go for it. And we released it in, I think, November. And it was that first week...which is always when you make the most is during the launch period. But that launch week, I made more than my PayPal salary for the entire year. And so, that was when I realized, oh, yeah, okay, let's go full-time on this because I don't need two PayPal salaries. I just need one. And then I can spend more time with my family and stuff. And especially as the kids are getting older, they're staying up later, and I want to hang out with them instead of with my computer at night [laughter], and so... WILL: I love how you explain that because I came in around 2018, 2019. And I remember Enzyme, and it was so confusing, so hard to work with, especially for, you know, a junior dev that's just trying to figure it out. And I remember Testing JavaScript and then using that library, and it was just so much easier to, like, grab whatever you needed to grab. Those utils made the biggest difference, and still today, they make a huge difference. So yes, I just resonate with what you're saying. That's amazing. KENT: Aw, thank you so much. WILL: Yeah. You did Testing JavaScript. And then what was your next course that you did? KENT: I quit PayPal, go full-time teaching. That first year, I actually did an update to Testing JavaScript. There were a couple of changes in Testing Library and other things that I needed to update it for. And then I started working on Epic React. So, while I was doing all this testing stuff, I was also very into React, creating a bunch of workshops around that. I was invited to speak all over the world to talk about React. And I had a couple of workshops already for React. So, I was invited to give workshops at these conferences about React. 
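As a point of reference for what that "smaller utility that encourages better practices" looks like in use, here is a minimal sketch of a React Testing Library test. The `Counter` component, its labels, and the file names are invented for illustration; this is not material from Testing JavaScript.

```typescript
// counter.test.tsx — a minimal React Testing Library test for a hypothetical Counter.
import "@testing-library/jest-dom";
import { render, screen } from "@testing-library/react";
import userEvent from "@testing-library/user-event";
import { Counter } from "./counter";

test("increments the count when the button is clicked", async () => {
  render(<Counter />);

  // Query the DOM the way a user would find things — by role and accessible
  // name — rather than reaching into component internals as Enzyme encouraged.
  const button = screen.getByRole("button", { name: /increment/i });
  await userEvent.click(button);

  expect(screen.getByText(/count: 1/i)).toBeInTheDocument();
});
```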
And so, I thought, you know, let's do this again, and we'll do it with React this time. The other thing was, I'd never really planned on being the testing guy. It just kind of happened, and I actually didn't really like it either. I wanted to be more broad than just testing. So, that kind of motivated me to say, hey, let's do something with React to be a little bit more broad. Yeah, so I worked on putting those workshops together and delivered them remotely. And then, yeah, COVID hit, and just really messed everything up [laughs] really bad. So, I had everything done on my end for Epic React by March of 2020, which is, like, immediately after COVID got started, in the U.S. at least. And so, yeah, then we actually didn't end up releasing Epic React until October that year, which, honestly [laughs], was a little bit frustrating for me because I was like, "Hey, guys, I have recorded all the videos and everything. Can we get this released?" But, like, that just was a really rough year for everybody. But yeah, so Egghead got the site put together. I did a bunch of interviews and stuff. And then we launched in October of 2020. That was way bigger than Testing JavaScript because Testing JavaScript was still very informed by my experience as an Egghead instructor, which, typically, the Egghead courses are, like, a video where watch me do this thing, and then you'll learn something and go apply it to your own stuff. And that's kind of what Testing JavaScript was built as. But as part of the update of Testing JavaScript in 2019, I added another workshop module called Testing Node Applications. And in that one, I decided, hey, typically, I would have a workshop version of my material and a course version. The workshop version had like instructions and exercises. And the course version was no instructions or anything. It was just, like, watch these videos. And it was just me doing the exercises. And with the update of Testing JavaScript, I added that Testing Node workshop, and I said, hey, what if we just, like, embrace the fact that these are exercises, and it's just, like, me recording the workshop? How I would deliver the workshop? And so, I tested that out, and that went really well. And so, I doubled down on that with Epic React. And I said, okay, now, this isn't just, like, watch these videos. This is a do the exercise and then watch me do the exercise. So, Epic React was not only a lot more material but the format of the material was more geared for retention and true practice and learning. And so, Epic React ended up doing much better than Testing JavaScript, and even still, is still doing a remarkable job as far as course material is concerned. And, like, so many people are getting a lot of really great knowledge from Epic React. So yeah, very gratifying to have that. WILL: Once again, I've used Epic React. It's taught me so many...stretched me. And I do like the format, so yes, I totally agree with that, yeah. The next thing, Remix, correct? KENT: Yeah. So, how I got into Remix, around the same time we finished recording Epic React videos, I was doing some other stuff kind of to keep content going and stuff while we were waiting to launch Epic React. And around that same time, my friend Ryan Florence and Michael Jackson––they were doing the React training thing. And so, we were technically competitors. Like I said, Ryan and I kind of joined forces temporarily for his Workshop Me thing, but that didn't end up working out very well. 
And Michael really wanted Ryan back, and so they got back together. And their React training business went way better than it had before. They were hiring people and all sorts of stuff. And then, a training business that focuses on in-person training just doesn't do very well when COVID comes around. And so, they ended up having to lay off everybody and tried to figure out, okay, now what are we going to do? Our income has gone overnight. This is a bit of a simplification. But they decided to build software and get paid for it like one does. So, they started building Remix. Ryan, actually, around that time, moved back to Utah. He and I would hang out sometimes, and he would share what he was working on with Michael. We would do, like, Zoom calls and stuff, too. I just got really excited about what they were working on. I could see the foundation was really solid, and I thought it was awesome. But I was still working on Epic React. I end up launching Epic React. He launches Remix the very next month as a developer preview thing. Yeah, it definitely...it looked a lot like current Remix in some ways but very, very different in lots of others. But I was super hooked on that. And so, I paid for the developer preview and started developing my website with it. And around the next year in August, I was getting close to finishing my website. My website is, like, pretty legit. If you haven't gone to kentcdodds.com. Yet, it is cooler than you think it is. There's a lot that goes into that website. So, I had a team help me with the product planning and getting illustrations and had somebody help me implement the designs and all that stuff. It was a pretty big project. And then, by August of 2021, Ryan and I were talking, and I said, "Hey, listen, I want to update Epic React to use Remix because I just think that is the best way to build React applications. But I have this little problem where Remix is a paid framework. That's just going to really reduce the number of people who are interested in learning what I have to teach. And on top of that, like, it just makes it difficult for people to test things out." And so, he, around that time, was like, "Hey, just hold off a little bit. We've got some announcements." And so, I think it was September when they announced that they'd raised VC money and they were going to make Remix open source. That was when Ryan said, "Hey, listen, Kent, I think that it's awesome you want to update Epic React to use Remix. But the problem is that Remix isn't even 1.0 yet. The community is super small. It needs a lot of help. If you release a course on Remix right now, then you're not going to get any attention because, like, nobody even knows what it is." So, part of me is like, yeah, that's true. But also, the other part of me is like, how do people find out what it is [laughs] unless there's, like, material about it? But he was right. And he said, "Listen, we've got a bunch of VC money. I've always wanted to work with you. How about we just hire you? And you can be a full-time teacher about Remix. But you don't have to charge anything. You just, like, make a bunch of stuff for free about Remix." I said, "That sounds great. But, you know, to make that worth my while because I'm really happy with what I'm doing with this teaching thing, like, I'm going to need a lot of Remix." And so, Michael Jackson was like, "How about we just make you a co-founder, and we give you a lot of Remix?" And I said, "Okay, let's do this." 
And so I jumped on board with them as a year-delayed co-founder. I guess that's pretty common. But, like, that felt kind of weird to me [laughs] to be called a co-founder. But yeah, so I joined up with them. I worked on documentation a little bit, mostly community building. I ran Remix Conf. Shopify was interested in what we were doing. And we were interested in what Shopify was doing because, at the time, they were working on Hydrogen, which was one of the early adopters of React Server Components. And, of course, everybody was interested in whether Remix was going to be adding support for server components. And Ryan put together a couple of experiments and found out that server components were nowhere near ready. And we could do better than server components could as of, you know, the time that he wrote the blog posts, like, two years ago. So, Hydrogen was working with server components. And I put us in touch with the Hydrogen team—I think it was me—to, like, talk with the Hydrogen team about, like, "Hey, how about instead of spending all this time building your own framework, you just build on top of Remix then you can, you know, make your Shopify starter projects just, like, a really thin layer on top of Remix and people will love it? And this is very important to us because we need to get users, especially really big and high profile users, so people will take us seriously." And so, we have this meeting. They fly a bunch of their people out to Salt Lake. They're asking us questions. We're asking them questions and saying, "Hey, listen, this is why server components are just not going to work out for you." Well, apparently, they didn't listen to us. It felt like they were just like, "No, we're highly invested in this. We've already sunk all this cost into this, but we're going to keep going." And they did end up shipping Hydrogen version 1 on top of server components, which I just thought was a big mistake. And it wasn't too long after that they came back and said, "Hey, we're kind of interested in having you guys join Shopify." So, right after Remix Conf, I go up into Michael's room at the hotel with Ryan. And they say, "Hey, listen, Kent, we're talking with Shopify about selling Remix and joining Shopify," and kind of bounced back and forth on whether we wanted to do it. All of us were just not sure. Because when I joined Remix, I was thinking, okay, we're going to build something, and it's going to be huge. This is going to be bigger than Vercel, like multibillion-dollar company. So, I really kind of struggled with thinking, hey, we're selling out. Like, we're just getting started here. So, Ryan and I ended up at RenderATL in Atlanta at that conference. We were both speaking there. And Ryan didn't fill out the right form. So, he actually didn't have a hotel room [laughs], and so he ended up staying in my room. I intentionally always get a double bedroom just in case somebody needs to stay with me because somebody did that for me once, and I just...it was really nice of them. So, I've always done that since. And so, I said, "Yeah, Ryan, you can stay with me." And so, we spent just a ton of time together. And this was all while we were trying to decide what to do with Shopify. And we had a lot of conversations about, like, what do we want for Remix in the future? And it was there that I realized, oh if I want to take this to, like, multi-billion dollar valuation, I've got to do things that I am not at all interested in doing. 
Like, you've got to build a business that is worth that much money and do business-related things. On top of all of that, to get any money out of it...because I just had a percentage of the company, not actually any money. There was no stock. So, the only way you can get money out of a situation like that is if you have a liquidation event like an IPO, which sounds, like, awful—I [laughs] would hate to go through an IP0—or you have to be bought. And if you're worth $2 billion, or 3, or whatever, who can buy you? There's almost nobody who can buy you at that valuation. Do you really want to outprice anybody that could possibly buy you? And then, on top of that, to get there, that's, like, a decade worth of your life of working really superduper hard to get to that point, and there's no guarantee. Ryan would always say a bird in the hand is worth two in the bush. He was saying Shopify is a bird in the hand, and we do not know what the future holds. And so, we were all finally convinced that, yeah, we want to sell, and so we decided, yeah, let's sell. And as the sale date grew closer, I was getting excited because I was like, oh, I can be back on the TC39 because Shopify is, like, I don't know if they're actually sending delegates to the TC39, but I'm sure that they would be interested if I ask them to, like, "Hey, let's be involved in the evolution of JavaScript." And I know they're on the Web Working Group. Like, they're on a bunch of different committees and stuff. And I just thought it'd be really cool to get involved in the web platform again. And then, on top of that, I just thought, you know what? I'll just spend all my time teaching Shopify developers how to use Remix. That sounds like a lot of fun. As things drew closer, I got more and more uneasy about that. And I thought, you know, I could probably do just as well for myself by going full-time teacher again. I've done this thing before. I just really like being a teacher and, like, having total control over everything that I do. And if I work at Shopify, they're going to tell me, "Hey, you need to, like, do this, and that, and the other." And I don't know if I want to go back to that. And so, I decided, this is awesome. Super, super good job, folks. I think I've done everything for you that you need me to do. I'm going to bail out. And so, yeah, Shopify wasn't super jazzed about that. But the deal went through anyway. And that's how I ended my time at Shopify. WILL: I love it. It's lining up perfectly because you say you left Shopify to go back doing more teaching. And then you released another course; that's Epic Web, correct? KENT: Right. That was the reason I left Shopify or I didn't join up with Shopify is because I wanted to work on Epic Web. In this 2010s blog post, one of the last things that I mention...toward the bottom, there's a section, KCD EDU, which is basically, like, I wanted to help someone go from zero to my level as an engineer in a single place where I teach just all of the things that I can teach to get somebody there. And so I wanted to call it KCD EDU, but I guess you have to be an accredited university to get that domain or something. But that was the idea. Erin Fox, back in 2020 she said, "I'm expecting you to announce your online Kent C. Dodds engineering bootcamp." And I replied, "I'm planning on doing this, no joke." So, I've been wanting to do this for a really long time. And so, leaving Remix was like, yeah, this is what I'm going to go do. I'm going to go build KCD EDU. 
And I was talking with Ryan at some point about, like, what I was planning on doing in the future. And something he said or something I said in that conversation made me realize, oh, shoot, I want to build Epic Web Dev. So, I've got Epic React. I don't want Epic Remix. I want people to, like, be web developers. Remix is just, like, an implementation detail. And so, I went and I was relieved to find that the domain was still available: epicweb.dev, and so I bought that. And so, I was always planning on, like, even while I was at Remix, eventually, I would leave Remix and go build Epic Web Dev. So, that's what I did. Starting in August, I decided, okay, how about this: I will build a legit real-world web application, and then I will use that to teach people how to build legit real-world web applications from start to finish. If it's included as, like, knowledge you would need to build this web app, then that's knowledge you need to be able to build a full-stack application. That was the idea. So, I started live streaming in, like, August or September, and I would live stream almost everyday development of this web app. So, people can go and watch those on my YouTube channel. I would livestream for, like, sometimes six hours at a time with breaks every 45 minutes. So, I'd just put it on a break slide, go for a quick walk, or take a drink, whatever, and then I would come back. And I would just, like, so much development and live streaming for a long time. Once I got, like, in a pretty good place with that, the app I was building was called Rocket Rental. It's like Airbnb for rocket ships. So, you could rent, like, your own rocket ship to other people to fly. So, it had to be, like, realistic enough that, like, you could relate it to whatever you were building but not realistic enough that people would actually think it was a real product [laughs]. I worked with Egghead again. They actually have a sister company now called Skill Recordings that's responsible for these types of products. And so, I was working with Skill Recordings on, like, they would get me designs. And then I would, like, work with other people to help implement some of those designs. And then, I started working on turning this stuff into workshops. And with Epic React, we have this workshop app that you run locally so that you can work in your own editor, in your own environment, and with your own editor plugins and all that stuff. I want you to practice the way that you're going to actually exercise that practice when you're done––when you're working at work. And so we have this workshop app with Epic React. Well, that was built with Create React app, very limited on what you could do. And so, I started working on a new workshop app that I just called KCD Shop, that was built with Remix. And so, now we've got a bunch of server-side stuff we can do. And this server side is running on your machine. And so, so much stuff that I can do with this thing. One of the big challenges with Epic React was that the video you watch is on epicreact.dev, but the exercises you run are on localhost. And so, you have to keep those things in sync. You'd see, okay, I'm in exercise one on the videos. Let me go find exercise one in the app and then find the file exercise one. So, you've got, like, three different things you've got to keep in sync. And so, with the workshop app for Epic Web, I said, how about we make it so that we can embed the video into the app? 
And so, you just have localhost running, and you see the video right above the instructions for the exercise. And so, you watch the video that kind of introduces the problem that you're going to be doing, and then you read the instructions. And then we can also make it so that we have links you can click or buttons you can click in the app that will open your editor exactly where you're supposed to go. So you don't have to keep anything in sync. You go to the app, and you watch the video. You read the instructions. You click this button. It opens your editor. And so, that's exactly what I did. And it's an amazing experience. It is phenomenal, not just for the workshop learners but for me, as a workshop developer, like, creating the workshop––it's just been phenomenal. Because, like, we also have this diff view where you can see the difference between your work in progress and the solution. So, if you get stuck, then it's very easy to see where you went wrong. It also means that we can build even very large applications as part of our workshop and our exercise where there are dozens or hundreds of files. And you don't have to worry about finding them because it'll tell you exactly which ones you need to be working in, so all sorts of really, really cool things. So, this workshop app––actually, took a lot of time and effort to build. But now that it's done, like, people are going through it now, and they're just loving it. So, I built the workshop app, I put the first workshop of Rocket Rental into this workshop app, and I delivered it. And I found out very quickly that a full application with all the bells and whistles you'd expect, like, tons of different routes and stuff, was just too much. Even with the workshop app, it was just really pretty difficult for people to gain enough context around what they were building to be effective. So, I was concerned about that. But then, around the same time, I started realizing that I had a marketing problem. And that is that with Testing JavaScript, people know that they're customers because they're like, I'm a JavaScript developer, and I know how to test––boom. I'm a Testing JavaScript customer. With Epic React, I join this company; they're using React; I need to know React, boom. I'm a customer of Epic React. But with something like Epic Web, it's just so broad that, like, yeah, I am a web developer. I just don't know if I'm a customer to Epic Web. Like, is Epic Web for only really advanced people, or is it only for really beginner people? Or is it only for people who are using this set of tools or... Like, it's just a very difficult thing to, like, identify with. And so I wanted to de-emphasize the fact that we used Remix because the fact is that you can walk away from this material and work in a Next.js app or a SvelteKit app and still use so much of the knowledge that you gained in that environment. So, I didn't want to focus on the fact that we're using any particular set of tools because the tools themselves I select them, not only because I think that they are really great tools but also because the knowledge you gain from these tools is very transferable. And I'm going to teach it in a way that's very transferable. That was the plan. But I still had this issue, like, I need people to be able to identify themselves as customers of this thing. So, what I decided to do through some, like, hints and inspiration from other people was how about I turn Rocket Rental into a much simpler app and make that a project starter? 
And while I was at Remix, actually, I directed the creation of this feature called Remix Stacks. It's basically a CLI that allows you to create a Remix app based on a template. I said I can make a Remix Stack out of this, and I called it the Epic Stack. And so, I just took all of the concepts that came from Rocket Rental and applied them to a much simpler app. It's just a note-taking app, but it has, like, all of the features that you would need to build in a typical application. So, it's got a database. It's got deployment, GitHub integration. So, you have GitHub Actions to run tests and stuff. It has the tests. It has authentication already implemented, and even two-factor auth, and third-party auth, and file upload, and, like, just tons and tons of stuff built in. And so, people can start a new project and ship that and have a lot of success, like, skip all the basic stuff.

So, I presented that at Remix Conf. I wasn't working at Remix anymore, but they asked me to run Remix Conf again, so I did. And I told them, "If I'm running it this year, I'm going to select myself to speak." And I spoke and introduced the Epic Stack there. And then that was when I started to create the workshops based on the Epic Stack. And so, now it was no longer we're going to have workshops to build Rocket Rental; it was we're going to have workshops to build the Epic Stack, with the idea being that if you build the thing, you are able to use it better, like, still following the same pattern I did with Testing JavaScript where we build a framework first. Like, before you start using Jest, we're building Jest, and same with Testing Library. We do the same thing with React. Before we bring in React, I teach you how to create DOM nodes yourself and render those to the page and all of that. And so, here with Epic Web, I'm going to teach you how to build the framework that you can use to build applications. So, that is what Epic Web is: effectively, we're building the Epic Stack. In the process, you learn all about really basic things, like, how do you get styles onto the page, all the way to really complex things like, how do you validate a user's email? Or how do you implement two-factor auth? Or how do you create a test database? So, you don't have to mock out the database, but you can still run your tests in isolation.

Around this time was when my wife and I were trying to become pregnant. And we got the news that we were expecting, and we were super excited. And so, I'm thinking, okay, I've got to ship this thing before the baby comes. Because who knows what happens after this baby comes? So, I am talking with Skill Recordings. I'm saying, "We've got to get this done by October." I think it was May. And so, I was thinking, like, okay, I've probably got, like, maybe eight days' worth of workshops here. And so, I kind of outlined all of the workshops. Like, I know what needs to be included. I know what the end looks like because I've got the Epic Stack. The end is the Epic Stack. The beginning is, like, a brand-new create-remix app, right there. So, I know what the start and the end look like. I can kind of figure out how much time I need to teach all of that. And I said, "Let's do eight days." And so, we got that scheduled and started selling tickets. And we sold out 30 tickets in just a couple of days, and that's what we originally planned for. I'm like, well, gosh, I can handle 80 people in a workshop. I've done that before, but that's about as far as I go. I don't really like going that big.
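[Editor's aside: on the test-database point mentioned a couple of paragraphs up, running tests against a real database in isolation rather than mocking it out, here is a minimal sketch of one common approach, assuming a Prisma + SQLite setup. It is not the Epic Stack's actual implementation; the paths and the use of `npx prisma migrate deploy` are assumptions for illustration.]

```ts
// test-db.ts: a minimal sketch of per-test database isolation (illustrative only).
import { execSync } from "node:child_process";
import { randomUUID } from "node:crypto";
import fs from "node:fs";
import os from "node:os";
import path from "node:path";

export function createTestDatabase() {
  // every test (or test file) gets its own throwaway SQLite database
  const dbPath = path.join(os.tmpdir(), `test-${randomUUID()}.db`);
  const databaseUrl = `file:${dbPath}`;

  // apply the project's existing migrations so the schema matches the real app
  execSync("npx prisma migrate deploy", {
    env: { ...process.env, DATABASE_URL: databaseUrl },
    stdio: "ignore",
  });

  // point the app code at the fresh database for the duration of the test
  process.env.DATABASE_URL = databaseUrl;

  return {
    databaseUrl,
    cleanup() {
      // delete the database file when the test is done
      fs.rmSync(dbPath, { force: true });
    },
  };
}
```

In a test setup file you would call `createTestDatabase()` before each test and `cleanup()` afterwards, so every test hits a real schema without sharing state with any other test.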
In fact, online, especially, I only like to go up to, like, 40. But we said, "Hey, let's knock this out of the park." So, we doubled it, and we sold another 30 seats. And so, it was sold out before even the early bird sale was over. So, that was pretty encouraging.

The problem was that I hadn't actually developed this material. I'd already given one workshop about testing with Rocket Rental, and I'd given one workshop about the fundamentals with Rocket Rental. But I hadn't done anything on the authentication, or the forms, or data modeling. Also, like, the Epic Notes app is different from Rocket Rental. So, I've got to rebuild those workshops. Like, the first workshop was going to start in, like, two weeks, maybe three weeks. And so, I'm working on these workshops. And I'm like, I've finished the first workshop, which was going to be a two-day workshop, and so I get that done. And so, that next week, I'm getting close to finished on the forms workshop, and then I start the workshops. And that was when I started to realize, oh, shoot, I am in huge trouble, because I not only have to deliver two workshops a week, so that's two days a week that I'm not able to work on the workshops, really, but then also develop the material as I go, which I don't normally do at all because I just don't like stressing myself out so much. But, like, I'd had this timeline put together, and I'm like, I need to ship this by October.

For about five weeks, I worked 80 to 100 hours a week, maybe more, in a row to get those workshops created [laughs]. And I do not recommend this, and I will never do it again. I can tell you this now. I didn't tell anybody at the time because I was worried that people would think, well, geez, is that the type of product you create, like, you're just rushing through this stuff? But I can tell you this safely now because the results speak for themselves. Like, these people loved this stuff. They ate it up. It was so good. I won't do this again. It's not something that I typically do. But it worked. And, like, I put in a crazy amount of work to make this work. People loved it. And yeah, I'm really, really happy with that.

The next step, though: so, it was eight days' worth of workshops in four weeks. And I realized, as I almost always realize when I'm presenting workshops, that, like, oh my gosh, I have way more material than I have time for. So, by

The Clever Investor Show
#56: Real Estate Game-Changing Strategies with Robert Wensley

Play Episode Listen Later Nov 29, 2023 67:40


In today's episode, we have Bryant, Forrest, and Robert Wensley from InvestorLift. They delve into the current real estate landscape, wholesale deals, and market dynamics. Join Cody and his guests as they explore topics such as The Mobile Marketing Machine, nationwide wholesaling, and how InvestorLift is staying ahead in this evolving industry. Tune in for valuable insights into market conditions, strategic market selection, and InvestorLift's latest features for investor profiles and wholesaling partnerships. They will also address challenges faced by newcomers and highlight the resources provided by platforms like sendustheDeals.com and dodealswithme.com. It's an episode packed with industry wisdom and a touch of humor! Robert Wensley has worked in the real estate industry for the past five years. As the Chief Executive Officer of InvestorLift, a company disrupting the real estate investment industry with data and technology, he has facilitated $1 billion in off-market real estate deals through the InvestorLift software. Additionally, he played a pivotal role in growing multiple wholesaling companies to over $1 million per month in assignment fees earned. Wensley is a Harvard graduate in Economics with minors in Finance and Government, and he holds various certifications, including Firebase, iOS App Development, and Python/Django Full Stack Web Developer.

Smart Venture Podcast
#147 Afore Capital's Managing Partner Gaurav Jain

Play Episode Listen Later Nov 20, 2023 50:08


Gaurav Jain is the Co-founder of Afore Capital, one of the top venture funds ($300M AUM) dedicated to pre-seed investments. Some of the investments Gaurav and/or Afore have been involved in include Modern Health (leading mental health platform), Hightouch (leader in data infra), Cruise Automation (acq by GM for $1B+), and Firebase (acq by Google). You can find Gaurav at https://twitter.com/gjain

You can learn more about:
  • How to identify talent in the early stage
  • How to build a top pre-seed VC fund
  • How to add value to top founders

=====================
YouTube: @GraceGongCEO Newsletter: @SmartVenture LinkedIn: @GraceGong TikTok: @GraceGongCEO IG: @GraceGongCEO Twitter: @GraceGongGG
=====================

Join the SVP fam with your host Grace Gong. In each episode, we are going to have conversations with some of the top investors, superstar founders, as well as well-known tech executives in Silicon Valley. We will have a coffee chat with them to learn their ways of thinking and actionable tips on how to build or invest in a successful company.

Syntax - Tasty Web Development Treats
Potluck × Is TypeScript Fancy Duct Tape × Back Pain × Cloud Service Rate Limits

Play Episode Listen Later Aug 9, 2023 70:36


In this potluck episode of Syntax, Wes and Scott answer your questions about TypeScript just being fancy duct tape, dealing with back pain while coding, rate limits on cloud services, what to use for an email provider, whether Firebase is a legit platform, and more!

Show Notes
00:11 Welcome
03:11 The Sunday scaries
06:03 Is TypeScript just a bunch of fancy Duct Tape? Is TypeScript saving us?
12:29 How do you go years into programming without back pain? (Hasty Treat - Stretching For Developers with Scott — Syntax Podcast 293)
23:51 Why don't cloud services provide an option to shut off services when a spending limit is reached? (DigitalOcean | Cloud Hosting for Builders; Vercel: Develop. Preview. Ship. For the best frontend teams)
28:41 How do you choose a CSS library for any project? (The most advanced responsive front-end framework in the world. | Foundation; 960 Grid System)
38:26 What's happening to Level Up Tuts? (Level Up Tutorials - Learn modern web development; Wheels - Skateboard Wheels - 60mm Cali Roll - Shark Wheel)
43:43 Not a sponsored Yeti spot
45:16 What do you do for email hosting? (Google Workspace; TechSoup Canada; Proton Mail: Get a private, secure, and encrypted email account; Outlook Microsoft 365 Plans; Scheduling Software Everyone Will Love · SavvyCal; Synology Photos)
50:34 Is Firebase ok to run an app long term with? (Firebase)
58:57 Am I wrong to not do productive work intensely?
01:34 SIIIIICK ××× PIIIICKS ×××

××× SIIIIICK ××× PIIIICKS ×××
Scott: MagSafe Charger, Anker 3-in-1 Cube with MagSafe
Wes: 6amLifestyle Headphone Hanger Stand Under Desk

Shameless Plugs
Scott: Sentry
Wes: Wes Bos Tutorials

Tweet us your tasty treats: Scott's Instagram, LevelUpTutorials Instagram, Wes' Instagram, Wes' Twitter, Wes' Facebook, Scott's Twitter. Make sure to include @SyntaxFM in your tweets. Wes Bos on Bluesky, Scott on Bluesky, Syntax on Bluesky.