Succinct introduces SP1 Hypercube for real-time Ethereum proving. Lido releases its V3 whitepaper. And Untron V2 goes live on the Superchain. Read more: https://ethdaily.io/706 Disclaimer: Content is for informational purposes only, not endorsement or investment advice. The accuracy of information is not guaranteed.
May 18th, 2025 - 2 Timothy 4:1-8 - The Truth Changes Lives (feat. Chris Riggs)
1) Teach the truth (V1&2)
2) We have a tendency to walk away from the truth (V3&4)
3) We will be rewarded for enduring in the truth (V5-8)
The fortress of Mimoyecques, located in the Pas-de-Calais, was built by Nazi Germany during the Second World War with the aim of mounting a massive attack on London. This underground site, hidden in a hillside near the English Channel, was to house a weapon as formidable as it was revolutionary: the V3 cannon. Unlike the V1 (flying bombs) and the V2 (the first ballistic missiles), the V3 was a supergun designed to strike the British capital from very long range, with no possibility of reprisal.

The fortress's strategic objective was clear: to subject London to constant bombardment, at a rate of several hundred shells per day, in the hope of breaking the population's morale and forcing the United Kingdom to capitulate. To that end, German engineers developed a complex multi-chamber gun system. The principle was to use a series of explosive charges spaced along the gun barrel, fired in sequence to progressively accelerate a 140 kg projectile. The estimated range was 165 kilometres, enough to hit the heart of London from Mimoyecques.

The site was chosen for its proximity to the English coast and for its favourable geology: the chalky subsoil allowed deep galleries to be dug, safe from bombing. Several inclined shafts were excavated to house the V3 tubes, along with an impressive logistical network of bunkers, casemates, and underground railways.

But the project fell behind schedule because of technical difficulties. Early tests revealed problems with stability and accuracy. Above all, the Allies were quickly alerted to the danger Mimoyecques posed, thanks to aerial photographs and intelligence supplied by the French Resistance.
The Royal Air Force launched several bombing raids in 1944, including a particularly effective one on 6 July using "Tallboy" bombs, capable of penetrating deep into the ground. One strike hit a launch shaft directly and killed many German workers, gravely compromising the project.

The invasion of Normandy in June 1944 sealed Mimoyecques' fate for good. The site was abandoned before it ever became operational; the V3 never fired on London.

In short, the fortress of Mimoyecques threatened London because it was the launch site for a weapon designed specifically to bombard the city continuously. It stands as one of the most ambitious attempts at psychological and technological warfare undertaken by the Nazi regime. Hosted by Acast. Visit acast.com/privacy for more information.
In this special two-year anniversary episode of Tabletop Tommies, Jonny and Phil return to their roots by revisiting the Welsh Nationals once more. Join them as they delve into the current state of the meta, particularly the dominance of armored warfare in V3 of the game. With five intense rounds behind them, they share insights from their games and what this means for future competitive play. The duo reflects on the effectiveness of different strategies, highlighting the shift towards tank-centric tactics and armored transports. Are they truly the key to victory, or is there room for other play styles? Jonny and Phil discuss their personal experiences, including compelling battles and tactical decisions, offering listeners a detailed analysis of the competitive scene. Tune in for a comprehensive breakdown of nations represented, player strategies, and what the results from Welsh Nationals suggest about the evolving landscape of the game. Whether you're a seasoned player or new to the competitive scene, this episode is packed with valuable insights and light-hearted banter. Want to support the channel? Why not use one of our affiliate links: Firestorm Games: https://www.firestormgames.co.uk/wargames-miniatures/bolt-action?aff=64a025ee621f1 Wayland Games: https://affiliates.waylandgames.co.uk/1240.html Warlord Games: https://r.warlordgames.com/aff/?TABLETOPTOMMIES You can also support our endeavour to produce Bolt Action content on Patreon: https://www.patreon.com/TabletopTommies Or you can support these two mugs by buying a fancy mug: https://tabletoptommies.com/collection/new/
In this episode, you'll hear about the latest developments in tailoring cancer treatments to individual patients using precision oncology. Two thought leaders, Simone Ndujiuba, a Clinical Oncology Pharmacist at Prime Therapeutics, and Karan Cushman, Head of Brand Experience and host of The Precision Medicine Podcast for Trapelo Health, discuss real-world research that is paving the way for Prime and our partners to help providers reduce turnaround times so patients can start treatment as soon as possible. Join your host Maryam Tabatabai as they dig into this evolving topic of precision oncology. www.primetherapeutics.com

Chapters
Defining precision medicine (08:50)
Evaluating the real-world operational process of biomarker testing (14:36)
Turnaround times are crucial (17:40)
A patient's view into the importance of time (24:39)
Technology and process aid in time and process (29:30)
Helping bridge knowledge gaps for providers and payers (33:55)
The focus is on precision oncology right now (37:00)
Precision medicine in other disease categories (40:09)
Future of precision oncology is bright (42:07)

References
Singh, B.P., et al. (2019). Molecular profiling (MP) for malignancies: Knowledge gaps and variable practice patterns among United States oncologists (Onc). American Society of Clinical Oncology. https://meetings.asco.org/abstracts-presentations/173392
Evangelist, M.C., et al. (2023). Contemporary biomarker testing rates in both early and advanced NSCLC: Results from the MYLUNG pragmatic study. Journal of Clinical Oncology, 41(Supplement 16). https://doi.org/10.1200/JCO.2023.41.16_suppl.9109
Ossowski, S., et al. (2022). Improving time to molecular testing results in patients with newly diagnosed, metastatic non-small cell lung cancer. Journal of Clinical Oncology, 18(11). https://doi.org/10.1200/OP.22.00260
Naithani N, Atal AT, Tilak TVSVGK, et al. Precision medicine: Uses and challenges. Med J Armed Forces India. 2021 Jul;77(3):258-265. doi: 10.1016/j.mjafi.2021.06.020.
Jørgensen JT. Twenty Years with Personalized Medicine: Past, Present, and Future of Individualized Pharmacotherapy. Oncologist. 2019 Jul;24(7):e432-e440. doi: 10.1634/theoncologist.2019-0054.
MedlinePlus. What is genetic testing? Retrieved on April 21, 2025 from https://medlineplus.gov/genetics/understanding/testing/genetictesting/.
MedlinePlus. What is pharmacogenetic testing? Retrieved on April 21, 2025 from https://medlineplus.gov/lab-tests/pharmacogenetic-tests/#:~:text=Pharmacogenetics%20(also%20called%20pharmacogenomics)%20is,your%20height%20and%20eye%20color.
Riely GJ, Wood DE, Aisner DL, et al. National Comprehensive Cancer Network (NCCN) clinical practice guidelines: non-small cell lung cancer, V3.2025. Retrieved April 21, 2025 from https://www.nccn.org/professionals/physician_gls/pdf/nscl.pdf.
Benson AB, Venook AP, Adam M, et al. National Comprehensive Cancer Network (NCCN) clinical practice guidelines: colon cancer, V3.2025. Retrieved April 21, 2025 from https://www.nccn.org/professionals/physician_gls/pdf/colon.pdf.
Rosenberg PS, Miranda-Filho A. Cancer Incidence Trends in Successive Social Generations in the US. JAMA Netw Open. 2024 Jun 3;7(6):e2415731. doi: 10.1001/jamanetworkopen.2024.15731. PMID: 38857048; PMCID: PMC11165384.
Smeltzer MP, Wynes MW, Lantuejoul S, et al. The International Association for the Study of Lung Cancer Global Survey on Molecular Testing in Lung Cancer. J Thorac Oncol. 2020 Sep;15(9):1434-1448. doi: 10.1016/j.jtho.2020.05.002.

The views and opinions expressed by the guest featured on this podcast are their own and do not necessarily reflect the official policy or position of Prime Therapeutics LLC, its hosts, or its affiliates. The guest's appearance on this podcast does not imply an endorsement of their views, products, or services by Prime Therapeutics LLC. All content provided is for informational purposes only and should not be construed as professional advice.
We thought it would be an excellent opportunity to look back at 2024—our favorite shop upgrades, biggest projects, and lessons learned—and then peek ahead at what's in store for 2025.

Hosted by Brian Benham
Portfolio: https://www.benhamdesignconcepts.com/
Brian Benham on BlueSky: https://bsky.app/profile/benhamdesignconcepts.com
YouTube: https://www.youtube.com/channel/UCXO8f1IIliMKKlu5PgSpodQ

Greg Porter
https://skyscraperguitars.com/
Greg on Instagram: https://www.instagram.com/gregsgaragekc/
YouTube: https://www.youtube.com/c/SkyscraperGuitars
YouTube: https://www.youtube.com/c/GregsGarage

Show Notes: Reflecting on 2024 and Looking Ahead to 2025: Shop Upgrades, Projects, and Goals

Shop Upgrades That Made a Difference in 2024

Organization & Tool Storage
One of the biggest game-changers for both of us was improving shop organization. A mechanic once said, "Don't put it down—put it away." That mindset has helped keep tools in their proper places, eliminating the frustration of searching for misplaced items.
- Brian's Upgrade: A high-quality toolbox (not just a basic Harbor Freight or Home Depot option) made a massive difference. A well-organized toolbox reflects a well-organized workflow.
- Greg's Upgrade: Adding Husky cabinets under his table saw extension improved storage and accessibility.

The Incra Miter Gauge
Brian recommended the Incra Miter Gauge, and it quickly became one of Greg's most-used tools in 2024. Why it's great:
- Eliminates play in the miter slot for precise, repeatable cuts.
- Features an integrated stop block system (similar to high-end aftermarket options).
- Fine-adjustment capabilities make it perfect for exact angles.
Greg admits he was skeptical at first, preferring crosscut sleds, but after a year of use, he hasn't touched his sled since.
The Black Box Vacuum Pump for CNC Workholding
Greg's Black Box vacuum pump transformed his CNC workflow.
- The Problem: Workholding on a CNC can be a nightmare—tabs, screws, and clamps often lead to failed cuts.
- The Solution: The vacuum pump holds sheets firmly in place, reducing material waste and improving efficiency.
- Success rate went from ~75% to 98%.
- Added automation: The CNC now turns the pump on/off automatically via relay control.

The Track Saw Revolution
Greg was a longtime skeptic of track saws, preferring a circular saw and straightedge. But after breaking down hundreds of sheets of MDF, he caved and bought a Ridgid cordless track saw. Why it won him over:
- Faster, more accurate breakdown of sheet goods.
- Paired with an MFT-style workbench (from Fred Sexton of Bristol Artisan Co.) and Bora Speed Horses, creating a portable, efficient cutting station.
- No more wrestling full sheets—everything gets broken down outside before entering the shop.

The Festool Debate
Brian and Greg had a fun back-and-forth about Festool.
- Pros: Industry-leading dust collection (great for job sites and clean shops); the Domino joiner is a game-changer for furniture makers.
- Cons: High price tag; some tools may not justify the cost for hobbyists or those who don't need ultra-portability.

Packout Systems & Tool Storage
Both Brian and Greg explored different modular storage systems (Milwaukee Packout, Klein, etc.).
- Greg's Pick: Klein tool cases—expensive but rugged, with clear lids and customizable bins.
- Brian's Experience: Packout systems are great for contractors but may be overkill for shop-only use.

Harbor Freight's Improvement
Greg noted that Harbor Freight's quality has significantly improved over the years.
- Icon Tools Line: Their ratcheting wrenches and socket sets now rival mid-tier brands like Husky and Craftsman.
- Toolboxes: No longer the flimsy junk of the past—now a solid budget option.
Notable Projects from 2024

Brian's Big Builds
- Las Vegas Casino Project: A massive, high-profile installation that pushed his team's limits.
- Red Rocks Amphitheater Work: A challenging but rewarding project (technically late 2023, but close enough!).
Lesson Learned: Installation is just as critical as fabrication. Even the best-built pieces can fail if not installed correctly.

Greg's Product Expansion
When a competitor in the guitar-making jigs and tools space went out of business, Greg saw an opportunity.
- Redesigned and released over 20 new products, including side benders (for shaping guitar sides), outside molds and cutaway forms (previously unavailable), and mortise & tenon jigs (V3 design, improved from older versions).
- Backward compatibility: Ensured his new tools worked with older systems, earning gratitude from customers.

Looking Ahead to 2025

Greg's Goals:
- Build his first commissioned guitar – learning from luthier Robbie O'Brien to refine construction techniques.
- Expand Skyscraper Guitars – transition from a one-man operation to a scalable business with employees.
- Finish the Porsche 356 project – after a busy 2024, he's eager to get back to this passion build.

Brian's Plans:
- Grow his YouTube presence – shifting focus to more educational content for aspiring woodworkers.
- Streamline production – finding ways to balance custom work with repeatable, profitable projects.

Final Thoughts
2024 was a year of tool upgrades, shop efficiency, and big projects. For 2025, the focus shifts to growth, refinement, and new challenges. What were your biggest shop upgrades or projects in 2024? What are you looking forward to in 2025? Let us know in the comments!
In this episode of Sidecar Sync, Amith and Mallory dive deep into two major AI model releases—Google's Gemini 2.5 Pro and DeepSeek's V3—and explore how they're reshaping the landscape of artificial intelligence. They discuss the technical breakthroughs, from million-token context windows to open-source innovations, and what it all means for association leaders. This is a jam-packed episode with strategic insights for the forward-thinking association exec.
Eiso Kant, CTO of poolside AI, discusses the company's approach to building frontier AI foundation models, particularly focused on software development. Their unique strategy is reinforcement learning from code execution feedback, which is an important axis for scaling AI capabilities beyond just increasing model size or data volume. Kant predicts human-level AI in knowledge work could be achieved within 18-36 months, outlining poolside's vision to dramatically increase software development productivity and accessibility.

SPONSOR MESSAGES:
***
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Go to https://tufalabs.ai/
***

Eiso Kant:
https://x.com/eisokant
https://poolside.ai/

TRANSCRIPT:
https://www.dropbox.com/scl/fi/szepl6taqziyqie9wgmk9/poolside.pdf?rlkey=iqar7dcwshyrpeoz0xa76k422&dl=0

TOC:
1. Foundation Models and AI Strategy
[00:00:00] 1.1 Foundation Models and Timeline Predictions for AI Development
[00:02:55] 1.2 Poolside AI's Corporate History and Strategic Vision
[00:06:48] 1.3 Foundation Models vs Enterprise Customization Trade-offs
2. Reinforcement Learning and Model Economics
[00:15:42] 2.1 Reinforcement Learning and Code Execution Feedback Approaches
[00:22:06] 2.2 Model Economics and Experimental Optimization
3. Enterprise AI Implementation
[00:25:20] 3.1 Poolside's Enterprise Deployment Strategy and Infrastructure
[00:26:00] 3.2 Enterprise-First Business Model and Market Focus
[00:27:05] 3.3 Foundation Models and AGI Development Approach
[00:29:24] 3.4 DeepSeek Case Study and Infrastructure Requirements
4. LLM Architecture and Performance
[00:30:15] 4.1 Distributed Training and Hardware Architecture Optimization
[00:33:01] 4.2 Model Scaling Strategies and Chinchilla Optimality Trade-offs
[00:36:04] 4.3 Emergent Reasoning and Model Architecture Comparisons
[00:43:26] 4.4 Balancing Creativity and Determinism in AI Models
[00:50:01] 4.5 AI-Assisted Software Development Evolution
5. AI Systems Engineering and Scalability
[00:58:31] 5.1 Enterprise AI Productivity and Implementation Challenges
[00:58:40] 5.2 Low-Code Solutions and Enterprise Hiring Trends
[01:01:25] 5.3 Distributed Systems and Engineering Complexity
[01:01:50] 5.4 GenAI Architecture and Scalability Patterns
[01:01:55] 5.5 Scaling Limitations and Architectural Patterns in AI Code Generation
6. AI Safety and Future Capabilities
[01:06:23] 6.1 Semantic Understanding and Language Model Reasoning Approaches
[01:12:42] 6.2 Model Interpretability and Safety Considerations in AI Systems
[01:16:27] 6.3 AI vs Human Capabilities in Software Development
[01:33:45] 6.4 Enterprise Deployment and Security Architecture

CORE REFS (see shownotes for URLs/more refs):
[00:15:45] Research demonstrating how training on model-generated content leads to distribution collapse in AI models, Ilia Shumailov et al. (Key finding on synthetic data risk)
[00:20:05] Foundational paper introducing Word2Vec for computing word vector representations, Tomas Mikolov et al. (Seminal NLP technique)
[00:22:15] OpenAI O3 model's breakthrough performance on ARC Prize Challenge, OpenAI (Significant AI reasoning benchmark achievement)
[00:22:40] Seminal paper proposing a formal definition of intelligence as skill-acquisition efficiency, François Chollet (Influential AI definition/philosophy)
[00:30:30] Technical documentation of DeepSeek's V3 model architecture and capabilities, DeepSeek AI (Details on a major new model)
[00:34:30] Foundational paper establishing optimal scaling laws for LLM training, Jordan Hoffmann et al. (Key paper on LLM scaling)
[00:45:45] Seminal essay arguing that scaling computation consistently trumps human-engineered solutions in AI, Richard S. Sutton (Influential "Bitter Lesson" perspective)
Plus Ghiblified Pics Flood The Internet

Like this? Get AIDAILY, delivered to your inbox, every weekday. Subscribe to our newsletter at https://aidaily.us

AI Might Lowkey Be Sneaking Politics into Your Feed—Here's What's Up
Turns out some AI models (like China's DeepSeek) could secretly push political vibes into their results. People are stressing this hidden bias might subtly shape your views, sparking debates on keeping AI fair. The big question: can we keep AI chill, neutral, and hype-free?

Stanford's NNetNav: AI Agent That Masters Websites Like a Pro
Stanford's NNetNav is an open-source AI agent that learns to navigate websites by exploring them, much like how kids learn through curiosity. This self-taught approach enables NNetNav to perform online tasks as effectively—or even better—than models like GPT-4, all without needing human-labeled data. It's a big step toward AI that can adapt and tackle real-world web challenges on its own.

AI 'Ghiblified' Pics Are Blowing Up—Here's the Heartwarming Story Behind the Trend
AI-generated images mimicking Studio Ghibli's iconic style are taking over social media, thanks to Seattle engineer Grant Slatton. His AI-crafted family portrait sparked a viral wave, with users sharing their own "Ghiblified" photos. While the trend's spreading joy, it's also igniting debates on the ethics of using AI to replicate distinct artistic styles.

AI's Reasoning Skills: ChatGPT vs. DeepSeek—Who's Winning?
AI models like ChatGPT and China's DeepSeek are stepping up their reasoning game, aiming to handle complex tasks better. DeepSeek's latest upgrade, V3-0324, shows improved reasoning and coding skills, intensifying competition with U.S. tech giants. However, studies reveal that Chain-of-Thought reasoning in AI isn't always reliable, with models sometimes providing logically contradictory answers.
AI Medical Scribes Ease Doctor Burnout, But Wallets Stay Light
AI-powered medical scribes are stepping in to transcribe patient visits, helping doctors chill out and feel less burned out. Trials at places like Mass General Brigham saw a 40% drop in doctor burnout. But when it comes to saving cash or boosting efficiency, these AI helpers aren't quite delivering yet.

AI Recipe Generator Promises Restaurant-Quality Meals at Home—But Does It Deliver?
SideChef's RecipeGen AI claims to transform any food photo into a step-by-step recipe, aiming to help users recreate restaurant dishes at home. However, when tested with a brunch photo from Malibu Farm, the AI's output missed key ingredients like strawberry butter and ricotta, and added unrelated items such as bell peppers and onions. Attempts with other dishes, like ramen, resulted in errors, suggesting the tool struggles with accuracy and reliability. While the concept is intriguing, the current execution leaves much to be desired.
In this episode of Solana Weekly, host Thomas sits down with Gus Marquez from Parcl to explore how they're revolutionizing real estate investment on the Solana blockchain.

Episode Highlights:
- Discover how Parcl creates synthetic exposure to real estate returns through data-driven indexes without tokenizing physical properties
- Learn about Parcl Labs, which indexes every home in the U.S. and provides institutional-grade data
- Explore the inefficiencies in traditional real estate markets and how Parcl addresses them
- Understand the advantages: 18 basis points transaction costs vs. 2-5% for physical real estate, and leverage up to 50x
- Hear why Solana was the perfect blockchain for Parcl's vision

About the Guest:
Gus Marquez is part of the team at Parcl, working to make real estate investment more accessible and efficient by bringing it on-chain. Parcl allows users to long or short specific real estate markets with none of the maintenance headaches of physical ownership.

Key Moments:
The founders conceived Parcl during COVID while observing migration trends, inspired by the lack of tools to short real estate markets. After several iterations, the current V3 platform offers sophisticated risk management and daily price updates based on extensive data aggregation.

Whether you're saving for a home while tracking market returns, hedging property value for retirement, or seeking investment diversification without property management headaches, Parcl offers a compelling solution for both retail and institutional investors.

Visit parcl.co to learn more, and look for parcllabs.com launching soon with institutional-quality real estate reports.

This episode is for informational purposes only and does not constitute financial or investment advice.

More at solanaweekly.fun/episodes

Get full access to The Dramas of Thomas Bahamas at thomasbahamas.substack.com/subscribe
Welcome to another exciting episode of Tabletop Tommies, where Jonny and Phil delve into the final installment (for now) of the Armies Of... series, focusing on the Netherlands. In this episode, our hosts explore the unique and quirky characteristics of the Dutch army in V3, comparing them to previous versions while discussing their potential in tabletop warfare. With expectations high, Jonny and Phil break down what makes the Netherlands stand out, from their artillery strategies to their special rules, revealing how these elements combine to create a more flavourful force. The discussion also covers the challenges and advantages of using the Dutch army, providing listeners with tactical insights that could redefine their gaming experience. Join the Tabletop Tommies as they uncover whether this minor nation can indeed hold its own on the battlefield or even punch above its weight. Want to support the channel? Why not use one of our affiliate links: Firestorm Games: https://www.firestormgames.co.uk/wargames-miniatures/bolt-action?aff=64a025ee621f1 Wayland Games: https://affiliates.waylandgames.co.uk/1240.html Warlord Games: https://r.warlordgames.com/aff/?TABLETOPTOMMIES You can also support our endeavour to produce Bolt Action content on Patreon: https://www.patreon.com/TabletopTommies Or you can support these two mugs by buying a fancy mug: https://tabletoptommies.com/collection/new/
Artist David Zvonař joins LAB the Podcast to share a glimpse into his story and for a conversation on photography, beauty, and his time in Brno shooting V3's Sehnsucht Symphony recording. Coming soon: Sehnsucht Film Documentary and Sehnsucht Photobook!
Visit: DavidZvonar.com
Visit: https://vuvivo.com/
Support / Sponsor: https://vuvivo.com/support
Support the show
Do you ever look at your to-do list and feel overwhelmed by the never-ending list of things that require your attention? Do you struggle to visualize which items take priority, so you just end up doing nothing? You're not alone. This is one of the most common reasons clients come to work with me. This week, episode 252 of the Positively LivingⓇ Podcast is about what to do when everything in life is urgent!

In this episode of the Positively LivingⓇ Podcast, I share why prioritization is crucial for maintaining balance and achieving meaningful progress, and give you actionable steps to take right now to transform your approach to getting things done.

I cover the following topics:
- Psychological barriers that keep people in a loop of reactivity instead of strategic action.
- Common mistakes people make when trying to manage their tasks.
- Proactive prioritization techniques to consider, including one of my favorites.
- How to own your choices, no matter the outcome.

It's time to take intentional, purposeful action. Start by decluttering your to-do list by strategically evaluating your tasks. Remember, when you don't make a choice, the choice is made for you. Prioritize intentionally and reclaim control of your time and energy.

Thank you for listening! If you enjoyed this episode, take a screenshot of the episode to post in your stories and tag me! And don't forget to follow, rate, and review the podcast and tell me your key takeaways!

Learn more about Positively LivingⓇ and Lisa at https://positivelyproductive.com/podcast/

Could you use some support? The Quickstart Coaching session is a way to get to know your productivity path, fast! A speed-round strategy session is perfect for a quick win and to see what coaching can do; the Quickstart will encourage and inspire you to take intentional, effective action!
Go to https://www.positivelyproductive.com/plpquick for a special listener discount!

CONNECT WITH LISA ZAWROTNY:
Facebook
Instagram
Resources
Work with Lisa!

LINKS MENTIONED IN THIS EPISODE:
(Find links to books/gear on the Positively Productive Resources Page.)
Ep 53: How To Tell If I'm Codependent with Mallory Jackson
Ep 116: The Most Important Boundary for People Pleasers
Ep 232: How to Prioritize Personal Time by Setting Boundaries
Ep 235: When You Must Say No for a Less Stressful Life
Dance Song Playlist V1, V2, V3
Edition of La Ventana at 4 p.m. on Thursday, 6 March. Madrid hosts, through this coming Sunday, the 44th edition of Arco, the international contemporary art fair, a key event for the sector that brings together renowned artists from around the world. One of the stars of this edition is the Catalan sculptor Jaume Plensa, who presents at the stand of the newspaper 'El País' his work 'Entre sueños V3.0'. This set of sculptures, made up of eight alabaster heads with closed eyes, invites reflection on immigration and its implications for contemporary society. We talk with the artist on 'La Ventana'. We also congratulate the pianist James Rhodes and invite him to answer the classic question test.
We're experimenting and would love to hear from you!In this episode of 'Discover Daily', we begin with a tease from Apple CEO Tim Cook. His message on X that "there's something in the air" has sparked speculation about new MacBook Air models featuring the M4 chip. These potential upgrades include a 25% boost in multi-core CPU performance, enhanced AI capabilities, and improved features like a 12MP Center Stage camera and Wi-Fi 6E support. Apple's shift to a more subtle announcement strategy marks a departure from their traditional product launch approach.We also delve into the world of AI economics with Chinese startup DeepSeek's claim of a theoretical 545% cost-profit margin for its AI models. While this figure is based on calculations involving their V3 and R1 inference systems, real-world factors significantly reduce actual revenue. DeepSeek's aggressive pricing strategy and low development costs have sparked debate within the tech community and impacted AI-related stocks.The episode's main focus is the discovery of vast "gold hydrogen" reserves beneath 30 U.S. states, as revealed by a groundbreaking USGS map. This natural hydrogen, formed through a process called serpentinization in geological formations known as rift-inversion orogens, could revolutionize clean energy production. 
The abundance and widespread distribution of these reserves may accelerate the transition to sustainable energy sources, potentially reshaping the global energy landscape and creating new economic opportunities in regions with significant deposits.

From Perplexity's Discover Feed:
https://www.perplexity.ai/page/apple-air-product-teased-QhTieZlcTwWodiMLzGzP3g
https://www.perplexity.ai/page/deepseek-s-theoretical-545-mar-_vk4xxCjSt.tLxQJCoU2sg
https://www.perplexity.ai/page/massive-gold-hydrogen-reserves-kRgxDixrTJCI1W17S2zcbw

**Introducing Perplexity Deep Research:**
https://www.perplexity.ai/hub/blog/introducing-perplexity-deep-research

Perplexity is the fastest and most powerful way to search the web. Perplexity crawls the web and curates the most relevant and up-to-date sources (from academic papers to Reddit threads) to create the perfect response to any question or topic you're interested in. Take the world's knowledge with you anywhere. Available on iOS and Android.

Join our growing Discord community for the latest updates and exclusive content. Follow us on: Instagram, Threads, X (Twitter), YouTube, LinkedIn
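A quick note on the DeepSeek margin figure mentioned above: a "cost-profit margin" is just profit expressed as a percentage of cost, so a theoretical 545% margin means billed revenue would be roughly 6.45x the inference cost. Here is a minimal sketch of that arithmetic; the numbers in the example are made up for illustration and are not DeepSeek's actual figures.

```python
def cost_profit_margin(revenue: float, cost: float) -> float:
    """Profit as a percentage of cost: (revenue - cost) / cost * 100."""
    return (revenue - cost) / cost * 100

# Hypothetical numbers: $645 of theoretical revenue on $100 of inference cost
# gives a 545% cost-profit margin (revenue is 6.45x cost).
print(cost_profit_margin(645.0, 100.0))  # 545.0
```

Discounted pricing, off-peak rates, and free-tier usage all lower realized revenue relative to the theoretical figure, which is why the real-world margin is far smaller.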
Gareth and Ted battle hangovers to chat about the Canon PowerShot V1 and V3, alongside an aluminium vinyl player, Anbernic's new ROM delivery app, an upcoming tablet from Oppo, and Pixel Penis. With Gareth Myles and Ted Salmon.

Join us on MeWe
RSS Link: https://techaddicts.libsyn.com/rss
iTunes | YouTube Music | Stitcher | Tunein | Spotify
Amazon | Pocket Casts | Castbox | PodHubUK

Feedback, Fallout and Contributions

JTManuel: Wow! I never thought I'd get a mention. Thank you guys. I've been listening to the both of you since forever (Ted from PSC, Gareth since Mobile Tech Addicts) and have yet to be disappointed. I too am like Ted. I'm currently an RGN working in a private care home here in Barrow-in-Furness and have been a tech enthusiast since I got my Atari 800XL. I then moved on to the NES (not SNES), then all iterations of Game Boy. And you guys are all so relatable. I am always looking forward to both PSC and Tech Addicts for my daily walk to work. Keep up the great work and cheers from the North West!

@CheapRetroGaming: Thanks so much for sharing this interview; I've only watched a few videos from Slopes Game Room, but I've enjoyed what I've seen thus far. For the podcast/interview here, I liked the different stories of what Daniel had gone through, such as almost getting scammed by the other channel, but thankfully avoiding that. I had also never heard of his epic Amico video that he had produced either. I hope to check it out later! Years ago I was really interested in that system because I liked the idea of the unique controller and the family-friendly games, but of course, I have no interest in getting it now. It's sad how all that has panned out. Thanks again for your interview!
News A hard mistake to make: Pixel Emergency SOS accidentally shares someone's nudes Canon PowerShot V1 (£785) looks like a Sony ZV-1 II (£799) beating compact vlogging camera Anbernic Update - Netflix of Retro games - Alt link This wild turntable plays vinyl without a tonearm and is a solid lump of aluminium - Also an AC/DC Pro-Ject Turntable Oppo Pad 4 Pro to debut with Snapdragon 8 Elite in April Banters: Knocking out a Quick Bant YouTube Premium Lite plan YouTube's Ghost Town Bargain Basement: Best UK deals and tech on sale we have spotted Ali Foote on UseNiy Rechargeable Lithium Batteries AA 8-Pack with Charging-Storage Box £14.99 from £26.99 Lexar NQ100 2.5” SATA III (6Gb/s) 240GB SSD - £13.99 Lenovo Tab Plus £189 from £289 UGREEN USB-C Charger 65W Fast Charger Plug - £39.09 SanDisk Ultra 1.5TB microSD Card £114.50 from £148.99 Crucial T500 2TB SSD PCIe Gen4 NVMe M.2 Internal Gaming SSD - £99.95 1More HQ31 ANC Headphones with 90 hour battery, £59.99 from £79.99/£69.99 Main Show URL: http://www.techaddicts.uk | PodHubUK Contact: gareth@techaddicts.uk | @techaddictsuk Gareth - @garethmyles | Mastodon | Bluesky | garethmyles.com | Gareth's Ko-Fi Ted - tedsalmon.com | Ted's PayPal | Mastodon | Ted's AmazonYouTube: Tech Addicts
Renowned Sculptor Timothy Schmalz joins LAB the Podcast for a conversation on beauty, faith and the powerful role of public art. The Portico, in downtown Tampa, is home to Timothy's moving “Homeless Jesus.” Join us for the conversation and if you are in Tampa, find your way to the Portico to encounter Timothy's work. Timothy Paul Schmalz Learn more about VU VI VO: https://vuvivo.com/Support the work of V3: https://vuvivo.com/supportSupport the show
Tesla is rolling out true V4 Supercharger cabinets, bringing 1,000V and up to 500kW charging, a massive leap over V3. These upgraded cabinets will enable faster charging for high-voltage EVs like the Lucid Gravity, Hyundai/Kia E-GMP cars, the Porsche Taycan, and even the Tesla Cybertruck. Let's talk about it.Shoutout to our sponsors for more information find their links below:- Fort Collins Kia: Visit focokia.com for full details. Disclaimer: *Delivery covers up to $1,000.Find us on all of these places:YouTube: https://www.youtube.com/outofspecpodcastApple Podcasts: https://podcasts.apple.com/us/podcast/out-of-spec-podcast/id1576636119Spotify: https://open.spotify.com/show/0tKIQfKL9oaHc1DLOTWvbdAmazon: https://music.amazon.com/podcasts/473692b9-05b9-41f9-9b38-9f86fbdabee7/OUT-OF-SPEC-PODCASTFor further inquiries please email podcast@outofspecstudios.com#tesla #supercharging #teslav4 Hosted on Acast. See acast.com/privacy for more information.
In this episode of Tabletop Tommies, Jonny and Phil delve into the fascinating world of partisan armies in Bolt Action. As they navigate through the unique rules and strategies that define these guerrilla forces, listeners will gain insights into the tactical evolution from V2 to V3. The conversation highlights the intriguing special rules of the partisans, such as infiltration and the dearly missed hidden bomb rule, while also discussing new additions like the home country rule. Through their analysis, Jonny and Phil offer potential strategies for adapting to changes in V3, especially when facing formidable opponents like the Finns. Join us for an engaging discussion on how to optimize your partisan army, learn about the historical context, and explore some creative army building ideas. From utilizing captured vehicles to expanding your force with cavalry, this episode provides essential tips for both new and seasoned Bolt Action players. Want to support the channel? Why not use one of our affiliate links: Firestorm Games: https://www.firestormgames.co.uk/wargames-miniatures/bolt-action?aff=64a025ee621f1 Wayland Games: https://affiliates.waylandgames.co.uk/1240.html Warlord Games: https://r.warlordgames.com/aff/?TABLETOPTOMMIES You can also support our endeavour to produce Bolt Action content on Patreon: https://www.patreon.com/TabletopTommies Or you can support these two mugs by buying a fancy mug: https://tabletoptommies.com/collection/new/
xAI launched its new artificial intelligence model, Grok 3, which claims to exceed the performance of OpenAI's GPT-4o, DeepSeek's V3, and Google's Gemini. Grok 3 features over ten times the computational power of its predecessor, Grok 2. The name "Grok," borrowed from Robert Heinlein's novel Stranger in a Strange Land, signifies deep understanding. The model demonstrates capabilities in math reasoning, coding, and creativity by merging two games into a new format. Grok 3's training allows it to apply its learning across multiple tasks. xAI offers Grok 3 to Premium+ X subscribers for a monthly fee.Learn more on this news visit us at: https://greyjournal.net/news/ Hosted on Acast. See acast.com/privacy for more information.
The Draghi report "was published in September"; today, "five months later," it emerges that "what is in the report is even more urgent than it was five months ago." "There is a very difficult situation. Now, we have our values. We have differences of opinion. But this is not the moment to emphasize those differences; it is the moment to emphasize the fact that we must work together, to emphasize what unites us. And what I believe unites us are the founding values of the European Union. We must hope for this, and we must work for it." So said Mario Draghi, former ECB president and author of the report on EU competitiveness, speaking at a session of the European Parliament in Brussels. The European Union must equip itself to face new global economic and political shifts. And "it is increasingly clear that we must act more and more as if we were a single state. The complexity of the policy response, involving research, industry, trade, and finance, will require an unprecedented level of coordination among all the actors: national governments and parliaments, the Commission, and the European Parliament," Draghi explained. Commentary by Adriana Cerretelli, Brussels columnist for Il Sole 24 Ore. Doubling the Mont Blanc tunnel: the French minister says "no." Paris does not want to double the Mont Blanc tunnel. French Transport Minister Philippe Tabarot effectively expressed a negative opinion in a letter, dated February 14 but published yesterday, in which he writes that "France's position, expressed regularly within the framework of the intergovernmental commission for the Mont Blanc tunnel, has not changed." No expansion, then. The minister reiterated this after Xavier Roseren, deputy for Haute-Savoie in the National Assembly, had asked him to take a definitive position on the matter.
The decision mainly accommodates the wishes of the Arve valley, from Chamonix-Mont-Blanc downstream, where heavy trucks are a far more keenly felt problem than in the Aosta Valley and where traffic and pollution levels have been considered unsustainable for years. Francesco Turcato, president of Confindustria Valle d'Aosta, weighs in. Poste buys into Tim: an industrial approach, with a focus on synergies. The first move in the telecoms consolidation game was made by Poste and Cdp. Over the weekend, the boards of the two groups approved the share swap: Poste acquired roughly 9.81% of Tim from Cassa Depositi e Prestiti, while its entire stake in Nexi (around 3.78%) passed to Cdp, which thereby strengthens its position in the "pay tech" company, rising to 18.25%. The consideration for the purchase of the Tim shares will be paid "partly through the proceeds from the transfer of Poste Italiane's Nexi stake to Cassa Depositi e Prestiti and partly in available cash," just under 180 million euros (thereby valuing Tim at approximately 0.26-0.27 euros per share). It is the first tile in a domino, and Poste, which becomes the second-largest shareholder, views it with an industrial approach that opens wide scope for commercial agreements and synergies. Between Tim and Poste, the company led by Matteo Del Fante immediately announced, "negotiations are at an advanced stage for the supply of services giving Postepay access to Tim's mobile network infrastructure"; indeed, the equity stake "enables the evolution of commercial relations between Tim and Poste Italiane," the board explained in a statement.
For Cassa, by contrast, the focus is entirely on Nexi, of which it has been a shareholder since the company's creation: "The Cdp Group increases its stake in Nexi from the current 14.46% to a total of 18.25%," Cdp explains in a statement, "thereby strengthening its support for the industrial strategy of a company that is a leading player in Europe's digital payments infrastructure and that has had Cassa at its side since its birth four years ago." We discussed it with Laura Serafini of Il Sole 24 Ore. Elon Musk unveils Grok-3 and reignites his rivalry with Sam Altman. Elon Musk is doubling down on artificial intelligence: his startup xAI unveiled the updated Grok-3 model in recent hours, a version of the chatbot technology that the billionaire calls "the smartest AI on Earth." In a live stream, the company claimed that, on math, science, and coding benchmarks, Grok-3 "beats Alphabet's Google Gemini, DeepSeek's V3 model, Anthropic's Claude, and OpenAI's GPT-4o." Grok-3 has "more than ten times" the computing power of its predecessor and completed pre-training in early January, Musk said in a presentation alongside three xAI engineers. We go deeper with Enrico Pagliarini, Radio24.
In this episode of Tabletop Tommies, Jonny and Phil delve into the armies of Norway, analyzing their transition from V2 to V3. Known as one of the V2 favorites for their competitive edge, the hosts discuss whether the Norwegian forces maintain their prowess in V3. The episode explains the national characteristics that distinguished Norwegian armies in V2, such as "communications breakdown" and "fieldcraft," and compares these with the updated traits in V3. While certain characteristics have been retained or removed, new positive traits have also been introduced, shaping the updated landscape of competition. Jonny and Phil also explore the units available in the Norwegian armies, including infantry, engineers, and artillery options, emphasizing the scarcity of certain utilities like anti-tank weapons and flamethrowers. They discuss strategies for building effective lists with the available resources. Amidst tactical discussions, they express their thoughts on potential challenges when facing armored lists with the relatively limited Norwegian options. As they navigate through the nuances of these competitive shifts, the hosts invite listeners to share their experiences and outcomes with Norwegian armies in V3 events. Want to support the channel? Why not use one of our affiliate links: Firestorm Games: https://www.firestormgames.co.uk/wargames-miniatures/bolt-action?aff=64a025ee621f1 Wayland Games: https://affiliates.waylandgames.co.uk/1240.html Warlord Games: https://r.warlordgames.com/aff/?TABLETOPTOMMIES You can also support our endeavour to produce Bolt Action content on Patreon: https://www.patreon.com/TabletopTommies Or you can support these two mugs by buying a fancy mug: https://tabletoptommies.com/collection/new/
This was the Facebook Live edition of "SHW: This Is Our Wrestling" featuring the Voices of Southern Honor Wrestling, bringing us the LIVE SHW70 Pre-Show from The Action Building in Canton, Georgia! Not only is SHW returning for the first show of 2025, but V3 is also celebrating the 200th episode of the podcast!
DeepSeek has recently taken off: it knocked a chunk off Nvidia's market value, drew global attention, and topped app charts in multiple countries. But with the attention came skepticism and controversy: some say it is a wrapper around other models, some claim to have found code evidence, and the US government has even moved to sanction it and restrict chip exports. Claims are flying everywhere and are hard to verify. Today we discuss how strong this company really is, how its technology works, and what US restrictions on China's AI industry will mean. -Timeline- 02:01 A symbol of China's tech rise, and a challenge to US containment! 05:55 DeepSeek and its outstanding AI team 11:50 Distillation in deep learning 17:43 DeepSeek's relationship to OpenAI 23:41 DeepSeek's API issues 29:38 The rise of the LLM application layer 35:31 Challenges and opportunities for China's semiconductor industry 41:28 Open-source strategy: the discussion of DeepSeek in the West and its influence 47:26 Rumors and facts about AI companies 53:21 The V3 model's engineering optimizations and cost savings 01:05:15 Deep learning and artificial intelligence -How to reach us- Li Ziran's personal WeChat: liziran5460
In this episode of “Waking Up With AI,” Katherine Forrest delves into the groundbreaking advancements of AI newcomer DeepSeek's R1 and V3 models. She explores how this Chinese tech company is challenging the status quo and making waves in the AI space. ## Learn More About Paul, Weiss's Artificial Intelligence Practice: https://www.paulweiss.com/practices/litigation/artificial-intelligence
I apologize for adding yet another DeepSeek video to your video queue. During a trip to Tokyo last year, I was told that DeepSeek was the real deal. A cracked team, and perhaps the only ones of significance in China. Since then, I have annoyed the guys on Transistor Radio - our podcast with Dylan Patel and Doug O'Laughlin - into talking about it. Though there was nothing much to be said. In December 2024, DeepSeek released their V3 base model, which had impressive efficiency. A few people in AI were impressed. Then on January 22nd 2025, DeepSeek released their reasoning model, R1, which works kind of like OpenAI's o1 and o3 models. It takes extra compute time to "think" up a better answer. R1's release kicked everything off. The next day, the New York Times published an article on it, but focused mostly on the earlier V3's training costs.
This is the KI-Update from 04.02.2025, covering among other topics: Softbank's CEO partners with OpenAI and believes AGI is near; DeepSeek is said to have developed V3 on far more powerful Nvidia hardware; OpenAI's chief is reconsidering open source; and an AI robot knocks Jenga blocks out of the tower with a 100 percent success rate. You'll find links to all of today's topics here: https://heise.de/-10263826 https://www.heise.de/thema/KI-Update https://pro.heise.de/ki/ https://www.heise.de/newsletter/anmeldung.html?id=ki-update https://www.heise.de/thema/Kuenstliche-Intelligenz https://the-decoder.de/ https://www.heiseplus.de/podcast https://www.ct.de/ki
In this episode of Tabletop Tommies, Jonny and Phil delve into the armies of Italy within the V3 version of the game. As they explore the defensive strategies and unique national characteristics of Italy, they highlight the new and exciting options available to players. From mobile artillery potential and the interesting interactions between units, to comparing these forces to other nations like France, it's an episode packed with insights into Italy's expanded gameplay. Want to support the channel? Why not use one of our affiliate links: Firestorm Games: https://www.firestormgames.co.uk/wargames-miniatures/bolt-action?aff=64a025ee621f1 Wayland Games: https://affiliates.waylandgames.co.uk/1240.html Warlord Games: https://r.warlordgames.com/aff/?TABLETOPTOMMIES You can also support our endeavour to produce Bolt Action content on Patreon: https://www.patreon.com/TabletopTommies Or you can support these two mugs by buying a fancy mug: https://tabletoptommies.com/collection/new/
V3 Conservatory Poet Wendy Kieffer joins LAB the Podcast to share and discuss Christian Wiman's poem, “Prayer.” “Prayer” was the right poem as we continue our conversation highlighting the work to fight human trafficking and care for survivors through V3's LAB Initiative. Learn more about the work of V3.Support the show
I sat down with Corbin, who just took delivery of a 2025 Hyundai Ioniq 5 with native NACS, to hear about his 2,000-mile road trip across the country. He tested Tesla Superchargers, Electrify America, EVGO/Pilot, Rivian Adventure Network, and even Ionna stations, facing both smooth sessions and unexpected issues along the way. With Tesla's 500V limit on V3 cabinets, using NACS on an 800V EV isn't as easy as it sounds.Thank You Corbin, check out his channel and roadtrip video here: https://www.youtube.com/watch?v=uZVaVgVOvj0Ioniq 5 versus the Model 3: https://www.youtube.com/watch?v=U7F0h1jXlMAShoutout to our sponsors for more information find their links below:- Fort Collins Kia: Visit focokia.com for full details. Disclaimer: *Delivery covers up to $1,000.Find us on all of these places:YouTube: https://www.youtube.com/outofspecpodcastApple Podcasts: https://podcasts.apple.com/us/podcast/out-of-spec-podcast/id1576636119Spotify: https://open.spotify.com/show/0tKIQfKL9oaHc1DLOTWvbdAmazon: https://music.amazon.com/podcasts/473692b9-05b9-41f9-9b38-9f86fbdabee7/OUT-OF-SPEC-PODCASTFor further inquiries please email podcast@outofspecstudios.com Hosted on Acast. See acast.com/privacy for more information.
Check Out Our Out of Spec Data: https://outofspecstudios.com/chargingTesla has officially uncorked all V4 Supercharger posts in North America, allowing the Cybertruck to charge at up to 325kW! This is made possible by increasing amperage to 900A, thanks to the improved cooling of V4 dispensers. However, these chargers are still using V3 cabinets, meaning they remain limited to 500V output.What does this mean for Cybertruck owners? Will other Tesla vehicles see faster charging speeds? And when will true V4 cabinets with 1000V and 500kW arrive? Join me as I break down everything you need to know about Tesla's latest Supercharger upgrade!Shoutout to our sponsors for more information find their links below:- Fort Collins Kia: Visit focokia.com for full details. Disclaimer: *Delivery covers up to $1,000.Find us on all of these places:YouTube: https://www.youtube.com/outofspecpodcastApple Podcasts: https://podcasts.apple.com/us/podcast/out-of-spec-podcast/id1576636119Spotify: https://open.spotify.com/show/0tKIQfKL9oaHc1DLOTWvbdAmazon: https://music.amazon.com/podcasts/473692b9-05b9-41f9-9b38-9f86fbdabee7/OUT-OF-SPEC-PODCASTFor further inquiries please email podcast@outofspecstudios.com Hosted on Acast. See acast.com/privacy for more information.
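The numbers in this story all follow from the basic DC charging relation, power = voltage × current. A minimal sketch of the arithmetic (my own calculation, using only the 500V, 900A, and 325kW figures quoted above):

```python
def charge_power_kw(volts: float, amps: float) -> float:
    """Peak DC charging power in kW from cable voltage and current."""
    return volts * amps / 1000

# Ceiling of an uncorked V4 post fed by a V3 cabinet: 500V output at 900A.
cabinet_ceiling_kw = charge_power_kw(500, 900)  # 450.0 kW theoretical maximum

# The Cybertruck's 325kW peak sits below that ceiling, so the limit is on
# the vehicle side; at 500V, that peak implies roughly 650A of current.
implied_amps = 325_000 / 500  # 650.0 A
```

On the same arithmetic, a true V4 cabinet's 1,000V output would reach 500kW at only 500A, less current than today's 900A posts already carry, which is why the cabinet voltage rather than the cable is the binding constraint.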
Virtuals Protocol is leading the crypto AI agent space and expanding to Solana, marking the beginning of its growth. This new pool opens up fresh opportunities for trading and engagement, adding liquidity to the platform and expanding its reach. Alongside this, Virtuals Protocol is introducing the Strategic SOL Reserve (SSR). Meanwhile, DeepSeek AI's release hit crypto markets hard, pulling down tokens across the board. Is this a temporary dip or the beginning of a major market reset?~This episode is sponsored by Tangem~Tangem ➜ https://bit.ly/TangemPBNUse Code: "PBN" for Additional Discounts!00:00 Intro00:08 Sponsor: Tangem00:48 Deepseek vs Crypto01:29 VC's react to DeepSeek03:00 Trump says Deepseek is a good thing03:25 META vs NVIDIA chart04:07 Virtuals moves to Solana05:02 Virtuals: biggest A.I. venture capital on earth?05:55 Autonomous Business coming to virtuals06:40 Byte x Luna07:08 Restaurants accepting A.I. Orders07:58 Raposa A.I. Agent08:27 Raposa CEO on Kitsune09:15 BASE has disaster roundtable with devs 10:48 Coinbase rugs VIRTUAL + Community12:41 Abstract launches13:43 Axelar promised us Flow + Solana first14:09 Axelar chose Flow14:45 Flow liquidity for Solana tokens15:00 Flow using eliza15:27 Eliza marketplaces coming to Solana16:08 G.A.M.E. requests16:30 A.I. Agents in Roblox17:27 $VIRTUAL chart18:08 Outro#crypto #solana #deepseek~Virtuals Moves To Solana
When we last spoke with Chris Currie — the man behind Ministry Cycles and the striking Psalm 150 frame — he had just sent a prototype frame off for lab testing, hoping to move into production if all went to plan. Unfortunately, things didn't work out that way, but Chris made some design changes and is still working toward offering frames for sale.With the latest V3 frame off for testing, it was a good time to check back in with Chris to hear all about what's happened over the last two years to get here; what goes into lab testing & why it's important; what he'd do differently with the benefit of hindsight; and a whole lot more.RELATED LINKS:Ministry Cycles on Suspension Design, Machining Frames, & Launching a Bike Company (Ep.157)BLISTER+ Get Yourself CoveredJoin Us! Blister Summit 2025TOPICS & TIMES:The Psalm 150 (2:56)Lab testing the earlier prototypes (4:51)What goes into lab testing? (8:42)The limitations of computer modeling & importance of physical testing (11:49)Refinements of the V3 frame (18:42)The pros and cons of various construction methods (26:13)Bike industry struggles going into 2025 (35:34)20/20 hindsight & the path to the V3 frame (43:18)Welded front triangle versions (49:29)CHECK OUT OUR OTHER PODCASTS:Blister CinematicCRAFTEDGEAR:30Blister PodcastOff The Couch Hosted on Acast. See acast.com/privacy for more information.
WARNING: NSFW Content The Alpha & Omega of competitive Bolt Action discussion! In this episode, Dan, Pete, Toph & Spurlie are joined by none other than Mr John Ross to talk about our first experiences playing V3 competitively. Please give us all of the feedback at the following locations: Email: juggernautwargaming@gmail.com Discord: https://discord.gg/HfFGCdx2H3 Like & Comment: https://www.facebook.com/juggernautwargaming Buy us a drink!: https://www.patreon.com/juggernautwargaming
Sponsorships and applications for the AI Engineer Summit in NYC are live! (Speaker CFPs have closed) If you are building AI agents or leading teams of AI Engineers, this will be the single highest-signal conference of the year for you.Right after Christmas, the Chinese Whale Bros ended 2024 by dropping the last big model launch of the year: DeepSeek v3. Right now on LM Arena, DeepSeek v3 has a score of 1319, right under the full o1 model, Gemini 2, and 4o latest. This makes it the best open weights model in the world in January 2025.There has been a big recent trend in Chinese labs releasing very large open weights models, with Tencent releasing Hunyuan-Large in November and Hailuo releasing MiniMax-Text this week, both over 400B in size. However, these extra-large language models are very difficult to serve.Baseten was the first of the Inference neocloud startups to get DeepSeek V3 online, because of their H200 clusters, their close collaboration with the DeepSeek team and early support of SGLang, a relatively new VLLM alternative that is also used at frontier labs like X.ai. Each H200 has 141 GB of VRAM with 4.8 TB per second of bandwidth, meaning that you can use 8 H200s in a node to inference DeepSeek v3 in FP8, taking into account KV Cache needs. We have been close to Baseten since Sarah Guo introduced Amir Haghighat to swyx, and they supported the very first Latent Space Demo Day in San Francisco, which was effectively the trial run for swyx and Alessio to work together! Since then, Philip Kiely also led a well attended workshop on TensorRT LLM at the 2024 World's Fair. 
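The node sizing above can be checked with back-of-envelope VRAM arithmetic. A sketch, assuming DeepSeek v3's roughly 671B total parameters (a figure from DeepSeek's own release, not from this post) stored at one byte per weight in FP8:

```python
# Back-of-envelope check: does DeepSeek v3 in FP8 fit on an 8x H200 node?
H200_VRAM_GB = 141          # per GPU, as stated above
GPUS_PER_NODE = 8
PARAMS_BILLIONS = 671       # assumed total parameter count for DeepSeek v3
BYTES_PER_PARAM_FP8 = 1     # FP8 stores one byte per weight

node_vram_gb = H200_VRAM_GB * GPUS_PER_NODE           # 1128 GB total
weights_gb = PARAMS_BILLIONS * BYTES_PER_PARAM_FP8    # ~671 GB of weights
headroom_gb = node_vram_gb - weights_gb               # left for KV cache, activations
```

The roughly 457 GB of headroom is what leaves room for the KV cache mentioned above; for comparison, an 8-GPU node of 80 GB H100s (640 GB total) would not even hold the weights.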
We worked with him to get two of their best representatives, Amir and Lead Model Performance Engineer Yineng Zhang, to discuss DeepSeek, SGLang, and everything they have learned running Mission Critical Inference workloads at scale for some of the largest AI products in the world.The Three Pillars of Mission Critical InferenceWe initially planned to focus the conversation on SGLang, but Amir and Yineng were quick to correct us that the choice of inference framework is only the simplest, first choice of three things you need for production inference at scale:"I think it takes three things, and each of them individually is necessary but not sufficient: * Performance at the model level: how fast you are running this one model on a single GPU, let's say. The framework that you use there can matter. The techniques that you use there can matter: the MLA technique, for example, that Yineng mentioned, or the CUDA kernels that are being used. But there are also techniques used at a higher level, things like speculative decoding with draft models or with Medusa heads. These are implemented in the different frameworks, or you can even implement them yourself; they're not necessarily tied to a single framework. Using speculative decoding gets you massive upside when it comes to handling high throughput. But that's not enough. Invariably, that one model running on a single GPU is going to get more traffic than it can handle.* Horizontal scaling at the cluster/region level: At that point, you need to horizontally scale it. That's not an ML problem, that's not a PyTorch problem, that's an infrastructure problem. How quickly do you go from a single replica of that model to 5, to 10, to 100? That's the second pillar that is necessary for running these mission critical inference workloads.And what does it take to do that? 
Some people are like, oh, you just need Kubernetes; Kubernetes has an autoscaler and that just works. That doesn't work for these kinds of mission critical inference workloads, and you end up catching yourself wanting to rebuild those infrastructure pieces from scratch, bit by bit. This has been our experience. * And then going even a layer beyond that: Kubernetes runs in a single cluster, tied to a single region. And when it comes to inference workloads and needing GPUs more and more, we're seeing that you cannot meet the demand inside of a single region, a single cloud's single region. In other words, a single model might want to horizontally scale up to 200 replicas, each of which is, let's say, 2 H100s or 4 H100s or even a full node, and you run into the capacity limits of that one region. What we had to build to get around that was the ability to have a single model keep replicas across different regions. So there are models on Baseten today that have 50 replicas in GCP East, 80 replicas in AWS West, Oracle in London, etc.* Developer experience for Compound AI Systems: The final one is wrapping the power of the first two pillars in a very good developer experience, to afford workflows like the ones I mentioned around multi-step, multi-model inference, because more and more we're seeing that the market is moving towards those more complex workflows. 
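The cross-region placement Amir describes can be caricatured in a few lines of Python. Everything here is hypothetical (the region names, the free-capacity numbers, the greedy policy); a real system layers health checks, request routing, and cold-start management on top:

```python
# Toy placement: spread desired replicas across regions, capped by each
# region's free GPU capacity, spilling overflow into the next region.
def place_replicas(desired: int, region_capacity: dict) -> tuple:
    placement = {}
    remaining = desired
    for region, free in region_capacity.items():
        take = min(remaining, free)
        if take:
            placement[region] = take
        remaining -= take
        if remaining == 0:
            break
    return placement, remaining  # remaining > 0 means unmet demand

# Hypothetical capacities echoing the scale of the episode's example
plan, unmet = place_replicas(
    130, {"gcp-east": 50, "aws-west": 80, "oracle-london": 40}
)
```

The point of the toy is the shape of the problem: once demand exceeds any single region's capacity, placement becomes a cross-region allocation decision rather than a Kubernetes autoscaler setting.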
We think they said it very well.Show Notes* Amir Haghighat, Co-Founder, Baseten* Yineng Zhang, Lead Software Engineer, Model Performance, BasetenFull YouTube EpisodePlease like and subscribe!Timestamps* 00:00 Introduction and Latest AI Model Launch* 00:11 DeepSeek v3: Specifications and Achievements* 03:10 Latent Space Podcast: Special Guests Introduction* 04:12 DeepSeek v3: Technical Insights* 11:14 Quantization and Model Performance* 16:19 MOE Models: Trends and Challenges* 18:53 Baseten's Inference Service and Pricing* 31:13 Optimization for DeepSeek* 31:45 Three Pillars of Mission Critical Inference Workloads* 32:39 Scaling Beyond Single GPU* 33:09 Challenges with Kubernetes and Infrastructure* 33:40 Multi-Region Scaling Solutions* 35:34 SG Lang: A New Framework* 38:52 Key Techniques Behind SG Lang* 48:27 Speculative Decoding and Performance* 49:54 Future of Fine-Tuning and RLHF* 01:00:28 Baseten's V3 and Industry TrendsBaseten's previous TensorRT LLM workshop: Get full access to Latent Space at www.latent.space/subscribe
LAB Initiative Director, Christina Kruse joins LAB the Podcast to celebrate the generosity of Buddy Brew on the back side of 2024 Freedom Roast sales. We talk Human Trafficking Awareness month and V3's 2025 efforts to care for survivors and fight human trafficking.Freedom RoastLearn more about the work of V3Support the show
I John 5:1-5 5 Everyone who believes that Jesus is the Christ has been born of God, and everyone who loves the Father loves whoever has been born of him.2 By this we know that we love the children of God, when we love God and obey his commandments.3 For this is the love of God, that we keep his commandments. And his commandments are not burdensome.4 For everyone who has been born of God overcomes the world. And this is the victory that has overcome the world—our faith.5 Who is it that overcomes the world except the one who believes that Jesus is the Son of God? Our theme for 2025: VICTORIOUS Confidence or Condemnation 3 Signs of Confidence in Victory 1. The first sign is theological. (vv. 1, 10) 2. The second sign is moral. (v. 3) 3. The third sign is social. (v. 2)
Welcome to Tabletop Tommies, where Jonny and Phil delve into the might of Finland's armies within the latest version of Bolt Action, V3. In this exciting episode, we uncover why Finland stands as the most formidable minor power with their new units and rules. With Germany's armies recently released, our hosts explore whether the Finns possess the strength to combat them effectively. We detail the unchanged yet powerful national characteristics of Finnish units, delving into strategies involving Trained Huntsmen, Sisu, and Motti rules. Discover tactical insights into using Finnish infantry, the translation of V2 units into V3, and the new engineering squad's unique abilities. Our exploration doesn't stop there; the episode also considers the role of Finnish transport and artillery, evaluating heavy mortars, and introducing the exciting Komsomolets vehicle changes. Lastly, join us as we strategize on potential builds and share ideas for crafting a competitive, thematic Finnish force for upcoming tournaments. Want to support the channel? Why not use one of our affiliate links: Firestorm Games: https://www.firestormgames.co.uk/wargames-miniatures/bolt-action?aff=64a025ee621f1 Wayland Games: https://affiliates.waylandgames.co.uk/1240.html Warlord Games: https://r.warlordgames.com/aff/?TABLETOPTOMMIES You can also support our endeavour to produce Bolt Action content on Patreon: https://www.patreon.com/TabletopTommies Or you can support these two mugs by buying a fancy mug: https://tabletoptommies.com/collection/new/
Are we on the verge of an economic transformation? This week, Autonomous Technology and Robotics Director of Research Sam Korus and Associate Portfolio Manager Nick Grous are joined by ARK Chief Futurist Brett Winton to discuss ambitious projections for global GDP growth, driven by technological advancements and innovations such as Robotaxis and AI. They explore the historical context of economic growth, the potential for significant productivity increases, and the implications for different regions, particularly the U.S. and Europe. The conversation then shifts to SpaceX's advancements in satellite technology, highlighting the impressive capabilities of the new V3 satellites and their potential to revolutionize global connectivity.If you know ARK, then you probably know about our long-term research projections, like estimating where we will be 5-10 years from now! But just because we are long-term investors, doesn't mean we don't have strong views and opinions on breaking news. In fact, we discuss and debate this every day. So now we're sharing some of these internal discussions with you in our new video series, “The Brainstorm”, a co-production from ARK and Public.com. Tune in every week as we react to the latest in innovation. Here and there we'll be joined by special guests, but ultimately this is our chance to join the conversation and share ARK's quick takes on what's going on in tech today.Key Points From This Episode:Technological advancements are expected to drive significant economic transformation.Historical context shows that periods of growth are often followed by technological infusions.SpaceX's new V3 satellites will dramatically increase bandwidth and reduce costs.For more updates on Public.com:Website: https://public.com/YouTube: @publicinvestTwitter: https://twitter.com/public
Preston_EZ_BlueZ_Shame_penguin_V3 by WNHH Community Radio
In this episode of Tabletop Tommies, Jonny and Phil embark on a journey to explore the evolution of the French army as they transition from V2 to V3 rules. Following their discussion on the Belgian army, they delve into the intricacies of the French forces, uncovering what remains and what has changed. Dive into an in-depth analysis of the French army's special rules, units, and their strategic applications in V3. Discover how traditional elements such as forward artillery doctrine and motivated reserves have transformed, and learn about the new dynamics of the French troops, from the foreign legion to the Senegalese fighters. Join the conversation as they weigh the pros and cons of different unit strategies, including the role of char tanks, cavalry mopping up operations, and the potential game-changing effects of the FAO. If you're curious about building an effective French V3 army or if you're pondering how to adapt your existing strategies, this episode offers valuable insights and tips. Listen in to refine your approach and enhance your gameplay.
Welcome to Tabletop Tommies, the podcast where hosts Jonny and Phil delve into the strategic depth of tabletop armies. In this episode, they turn their focus to the armies of Bulgaria, exploring how rules and equipment have evolved from V2 to V3. Discussing Bulgaria's unique characteristics, they analyze key rule changes such as 'Axis Support,' which now alters how Bulgarian forces integrate German or Italian units. They explore the lingering effects of Bulgaria's anti-partisan tactics and how the refinement of these rules affects deployment and battlefield strategy. Jonny and Phil scrutinize the impact of these changes on gameplay, with detailed comparisons between past and present iterations of the Bulgarian army. This episode serves both as an insightful resource for dedicated players familiar with these armies, and a fascinating exploration for new listeners intrigued by the complexities of strategy games. From infantry options and artillery additions to bizarre rule clarifications, Jonny and Phil bring to life the battlefield dynamics of Bulgaria, posing questions that challenge conventional tactical approaches and inviting listeners to think critically about their next strategic moves.
Welcome to Tabletop Tommies! In today's episode, Jonny and Phil embark on the "Armies Of" series for version 3, beginning with the Belgian nation. Before diving into the intricacies of Belgium's army lists and national characteristics, they engage in a lively discussion about their recent hobby projects. Phil shares insights on building U.S. Airborne troops inspired by his trip to Bastogne, while Jonny discusses his ambitious Indiana Jones-themed desert table project. As they explore the revised rules for Belgium in V3, listeners will learn about the updated national characteristics, the disappearance of free units, and new opportunities with forward artillery doctrine. The conversation also delves into unit options, tanks, armored cars, and the unique elements Belgium brings to the tabletop. Whether you're looking for inspiration for your next hobby project or insights into Belgium's strategic gameplay, this episode offers a detailed and engaging overview. Join the Tabletop Tommies for an in-depth analysis of Belgium's V3 army rules, hobby updates, and creative tabletop strategies.
WARNING: NSFW Content The Alpha & Omega of competitive Bolt Action discussion! In this Episode... oh who cares. V3 is finally here!!! Please give us all of the feedback at the following locations: Email: juggernautwargaming@gmail.com Discord: https://discord.gg/HfFGCdx2H3 Like & Comment: https://www.facebook.com/juggernautwargaming Buy us a drink!: https://www.patreon.com/juggernautwargaming
Ekstasis Founder, Conor Sweetman, joins LAB the Podcast to explore, "A Time for Wonder: The Days of Advent Through the Eyes of Worship." Join us for the conversation and for our annual V3 Advent reading. Don't miss this wonderful conversation and here's to a wonderful holiday season ahead!

Join us live every evening, 12/6-12/26 at 9pm EST as we journey through, "A Time for Wonder" with the V3 team.

V3 Advent digital copies available here: LINK

Support the show
We are recording our next big recap episode and taking questions! Submit questions and messages on Speakpipe here for a chance to appear on the show! Also subscribe to our calendar for our Singapore, NeurIPS, and all upcoming meetups!

In our first ever episode with Logan Kilpatrick we called out the two hottest LLM frameworks at the time: LangChain and Dust. We've had Harrison from LangChain on twice (as a guest and as a co-host), and we've now finally come full circle as Stanislas from Dust joined us in the studio.

After stints at Oracle and Stripe, Stan had joined OpenAI to work on mathematical reasoning capabilities. He describes his time at OpenAI as "the PhD I always wanted to do" while acknowledging the challenges of research work: "You're digging into a field all day long for weeks and weeks, and you find something, you get super excited for 12 seconds. And at the 13 seconds, you're like, 'oh, yeah, that was obvious.' And you go back to digging." This experience, combined with early access to GPT-4's capabilities, shaped his decision to start Dust: "If we believe in AGI and if we believe the timelines might not be too long, it's actually the last train leaving the station to start a company. After that, it's going to be computers all the way down."

The History of Dust

Dust's journey can be broken down into three phases:

* Developer Framework (2022): Initially positioned as a competitor to LangChain, Dust started as a developer tooling platform. While both were open source, their approaches differed – LangChain focused on broad community adoption and integration as a pure developer experience, while Dust emphasized UI-driven development and better observability that wasn't just `print` statements.

* Browser Extension (Early 2023): The company pivoted to building XP1, a browser extension that could interact with web content.
This experiment helped validate user interaction patterns with AI, even while using less capable models than GPT-4.

* Enterprise Platform (Current): Today, Dust has evolved into an infrastructure platform for deploying AI agents within companies, with impressive metrics like 88% daily active users in some deployments.

The Case for Being Horizontal

The big discussion for early stage companies today is whether to be horizontal or vertical. Since models are so good at general tasks, a lot of companies are building vertical products that take care of a workflow end-to-end in order to offer more value, becoming more "Services as Software". Dust, on the other hand, is a platform for users to build their own experiences, which has had a few advantages:

* Maximum Penetration: Dust reports 60-70% weekly active users across entire companies, demonstrating the potential reach of horizontal solutions rather than selling into a single team.

* Emergent Use Cases: By allowing non-technical users to create agents, Dust enables use cases to emerge organically from actual business needs rather than prescribed solutions.

* Infrastructure Value: The platform approach creates lasting value through maintained integrations and connections, similar to how Stripe's value lies in maintaining payment infrastructure. Rather than relying on third-party integration providers, Dust maintains its own connections to ensure proper handling of different data types and structures.

The Vertical Challenge

However, this approach comes with trade-offs:

* Harder Go-to-Market: As Stan talked about: "We spike at penetration... but it makes our go-to-market much harder. Vertical solutions have a go-to-market that is much easier because they're like, 'oh, I'm going to solve the lawyer stuff.'"

* Complex Infrastructure: Building a horizontal platform requires maintaining numerous integrations and handling diverse data types appropriately – from structured Salesforce data to unstructured Notion pages.
As you scale integrations, the cost of maintaining them also scales.

* Product Surface Complexity: Creating an interface that's both powerful and accessible to non-technical users requires careful design decisions, down to avoiding technical terms like "system prompt" in favor of "instructions."

The Future of AI Platforms

Stan initially predicted we'd see the first billion-dollar single-person company in 2023 (a prediction later echoed by Sam Altman), but he's now more focused on a different milestone: billion-dollar companies with engineering teams of just 20 people, enabled by AI assistance. This vision aligns with Dust's horizontal platform approach – building the infrastructure that allows small teams to achieve outsized impact through AI augmentation. Rather than replacing entire job functions (the vertical approach), they're betting on augmenting existing workflows across organizations.

Full YouTube Episode

Chapters

* 00:00:00 Introductions
* 00:04:33 Joining OpenAI from Paris
* 00:09:54 Research evolution and compute allocation at OpenAI
* 00:13:12 Working with Ilya Sutskever and OpenAI's vision
* 00:15:51 Leaving OpenAI to start Dust
* 00:18:15 Early focus on browser extension and WebGPT-like functionality
* 00:20:20 Dust as the infrastructure for agents
* 00:24:03 Challenges of building with early AI models
* 00:28:17 LLMs and Workflow Automation
* 00:35:28 Building dependency graphs of agents
* 00:37:34 Simulating API endpoints
* 00:40:41 State of AI models
* 00:43:19 Running evals
* 00:46:36 Challenges in building AI agents infra
* 00:49:21 Buy vs. build decisions for infrastructure components
* 00:51:02 Future of SaaS and AI's Impact on Software
* 00:53:07 The single employee $1B company race
* 00:56:32 Horizontal vs. vertical approaches to AI agents

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast.
This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.

Swyx [00:00:11]: Hey, and today we're in a studio with Stanislas, welcome.

Stan [00:00:14]: Thank you very much for having me.

Swyx [00:00:16]: Visiting from Paris.

Stan [00:00:17]: Paris.

Swyx [00:00:18]: And you have had a very distinguished career. It's very hard to summarize, but you went to college at both École Polytechnique and Stanford, and then you worked in a number of places, Oracle, Totems, Stripe, and then OpenAI pre-ChatGPT. We'll spend a little bit of time on that. About two years ago, you left OpenAI to start Dust. I think you were one of the first OpenAI alum founders.

Stan [00:00:40]: Yeah, I think it was about at the same time as the Adept guys, so that first wave.

Swyx [00:00:46]: Yeah, and people really loved our David episode. We love a few sort of OpenAI stories, you know, from back in the day, like we were talking about pre-recording. Probably the statute of limitations on some of those stories has expired, so you can talk a little bit more freely without them coming after you. But maybe we'll just talk about, like, what was your journey into AI? You know, you were at Stripe for almost five years, there are a lot of Stripe alums going into OpenAI. I think the Stripe culture has come into OpenAI quite a bit.

Stan [00:01:11]: Yeah, so I think the buses of Stripe people really started flowing in, I guess, after ChatGPT. But, yeah, my journey into AI is a... I mean, Greg Brockman. Yeah, yeah. From Greg, of course. And Daniela, actually, back in the days, Daniela Amodei.

Swyx [00:01:27]: Yes, she was COO, I mean, she is COO, yeah. She had a pretty high job at OpenAI at the time, yeah, for sure.

Stan [00:01:34]: My journey started as anybody else's: you're fascinated with computers and you want to make them think. It's awesome, but it doesn't work. I mean, it was a long time ago, I was maybe 16, so it was 25 years ago.
Then the first big exposure to AI would be at Stanford, and I'm going to, like, date myself here, because at the time it was a class taught by Andrew Ng, and there was no deep learning. It was half features for vision and an A* algorithm. So it was fun. But it was the early days of deep learning. At the time, I think a few years after, there was the first project at Google, you know, that cat face or the human face trained from many images. I hesitated doing a PhD, more in systems, and eventually decided to get a job. Went to Oracle, started a company, did a gazillion mistakes, got acquired by Stripe, worked with Greg Brockman there. And at the end of Stripe, I started interesting myself in AI again; it felt like it was the time. You had the Atari games, you had the self-driving craziness at the time. And I started exploring projects. It felt like the Atari games were incredible, but they were still games. And I was looking into exploring projects that would have an impact on the world. And so I decided to explore three things: self-driving cars, cybersecurity and AI, and math and AI. That's, like, in decreasing order of impact on the world, I guess.

Swyx [00:03:01]: Discovering new math would be very foundational.

Stan [00:03:03]: It is extremely foundational, but it's not as direct as driving people around.

Swyx [00:03:07]: Sorry, you're doing this at Stripe, you're like thinking about your next move.

Stan [00:03:09]: No, it was at Stripe, kind of a bit of time where I started exploring. I did a bunch of work with friends on trying to get RC cars to drive autonomously. Almost started a company in France or Europe about self-driving trucks. We decided to not go for it because it was probably very operational. And I think the idea of the company, of the team wasn't there. And also I realized that if I woke up one day and, because of a bug I wrote, I killed a family, it would be a bad experience.
And so I just decided like, no, that's just too crazy. And then I explored cybersecurity with a friend. We were trying to apply transformers to fuzzing. So in fuzzing, you have kind of an algorithm that goes really fast and tries to mutate the inputs of a library to find bugs. And we tried to apply a transformer to that and do reinforcement learning with the signal of how much you propagate within the binary. It didn't work at all, because the transformers are so slow compared to evolutionary algorithms that it kind of didn't work. Then I got interested in math and AI and started working on SAT solving with AI. And at the same time, OpenAI was starting the reasoning team that was tackling that project as well. I was in touch with Greg and eventually got in touch with Ilya and finally found my way to OpenAI. I don't know how much you want to dig into that. The way to find your way to OpenAI when you're in Paris was kind of an interesting adventure as well.

Swyx [00:04:33]: Please. And I want to note, this was a two-month journey. You did all this in two months.

Stan [00:04:38]: The search.

Swyx [00:04:40]: Your search for your next thing, because you left in July 2019 and then you joined OpenAI in September.

Stan [00:04:45]: I'm going to be ashamed to say that.

Swyx [00:04:47]: You were searching before.

Stan [00:04:49]: I was searching before. I mean, it's normal. No, the truth is that I moved back to Paris through Stripe and I just felt the hardship of being remote from your team, nine hours away. And so it kind of freed a bit of time for me to start the exploration before. Sorry, Patrick. Sorry, John.

Swyx [00:05:05]: Hopefully they're listening. So you joined OpenAI from Paris and, obviously, you had worked with Greg, but not

Stan [00:05:13]: anyone else. No. Yeah.
So I had worked with Greg, but not Ilya, but I had started chatting with Ilya, and Ilya was kind of excited because he knew that I was a good engineer through Greg, I presume, but I was not a trained researcher, didn't do a PhD, never did research. And I started chatting and he was excited all the way to the point where he was like, hey, come pass interviews, it's going to be fun. I think he didn't care where I was, he just wanted to try working together. So I go to SF, go through the interview process, get an offer. And so I get Bob McGrew on the phone for the first time, he's like, hey, Stan, it's awesome. You've got an offer. When are you coming to SF? I'm like, hey, it's awesome. I'm not coming to SF. I'm based in Paris and we just moved. He was like, hey, it's awesome. Well, you don't have an offer anymore. Oh, my God. No, it wasn't as hard as that. But that's basically the idea. And it took maybe some more time of chatting, and they eventually decided to try a contractor setup. And that's how I started working at OpenAI, officially as a contractor, but in practice it really felt like being an employee.

Swyx [00:06:14]: What did you work on?

Stan [00:06:15]: It was solely focused on math and AI, and in particular on the study of the large language models' mathematical reasoning capabilities, in particular in the context of formal mathematics. The motivation was simple: transformers are very creative, but yet they make mistakes. Formal math systems have the ability to verify a proof, but the tactics they can use to solve problems are very mechanical, so you miss the creativity. And so the idea was to try to explore both together. You would get the creativity of the LLMs and the verification capabilities of the formal system.
A formal system, just to give a little bit of context, is a system in which a proof is a program and the formal system is a type system, a type system that is so evolved that you can verify the program. If the type checks, it means that the program is correct.

Swyx [00:07:06]: Is the verification much faster than actually executing the program?

Stan [00:07:12]: Verification is instantaneous, basically. The truth is that what you write involves tactics that may involve computation to search for solutions. So it's not instantaneous. You do have to do the computation to expand the tactics into the actual proof. The verification of the proof at the very low level is instantaneous.

Swyx [00:07:32]: How quickly do you run into, you know, halting problem, P vs. NP type things, like impossibilities?

Stan [00:07:39]: I mean, you don't run into it at the time. It was really trying to solve very easy problems. So I think the... Can you give an example of easy? Yeah, so that's the MATH benchmark that everybody knows today. The Dan Hendrycks one. The Dan Hendrycks one, yeah. And I think it was the low-end part of the MATH benchmark at the time, because that benchmark includes AMC problems: AMC 8, AMC 10, AMC 12. So these are the easy ones. Then AIME problems, somewhat harder, and some IMO problems, which are crazy hard.

Swyx [00:08:07]: For our listeners, we covered this in our Benchmarks 101 episode. AMC is literally high school grades, like grade 8, grade 10, grade 12. So you can solve this. Just briefly to mention this, because I don't think we'll touch on this again: there's a bit of work with Lean, and then, more recently, DeepMind scoring silver on the IMO. Any commentary on how math has evolved from your early work to today?

Stan [00:08:34]: I mean, that result is mind-blowing. I mean, from my perspective, I spent three years on that.
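The formal-system idea Stan explains above, that a proof is a program and verification is type-checking, is exactly how proof assistants such as Lean operate. A minimal illustration in Lean 4 (a toy theorem chosen for brevity, not one from the benchmarks discussed):

```lean
-- A proposition is a type; a proof is a term of that type.
-- If this definition type-checks, the theorem is verified:
-- from a proof of p ∧ q, we produce a proof of q ∧ p.
theorem and_swap (p q : Prop) (h : p ∧ q) : q ∧ p :=
  ⟨h.right, h.left⟩
```

The "tactics may involve computation" point shows up here too: a tactic proof (e.g. `by exact ⟨h.right, h.left⟩`) is first elaborated into a proof term, and only the final term-level check is essentially instantaneous.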
At the same time, Guillaume Lample in Paris, we were both in Paris, actually. He was at FAIR, working on the same problems. We were pushing the boundaries, and the goal was the IMO. And we cracked a few problems here and there. But the idea of getting a medal at an IMO was just remote. So this is an impressive result. And I think the DeepMind team just did a good job of scaling. I think there's nothing too magical in their approach, even if it hasn't been published. There's a David Silver talk from seven days ago that goes a little bit more into the details. It feels like there's nothing magical there. It's really applying reinforcement learning and scaling up the amount of data they can generate through autoformalization. So we can dig into what autoformalization means if you want.

Alessio [00:09:26]: Let's talk about the tail end, maybe, of the OpenAI period. So you joined, and you're like, I'm going to work on math and do all of these things. I saw on one of your blog posts, you mentioned you fine-tuned over 10,000 models at OpenAI using 10 million A100 hours. How did the research evolve from GPT-2, then getting closer to davinci-003? And then you left just before ChatGPT was released, but tell people a bit more about the research path that took you there.

Stan [00:09:54]: I can give you my perspective on it. I think at OpenAI, there's always been a large chunk of the compute that was reserved to train the GPTs, which makes sense. So this was pre-Anthropic split: most of the compute was going to a product called Nest, which was basically GPT-3. And then you had a bunch of, let's say, remote, not core research teams that were trying to explore maybe more specific problems or maybe the algorithm part of it. The interesting part, I don't know if it was where your question was going, is that in those labs, you're managing researchers. So by definition, you shouldn't be managing them.
But in that space, there's a managing tool that is great, which is compute allocation. Basically, by managing the compute allocation, you can message to the teams where you think the priority should go. And so it was really a question of: you were free as a researcher to work on whatever you wanted, but if it was not aligned with OpenAI's mission, and that's fair, you wouldn't get the compute allocation. As it happens, solving math was very much aligned with the direction of OpenAI. And so I was lucky to generally get the compute I needed to make good progress.

Swyx [00:11:06]: What do you need to show as incremental results to get funded for further results?

Stan [00:11:12]: It's an imperfect process because there's a bit of a... If you're working on math and AI, obviously there's kind of a prior that it's going to be aligned with the company. So it's much easier than going into something much riskier, I guess. You have to show incremental progress. It's like you ask for a certain amount of compute, you deliver a few weeks after, and you demonstrate that you have progress. Progress might be a positive result. Progress might be a strong negative result. And a strong negative result is actually often much harder to get, or much more interesting, than a positive result. And then it generally goes into, as in any organization, you would have people finding your project, or any other project, cool and fancy. And so you would have that kind of phase of growing compute allocation for it, all the way to a point. And then maybe you reach an apex, and then maybe you go back mostly to zero and restart the process because you're going in a different direction or something else. That's how I felt. Explore, exploit. Yeah, exactly. Exactly. Exactly.
It's a reinforcement learning approach.

Swyx [00:12:14]: Classic PhD student search process.

Alessio [00:12:17]: And you were reporting to Ilya, like the results you were kind of bringing back to him, or what was the structure? It's almost like when you're doing such cutting-edge research, you need to report to somebody who is actually really smart to understand that the direction is right.

Stan [00:12:29]: So we had a reasoning team, which was working on reasoning, obviously, and so math in general. And that team had a manager, but Ilya was extremely involved in the team as an advisor, I guess. Since he brought me into OpenAI, I was lucky, mostly during the first years, to have kind of a direct access to him. He would really coach me as a trainee researcher, I guess, with good engineering skills. And Ilya, I think at OpenAI, he was the one showing the North Star, right? That was his job, and I think he really enjoyed it and did it super well: going through the teams and saying, this is where we should be going, and trying to, you know, flock the different teams together towards an objective.

Swyx [00:13:12]: I would say the public perception of him is that he was the strongest believer in scaling. Oh, yeah. Obviously, he has always pursued the compression thesis. You have worked with him personally, what does the public not know about how he works?

Stan [00:13:26]: I think he's really focused on building the vision and communicating the vision within the company, which was extremely useful. I was personally surprised that he spent so much time, you know, working on communicating that vision and getting the teams to work together versus...

Swyx [00:13:40]: To be specific, vision is AGI? Oh, yeah.

Stan [00:13:42]: Vision is like, yeah, it's the belief in compression and scaling compute.
I remember when I started working on the reasoning team, the excitement was really about scaling the compute around reasoning, and that was really the belief we wanted to ingrain in the team. And that's what has been useful to the team; the DeepMind results and the success of GPT-4 and so on show that it was the right approach.

Swyx [00:14:06]: Was it according to the neural scaling laws, the Kaplan paper that was published?

Stan [00:14:12]: I think it was before that, because those ones came with GPT-3, basically at the time of GPT-3 being released or being ready internally. But before that, there really was a strong belief in scale. I think it was just the belief that the transformer was a generic enough architecture that you could learn anything, and that it was just a question of scaling.

Alessio [00:14:33]: Any other fun stories you want to tell? Sam Altman, Greg, you know, anything.

Stan [00:14:37]: Weirdly, I didn't work that much with Greg when I was at OpenAI. He had always been mostly focused on training the GPTs, and rightfully so. One thing about Sam Altman, he really impressed me, because when I joined, he had joined not that long ago, and it felt like he was kind of a very high-level CEO. And I was mind-blown by how deep he was able to go into the subjects within a year or so, all the way to a situation where, when I was having lunch with him by year two at OpenAI, he would quite deeply know what I was doing. With no ML background. Yeah, with no ML background, but I didn't have any either, so I guess that explains why. But I think it's a question of: you don't necessarily need to understand the very technicalities of how things are done, but you need to understand what's the goal, what's being done, and what the recent results are. And we could have kind of a very productive discussion.
And that really impressed me, given the size at the time of OpenAI, which was not negligible.

Swyx [00:15:44]: Yeah. I mean, you were a founder before, you're a founder now, and you've seen Sam as a founder. How has he affected you as a founder?

Stan [00:15:51]: I think having that capability of changing the scale of your attention in the company, because most of the time you operate at a very high level, but being able to go deep down and be in the know of what's happening on the ground, is something that I feel is really enlightening. That's not a place in which I ever was as a founder, because, first company, we went all the way to 10 people. Current company, there's 25 of us. So the high level, the sky and the ground are pretty much at the same place. No, you're being too humble.

Swyx [00:16:21]: I mean, Stripe was also like a huge rocket ship.

Stan [00:16:23]: Stripe, I wasn't a founder. So, like at OpenAI, I was really happy being on the ground, pushing the machine, making it work. Yeah.

Swyx [00:16:31]: Last OpenAI question. The Anthropic split you mentioned, you were around for that. Very dramatic. David also left around that time, you left. This year, we've also had a similar management shakeup, let's just call it. Can you compare what it was like going through that split during that time? And then, does that have any similarities now? Like, are we going to see a new Anthropic emerge from these folks that just left?

Stan [00:16:54]: That I really, really don't know. At the time, the split was pretty surprising because they had been training GPT-3; it was a success. And to be completely transparent, I wasn't in the weeds of the split. What I understood of it is that there was a disagreement over the commercialization of that technology. I think the focal point of that disagreement was the fact that we started working on the API and wanted to make those models available through an API. Is that really the core disagreement?
I don't know.

Swyx [00:17:25]: Was it safety?

Stan [00:17:26]: Was it commercialization?

Swyx [00:17:27]: Or did they just want to start a company?

Stan [00:17:28]: Exactly. Exactly. That I don't know. But I think what I was surprised by is how quickly OpenAI recovered at the time. And I think it's just because we were mostly a research org and the mission was so clear that some divergence in some teams, some people leave, the mission is still there. We have the compute. We have a site. So it just keeps going.

Swyx [00:17:50]: Very deep bench. Like, just a lot of talent. Yeah.

Alessio [00:17:53]: So that was the OpenAI part of the history. Exactly. So then you leave OpenAI in September 2022. And I would say in Silicon Valley, the two hottest companies at the time were you and LangChain. What was that start like, and why did you decide to start with a more developer-focused, kind of AI engineer tool, rather than going back into research or something else?

Stan [00:18:15]: Yeah. First, I'm not a trained researcher, so going through OpenAI was really kind of the PhD I always wanted to do. But research is hard. You're digging into a field all day long for weeks and weeks, and you find something, you get super excited for 12 seconds. And at the 13th second, you're like, oh, yeah, that was obvious. And you go back to digging. I'm not a formally trained researcher, and it wasn't necessarily an ambition of mine to have a research career. And I felt the hardness of it. I enjoyed it a ton. But at the time, I decided that I wanted to go back to something more productive. And the other fun motivation was, I mean, if we believe in AGI and if we believe the timelines might not be too long, it's actually the last train leaving the station to start a company. After that, it's going to be computers all the way down. And so that was kind of the true motivation for trying to go there.
So that's kind of the core motivation at the beginning, personally. And the motivation for starting a company was pretty simple. I had seen GPT-4 internally at the time, it was September 2022. So it was pre-ChatGPT, but GPT-4 was ready, I mean, it had been ready for a few months internally. I was like, okay, that's obvious, the capabilities are there to create an insane amount of value for the world. And yet the deployment is not there yet. The revenues of OpenAI at the time were ridiculously small compared to what they are today. So the thesis was, there's probably a lot to be done at the product level to unlock the usage.Alessio [00:19:49]: Yeah. Let's talk a bit more about the form factor, maybe. I think one of the first successes you had was kind of like the WebGPT-like thing, like using the models to traverse the web and summarize things. And the browser was really the interface. Why did you start with the browser? Like why was it important? And then you built XP1, which was kind of like the browser extension.Stan [00:20:09]: So the starting point at the time was, if you wanted to talk about LLMs, it was still a rather small community, a community of mostly researchers and to some extent, very early adopters, very early engineers. It was almost inconceivable to just build a product and go sell it to the enterprise, though at the time there were a few companies doing that. The one doing marketing, I don't remember its name... Jasper. But so the natural first intention, the first, first, first intention was to go to the developers and try to create tooling for them to create products on top of those models. And so that's what Dust was originally. It was quite different than LangChain, and LangChain just beat the s**t out of us, which is great. It's a choice.Swyx [00:20:53]: You were cloud, and closed source. They were open source.Stan [00:20:56]: Yeah. So technically we were open source and we still are open source, but I think that doesn't really matter. 
I had the strong belief from my research time that you cannot create an LLM-based workflow on just one example. Basically, if you just have one example, you overfit. So as you develop your interaction, your orchestration around the LLM, you need a dozen examples. Obviously, if you're running a dozen examples on a multi-step workflow, you start parallelizing stuff. And if you do that in the console, you just have like a messy stream of tokens going out and it's very hard to observe what's going on there. And so the idea was to go with a UI so that you could kind of introspect easily the output of each interaction with the model and dig into there through a UI, which is-Swyx [00:21:42]: Was that open source? I actually didn't come across it.Stan [00:21:44]: Oh yeah, it was. I mean, Dust is entirely open source even today. We're not going for an open source-Swyx [00:21:48]: If it matters, I didn't know that.Stan [00:21:49]: No, no, no, no, no. The reason why is because we're not open source because we're not doing an open source strategy. It's not an open source go-to-market at all. We're open source because we can and it's fun.Swyx [00:21:59]: Open source is marketing. You have all the downsides of open source, which is like people can clone you.Stan [00:22:03]: But I think that downside is a big fallacy. Okay. Yes, anybody can clone Dust today, but the value of Dust is not the current state. The value of Dust is the number of eyeballs and hands of developers that are contributing to it in the future. And so yes, anybody can clone it today, but that wouldn't change anything. There is some value in being open source. In a discussion with the security team, you can be extremely transparent and just show the code. When you have a discussion with users and there's a bug or a feature missing, you can just point to the issue, show the pull request. Exactly, oh, PR welcome. 
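The earlier point about needing a dozen examples rather than one, and parallelizing a multi-step workflow while keeping every intermediate output inspectable, can be sketched as a small harness. This is an illustrative sketch of the pattern, not Dust's actual tooling; `call_model` is a stand-in for a real model call.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a real LLM call; the harness shape is the point.
def call_model(step: str, text: str) -> str:
    return f"[{step}] {text[:20]}"

STEPS = ["extract", "summarize", "format"]  # a simple multi-step workflow

def run_workflow(example: str) -> dict:
    """Run every step on one example, keeping each intermediate output."""
    trace = {"input": example}
    current = example
    for step in STEPS:
        current = call_model(step, current)
        trace[step] = current  # store per-step output so a UI can show it
    return trace

def run_on_examples(examples: list[str]) -> list[dict]:
    """Run the workflow on a dozen examples in parallel, not just one,
    so you can see where the orchestration overfits or breaks."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(run_workflow, examples))

traces = run_on_examples([f"example {i}" for i in range(12)])
```

The `trace` dicts are what a console full of token streams hides: one row per example, one column per step, which is roughly what a workflow UI renders.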
That doesn't happen that much, but you can show the progress if the person that you're chatting with is a little bit technical, they really enjoy seeing the pull request advancing and seeing all the way to deploy. And then the downsides are mostly around security. You never want to do security by obfuscation. But the truth is that your vector of attack is facilitated by you being open source. But at the same time, it's a good thing because if you're doing anything like bug bounties or stuff like that, you just give much more tools to the bug bounty hunters so that their output is much better. So there's many, many, many trade-offs. I don't believe in the value of the code base per se. I think it's really the people that are on the code base that have the value, and the go-to-market and the product and all of those things that are around the code base. Obviously, that's not true for every code base. If you're working on a very secret kernel to accelerate the inference of LLMs, I would buy that you don't want to be open source. But for product stuff, I really think there's very little risk. Yeah.Alessio [00:23:39]: I signed up for XP1, I was looking, January 2023. I think at the time you were on davinci-003. Given that you had seen GPT-4, how did you feel having to push a product out that was using this model that was so inferior? And you're like, please, just use it today. I promise it's going to get better. Just overall, as a founder, how do you build something that maybe doesn't quite work with the model today, but you're just expecting the new model to be better?Stan [00:24:03]: Yeah, so actually, XP1 was even on a smaller one that was the post-GPT-3 release, small version, so it was... Ada, Babbage... No, no, no, not that far away. But it was the small version of GPT-3, basically. I don't remember its name. Yes, you have a frustration there. 
But at the same time, I think XP1 was designed, was an experiment, but was designed as a way to be useful at the current capability of the model. If you just want to extract data from a LinkedIn page, that model was just fine. If you want to summarize an article in a newspaper, that model was just fine. And so it was really a question of trying to find a product that works with the current capability, knowing that you will always have tailwinds as models get better and faster and cheaper. So that was kind of a... There's a bit of a frustration because you know what's out there and you know that you don't have access to it yet. It's also interesting to try to find a product that works with the current capability.Alessio [00:24:55]: And we highlighted XP1 in our anatomy of autonomy post in April of last year, which was, you know, where are all the agents, right? So now we spent 30 minutes getting to what you're building now. So you basically had a developer framework, then you had a browser extension, then you had all these things, and then you kind of got to where Dust is today. So maybe just give people an overview of what Dust is today and the core thesis behind it. Yeah, of course.Stan [00:25:20]: So Dust, we really want to build the infrastructure so that companies can deploy agents within their teams. We are horizontal by nature because we strongly believe in the emergence of use cases from the people having access to creating an agent, who don't need to be developers. They have to be thinkers. They have to be curious. But anybody can create an agent that will solve an operational thing that they're doing in their day-to-day job. And to make those agents useful, there are two focuses, which is interesting. The first one is an infrastructure focus. You have to build the pipes so that the agent has access to the data. You have to build the pipes such that the agents can take action, can access the web, et cetera. So that's really an infrastructure play. 
Maintaining connections to Notion, Slack, GitHub, all of them is a lot of work. It is boring work, boring infrastructure work, but that's something that we know is extremely valuable in the same way that Stripe is extremely valuable because it maintains the pipes. And we have that dual focus because we're also building the product for people to use it. And there it's fascinating because everything started from the conversational interface, obviously, which is a great starting point. But we're only scratching the surface, right? I think we are at the Pong level of LLM productization. And we haven't invented the C3. We haven't invented Counter-Strike. We haven't invented Cyberpunk 2077. So this is really our mission, to really create the product that lets people equip themselves to just automate away all the work that can be automated or assisted by LLMs.Alessio [00:26:57]: And can you just comment on different takes that people had? So maybe the most open is like AutoGPT. It's just kind of like just trying to do anything. It's like it's all magic. There's no way for you to do anything. Then you had Adept, you know, we had David on the podcast. They're very like super hands-on with each individual customer to build super tailored. How do you decide where to draw the line between this is magic, this is exposed to you, especially in a market where most people don't know how to build with AI at all? So if you expect them to do the thing, they're probably not going to do it. Yeah, exactly.Stan [00:27:29]: So the AutoGPT approach obviously is extremely exciting, but we know that the agentic capabilities of models are not quite there yet. It just gets lost. So we're starting where it works. Same with XP1. And where it works is pretty simple. It's like simple workflows that involve a couple tools where you don't even need to have the model decide which tools to use, in the sense that you just want people to put it in the instructions. 
It's like take that page, do that search, pick up that document, do the work that I want in the format I want, and give me the results. There's no smartness there, right? In terms of orchestrating the tools, it's mostly using English for people to program a workflow where you don't have the constraint of having compatible APIs between the two.Swyx [00:28:17]: That kind of personal automation, would you say it's kind of like an LLM Zapier type of thing? Like if this, then that, and then, you know, do this, then this. You're programming with English?Stan [00:28:28]: So you're programming with English. So you're just saying, oh, do this and then that. You can even create some form of APIs. You say, when I give you the command X, do this. When I give you the command Y, do this. And you describe the workflow. But you don't have to create boxes and create the workflow explicitly. You just need to describe what the tasks are supposed to be and make the tools available to the agent. The tool can be a semantic search. The tool can be querying into a structured database. The tool can be searching on the web. And obviously, the interesting tools that we're only starting to scratch are actually creating external actions, like reimbursing something on Stripe, sending an email, clicking on a button in the admin or something like that.Swyx [00:29:11]: Do you maintain all these integrations?Stan [00:29:13]: Today, we maintain most of the integrations. We do always have an escape hatch for people to kind of custom integrate. But the reality of the market today is that people just want it to work, right? And so it's mostly us maintaining the integrations. As an example, a very good source of information that is tricky to productize is Salesforce. Because Salesforce is basically a database and a UI. And they do whatever the f**k they want with it. And so every company has different models and stuff like that. 
So right now, we don't support it natively. And the type of support, or real native support, will be slightly more complex than just OAuth-ing into it, like is the case with Slack as an example. Because it's probably going to be, oh, you want to connect your Salesforce to us? Give us the SOQL, that's the Salesforce query language. Give us the queries you want us to run on it and inject into the context of Dust. So it's interesting how integrations are not only cool; some of them require a bit of work from the user. And for some of them that are really valuable to our users but we don't support yet, they can just build them internally and push the data to us.Swyx [00:30:18]: I think I understand the Salesforce thing. But let me just clarify, are you using browser automation because there's no API for something?Stan [00:30:24]: No, no, no, no. In that case, we do have browser automation for all the use cases that apply to the public web. But for most of the integrations with the internal systems of the company, it really runs through APIs.Swyx [00:30:35]: Haven't you felt the pull to RPA, browser automation, that kind of stuff?Stan [00:30:39]: I mean, what I've been saying for a long time, maybe I'm wrong, is that if the future is that you're going to stand in front of a computer and looking at an agent clicking on stuff, then I'll hit my computer. And my computer is a big Lenovo. It's black. Doesn't sound good at all compared to a Mac. And if the APIs are there, we should use them. There is going to be a long tail of stuff that doesn't have APIs, but as the world is moving forward, that's disappearing. So the core API value in the past has really been, oh, this old 90s product doesn't have an API, so I need to use the UI to automate. I think for most of the companies that are ICP for us, the scale-ups that are between 500 and 5,000 people, tech companies, most of the SaaS they use have APIs. 
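The shape Stan describes, an agent that is English instructions plus a set of exposed tools, with a user-supplied query standing in for an unsupported system, could be sketched as below. Everything here is illustrative (the tool names, the agent spec, the stub model); it is not Dust's actual API.

```python
# Tools the agent may call. run_sql stands in for a user-supplied query
# pushed to the platform, e.g. a SOQL query against Salesforce.
def semantic_search(query: str) -> str:
    return f"top documents for '{query}'"

def run_sql(query: str) -> str:
    return f"rows from: {query}"

TOOLS = {"semantic_search": semantic_search, "run_sql": run_sql}

# The "program" is plain English: command X does this, otherwise do that.
AGENT = {
    "instructions": (
        "When I give you the command REPORT, run the quarterly SOQL query. "
        "Otherwise, search the knowledge base."
    ),
    "tools": ["semantic_search", "run_sql"],
}

def fake_model(instructions: str, user_input: str) -> tuple[str, str]:
    """Stub for the LLM: reads the English instructions and picks a tool."""
    if user_input.strip().upper().startswith("REPORT"):
        return "run_sql", "SELECT Amount FROM Opportunity"
    return "semantic_search", user_input

def run_agent(agent: dict, user_input: str) -> str:
    tool_name, arg = fake_model(agent["instructions"], user_input)
    assert tool_name in agent["tools"]  # only exposed tools may run
    return TOOLS[tool_name](arg)
```

Calling `run_agent(AGENT, "REPORT Q3")` routes to the SQL tool, while any other input falls back to search; the workflow lives in the instructions, not in explicit boxes and arrows.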
Now there's an interesting question for the open web, because there is stuff that you want to do that involves websites that don't necessarily have APIs. And the current state of web integration from us, and from OpenAI and Anthropic, I don't even know if they have web navigation, but I don't think so, the current state of affairs is really, really broken. Because you have what? You have basically search and headless browsing. But headless browsing, I think everybody's doing basically body.innerText and filling that into the model, right?Swyx [00:31:56]: There's parsers into Markdown and stuff.Stan [00:31:58]: I'm super excited by the companies that are exploring the capability of rendering a web page into a way that is compatible for a model, being able to maintain the selector, so that's basically the place where to click in the page, through that process, expose the actions to the model, have the model select an action in a way that is compatible with the model, which is not a big page of a full DOM that is very noisy, and then being able to decompress that back to the original page and take the action. And that's something that is really exciting and that will kind of change the level of things that agents can do on the web. That I feel exciting, but I also feel that the bulk of the useful stuff that you can do within the company can be done through APIs. The data can be retrieved by API. The actions can be taken through API.Swyx [00:32:44]: For listeners, I'll note that you're basically completely disagreeing with David Luan.Stan: Exactly, exactly. And we've seen it since last summer. Adept is where it is, and Dust is where it is. So Dust is still standing.Alessio [00:32:55]: Can we just quickly comment on function calling? You mentioned you don't need the models to be that smart to actually pick the tools. Have you seen the models not be good enough? Or is it just like, you just don't want to put the complexity in there? 
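The compress-then-decompress loop Stan describes for web pages can be sketched in a few lines: render the page into a short, numbered list of actionable elements (instead of a noisy full DOM), let the model pick a number, then map that pick back to a concrete selector. The page data and selectors below are made up for illustration.

```python
# Stand-in for a parsed DOM: (css_selector, visible_text) pairs.
PAGE = [
    ("#search-box", "Search"),
    ("button.submit", "Submit order"),
    ("a.cancel", "Cancel"),
]

def compress(page: list[tuple[str, str]]) -> tuple[str, dict[int, str]]:
    """Produce the compact action list shown to the model, plus the
    mapping needed to decompress its answer into a real selector."""
    lines, mapping = [], {}
    for i, (selector, text) in enumerate(page):
        lines.append(f"[{i}] {text}")
        mapping[i] = selector
    return "\n".join(lines), mapping

def decompress(choice: int, mapping: dict[int, str]) -> str:
    """Map the model's numeric pick back to something a browser can click."""
    return mapping[choice]

prompt, mapping = compress(PAGE)
# The model sees only "[0] Search", "[1] Submit order", "[2] Cancel";
# suppose it answers 1:
selector = decompress(1, mapping)
```

The model never sees the raw DOM, only the three-line action list, which is the "compatible with the model" representation; the mapping is what makes the action reversible back onto the page.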
Like, is there any room for improvement left in function calling? Or do you feel you usually consistently get the right response, the right parameters and all of that?Stan [00:33:15]: So that's a tricky product question. Because if the instructions are good and precise, then you don't have any issue, because it's scripted for you. And the model will just look at the script and just follow and say, oh, he's probably talking about that action, and I'm going to use it. And the parameters are kind of inferred from the state of the conversation. I'll just go with it. If you provide a very high level, kind of an AutoGPT-esque level in the instructions and provide 16 different tools to your model, yes, we're seeing the models in that state making mistakes. And there is obviously some progress that can be made on the capabilities. But the interesting part is that there is already so much work that can be assisted, augmented, accelerated by just going with pretty simply scripted agents. What I'm excited about by pushing our users to create rather simple agents is that once you have those working really well, you can create meta agents that use the agents as actions. And all of a sudden, you can kind of have a hierarchy of responsibility that will probably get you almost to the point of the AutoGPT value. It requires the construction of intermediary artifacts, but you're probably going to be able to achieve something great. I'll give you an example. Our incidents are shared in Slack in a specific channel, and what we ship is shared in Slack too. We have a weekly meeting where we have a table about incidents and shipped stuff. We're not writing that weekly meeting table anymore. We have an assistant that just goes and finds the right data on Slack and creates the table for us. And that assistant works perfectly. It's trivially simple, right? Take one week of data from that channel and just create the table. 
And then we have in that weekly meeting, obviously, some graphs and reporting about our financials and our progress and our ARR. And we've created assistants to generate those graphs directly. And those assistants work great. By creating those assistants that cover those small parts of that weekly meeting, slowly we're getting to a world where we'll have a weekly meeting assistant. We'll just call it. You don't need to prompt it. You don't need to say anything. It's going to run those different assistants and get that Notion page just ready. And by doing that, if you get there, and that's an objective for us, us using Dust, to get there, you're saving an hour of company time every time you run it. Yeah.Alessio [00:35:28]: That's my pet topic of NPM for agents. How do you build dependency graphs of agents? And how do you share them? Because why do I have to rebuild some of the smaller levels of what you built already?Swyx [00:35:40]: I have a quick follow-up question on agents managing other agents. It's a topic of a lot of research, both from Microsoft and even in startups. Have you discovered best practices for, let's say, a manager agent controlling a bunch of small agents? Is it two-way communication? I don't know if there should be a protocol format.Stan [00:35:59]: To be completely honest, the state we are at right now is creating the simple agents. So we haven't even explored yet the meta agents. We know it's there. We know it's going to be valuable. We know it's going to be awesome. But we're starting there because it's the simplest place to start. And it's also what the market understands. If you go to a company, a random SaaS B2B company, not necessarily specialized in AI, and you take an operational team and you tell them, build some tooling for yourself, they'll understand the small agents. 
If you tell them, build AutoGPT, they'll be like, Auto what?Swyx [00:36:31]: And I noticed that in your language, you're very much focused on non-technical users. You don't really mention API here. You mention instruction instead of system prompt, right? That's very conscious.Stan [00:36:41]: Yeah, it's very conscious. It's a mark of our designer, Ed, who kind of pushed us to create a friendly product. I was knee-deep into AI when I started, obviously. And my co-founder, Gabriel, was at Stripe as well. We started a company together that got acquired by Stripe 15 years ago. After that, he was at Alan, a healthcare company in Paris. So he was a little bit less knee-deep in AI, but really focused on product. And I didn't realize how important it is to make that technology not scary to end users. It didn't feel scary to me, but it was really seen by Ed, our designer, that it was feeling scary to the users. And so we were very proactive and very deliberate about creating a brand that feels not too scary and creating a wording and a language, as you say, that really tries to communicate the fact that it's going to be fine. It's going to be easy. You're going to make it.Alessio [00:37:34]: And another big point that David had about Adept is we need to build an environment for the agents to act. And then if you have the environment, you can simulate what they do. How's that different when you're interacting with APIs and you're kind of touching systems that you cannot really simulate? If you call the Salesforce API, you're just calling it.Stan [00:37:52]: So I think that goes back to the DNA of the companies, which are very different. Adept, I think, was a product company with a very strong research DNA, and they were still doing research. One of their goals was building a model. And that's why they raised a large amount of money, et cetera. We are 100% deliberately a product company. We don't do research. We don't train models. We don't even run GPUs. 
We're using the models that exist, and we try to push the product boundary as far as possible with the existing models. So that creates an issue. Indeed, to answer your question, when you're interacting in the real world, well, you cannot simulate, so you cannot improve the models. Even improving your instructions is complicated for a builder. The hope is that you can use models to evaluate the conversations so that you can get at least feedback, and you could get quantitative information about the performance of the assistants. But if you take an actual trace of an interaction of humans with those agents, it is, even for us humans, extremely hard to decide whether it was a productive interaction or a really bad interaction. You don't know why the person left. You don't know if they left happy or not. So being extremely, extremely, extremely pragmatic here, it becomes a product issue. We have to build a product that incentivizes the end users to provide feedback so that, as a first step, the person that is building the agent can iterate on it. As a second step, maybe later when we start training models and post-training, et cetera, we can optimize around that for each of those companies. Yeah.Alessio [00:39:17]: Do you see in the future products offering kind of like a simulation environment, the same way all SaaS now kind of offers APIs to build programmatically? Like in cybersecurity, there are a lot of companies working on building simulation environments so that then you can use agents to red team, but I haven't really seen that.Stan [00:39:34]: Yeah, no, me neither. That's a super interesting question. I think it's really going to depend on how much, because you need to simulate to generate data, and you need data to train models. And the question at the end is, are we going to be training models or are we just going to be using frontier models as they are? On that question, I don't have a strong opinion. 
It might be the case that we'll be training models because in all of those AI-first products, the model is so close to the product surface that as you get big and you want to really own your product, you're going to have to own the model as well. Owning the model doesn't mean doing the pre-training, that would be crazy. But at least having an internal post-training realignment loop, it makes a lot of sense. And so if we see many companies going towards that all the time, then there might be incentives for the SaaSes of the world to provide assistance in getting there. But at the same time, there's a tension because those SaaSes, they don't want to be interacted with by agents, they want the human to click on the button. Yeah, they've got to sell seats. Exactly.Swyx [00:40:41]: Just a quick question on models. I'm sure you've used many, probably not just OpenAI. Would you characterize some models as better than others? Do you use any open source models? What have been the trends in models over the last two years?Stan [00:40:53]: We've seen over the past two years kind of a bit of a race in between models. And at times, it's the OpenAI model that is the best. At times, it's the Anthropic models that are the best. Our take on that is that we are agnostic and we let our users pick their model. Oh, they choose? Yeah, so when you create an assistant or an agent, you can just say, oh, I'm going to run it on GPT-4, GPT-4 Turbo, or...Swyx [00:41:16]: Don't you think for the non-technical user, that is actually an abstraction that you should take away from them?Stan [00:41:20]: We have a sane default. So we move the default to the latest model that is cool. And we have a sane default, and it's actually not very visible. In our flow to create an agent, you would have to go into advanced settings and pick your model. So this is something that the technical person will care about. 
But that's something that obviously is a bit too complicated for the...Swyx [00:41:40]: And do you care most about function calling or instruction following or something else?Stan [00:41:44]: I think we care most about function calling, because you want to... There's nothing worse than a function call including incorrect parameters or being a bit off, because it just drives the whole interaction off.Swyx [00:41:56]: Yeah, so you've got the Berkeley function calling leaderboard.Stan [00:42:00]: These days, it's funny how the comparison between GPT-4o and GPT-4 Turbo is still up in the air on function calling. I personally don't have proof, but I know many people, and I'm probably part of them, who think that GPT-4 Turbo is still better than GPT-4o on function calling. Wow. We'll see what comes out of the o1 class if it ever gets function calling. And Claude 3.5 Sonnet is great as well. They kind of innovated in an interesting way, which was never quite publicized. It's that they have that kind of chain-of-thought step whenever you use a Claude model or Sonnet model with function calling. That chain-of-thought step doesn't exist when you just interact with it just for answering questions. But when you use function calling, you get that step, and it really helps getting better function calling.Swyx [00:42:43]: Yeah, we actually just recorded a podcast with the Berkeley team that runs that leaderboard this week. So they just released V3.Stan [00:42:49]: Yeah.Swyx [00:42:49]: It was V1 like two months ago, and then V2, V3. Turbo is on top.Stan [00:42:53]: Turbo is on top. Turbo is over 4o.Swyx [00:42:54]: And then the third place is xLAM from Salesforce, which is a large action model they've been trying to popularize.Stan [00:43:01]: Yep.Swyx [00:43:01]: o1-mini is actually on here, I think. o1-mini is number 11.Stan [00:43:05]: But arguably, o1-mini hasn't been aligned for that. Yeah.Alessio [00:43:09]: Do you use leaderboards? Do you have your own evals? 
I mean, this is kind of intuitive, right? Like using the newer model is better. I think most people just upgrade. Yeah. What's the eval process like?Stan [00:43:19]: It's funny because I've been doing research for three years, and we have bigger stuff to cook. When you're deploying in a company, one thing where we really spike is that when we manage to activate the company, we have a crazy penetration. The highest penetration we have is 88% daily active users across the entire employee base of the company. The kind of average penetration and activation we have in our current enterprise customers is more like 60% to 70% weekly active. So we basically have the entire company interacting with us. And when you're there, there is so much stuff that matters more than getting evals, than getting the best model. Because there are so many places where you can create products or do stuff that will give you the 80% with the work you do. Whereas deciding if it's GPT-4 or GPT-4 Turbo or et cetera, you know, it'll just give you the 5% improvement. But the reality is that you want to focus on the places where you can really change the direction or change the interaction more drastically. But that's something that we'll have to do eventually because we still want to be serious people.Swyx [00:44:24]: It's funny because in some ways, the model labs are competing for you, right? You don't have to do any effort. You just switch model and then it'll grow. What are you really limited by? Is it additional sources?Stan [00:44:36]: It's not models, right?Swyx [00:44:37]: You're not really limited by quality of model.Stan [00:44:40]: Right now, we are limited by the infrastructure part, which is the ability to connect easily for users to all the data they need to do the job they want to do.Swyx [00:44:51]: Because you maintain all your own stuff. You know, there are companies out there that are starting to provide integrations as a service, right? 
I used to work in an integrations company. Yeah, I know.Stan [00:44:59]: It's just that there are some intricacies about how you chunk stuff and how you process information from one platform to the other. If you look at the end of the spectrum, you could say, oh, I'm going to support Airbyte, and Airbyte has- I used to work at Airbyte.Swyx [00:45:12]: Oh, really?Stan [00:45:13]: That makes sense.Swyx [00:45:14]: They're the French founders as well.Stan [00:45:15]: I know Jean very well. I'm seeing him today. And the reality is that if you look at Notion, Airbyte does the job of taking Notion and putting it in a structured way. But the way it is, it's not really usable to actually make it available to models in a useful way. Because you get all the blocks, details, et cetera, which is useful for many use cases.Swyx [00:45:35]: It's also for data scientists and not for AI.Stan [00:45:38]: The reality of Notion is that sometimes you have a- so when you have a page, there's a lot of structure in it and you want to capture the structure and chunk the information in a way that respects that structure. In Notion, you have databases. Sometimes those databases are real tabular data. Sometimes those databases are full of text. You want to get the distinction and understand that this database should be considered like text information, whereas this other one is actually quantitative information. And to really get a very high quality interaction with that piece of information, I haven't found a solution that will work without us owning the connection end-to-end.Swyx [00:46:15]: That's why I don't invest in, there's Composio, there's All Hands from Graham Neubig. There's all these other companies that are like, we will do the integrations for you. You just, we have the open source community. We'll do off the shelf. 
But then you are so specific in your needs that you want to own it.Swyx [00:46:28]: Yeah, exactly.Stan [00:46:29]: You can talk to Michel about that.Swyx [00:46:30]: You know, he wants to put the AI in there, but you know. Yeah, I will. I will.Stan [00:46:35]: Cool. What are we missing?Alessio [00:46:36]: You know, what are like the things that are sneakily hard that you're tackling, that maybe people don't even realize are really hard?Stan [00:46:43]: The real part, as we kind of touched on throughout the conversation, is really building the infra that works for those agents, because it's tedious work. It's an evergreen piece of work because you always have an extra integration that will be useful to a non-negligible set of your users. What I'm super excited about is that there are so many interactions that shouldn't be conversational interactions and that could be very useful. Basically, we have the firehose of information of those companies, and there's not going to be that many companies that capture the firehose of information. When you have the firehose of information, you can do a ton of stuff with models that are not just accelerating people, but giving them superhuman capability, even with the current model capability, because you can just sift through much more information. An example is documentation repair. If I have the firehose of Slack messages and new Notion pages, if somebody says, I own that page, I want to be updated when there is a piece of information that should update that page, this is now possible. You get an email saying, oh, look at that Slack message. It says the opposite of what you have in that paragraph. Maybe you want to update, or just ping that person. I think there is a lot to be explored on the product layer in terms of what it means to interact productively with those models. 
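The documentation-repair idea, sift the firehose of new messages and ping a page owner when something looks relevant to a page they watch, can be sketched as below. A real system would use embeddings and an LLM judgment; simple word overlap keeps this sketch self-contained, and all the data and names are made up.

```python
# Pages people have claimed ownership of, with their current content.
WATCHED_PAGES = {
    "deploy-runbook": {"owner": "alice", "text": "deploy with the blue green script"},
}

def overlap(a: str, b: str) -> float:
    """Crude relevance score: fraction of the message's words that also
    appear in the page (an embedding similarity would go here instead)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa), 1)

def sift(message: str, threshold: float = 0.3) -> list[str]:
    """Return the owners to ping for one firehose message."""
    return [
        page["owner"]
        for page in WATCHED_PAGES.values()
        if overlap(message, page["text"]) >= threshold
    ]

alerts = sift("the deploy script changed, blue green is gone")
```

The interesting part is the product loop around this scoring step: the alert is a suggestion ("maybe you want to update, or just ping that person"), not an automatic edit.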
And that's a problem that's extremely hard and extremely exciting.

Swyx [00:48:00]: One thing you keep mentioning is infra work. Obviously, Dust is building that infra and serving it in a very consumer-friendly way. You always talk about infra being additional sources, additional connectors. That is very important. But I'm also interested in the vertical infra. There is an orchestrator underlying all these things where you're doing asynchronous work. For example, the simplest one is a cron job: you just schedule things. But also, for if-this-then-that, you have to wait for something to be executed and proceed to the next task. I used to work on an orchestrator as well, Temporal.

Stan [00:48:31]: We use Temporal.

Swyx [00:48:34]: Oh, you use Temporal? How was the experience? I need the NPS.

Stan [00:48:36]: We're doing a self-discovery call now.

Swyx [00:48:39]: But you can also complain to me, because I don't work there anymore.

Stan [00:48:42]: No, we love Temporal. There are some edges that are a bit rough, surprisingly rough. And you would say, why is it so complicated?

Swyx [00:48:49]: It's always versioning.

Stan [00:48:50]: Yeah, stuff like that. But we really love it. And we use it for exactly what you said: managing the entire set of stuff that needs to happen so that, in semi-real time, we get all the updates from Slack or Notion or GitHub into the system. And whenever we see a piece of information go through, we maybe trigger workflows to run agents, because they need to provide alerts to users and stuff like that. And Temporal is great. Love it.

Swyx [00:49:17]: You haven't evaluated others. You don't want to build your own. You're happy with...

Stan [00:49:21]: Oh, no, we're not in the business of replacing Temporal. And Temporal, or any other competitive product, is very general. If it's there, there's an interesting theory about buy versus build.
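The sync-then-trigger pattern Stan describes (connector updates flowing in, upserted idempotently, then firing agent workflows) can be sketched in plain TypeScript. This is an in-memory illustration with invented names, not Dust's implementation, and it leaves out everything Temporal actually provides (durability, retries, scheduling):

```typescript
// A normalized update event from any connector; names are illustrative.
type UpsertEvent = {
  source: "slack" | "notion" | "github";
  docId: string;
  text: string;
};

type AgentTrigger = (event: UpsertEvent) => void;

class SyncPipeline {
  private index = new Map<string, string>(); // docId -> latest text
  private triggers: AgentTrigger[] = [];

  // Register an agent workflow to fire on real changes.
  onUpsert(trigger: AgentTrigger): void {
    this.triggers.push(trigger);
  }

  // Idempotent upsert: skip unchanged documents, so retried syncs
  // (which an orchestrator like Temporal will happily re-run) don't
  // re-fire agents. Returns whether anything actually changed.
  ingest(event: UpsertEvent): boolean {
    if (this.index.get(event.docId) === event.text) return false;
    this.index.set(event.docId, event.text);
    for (const t of this.triggers) t(event);
    return true;
  }
}
```

In the real system, `ingest` would be an activity inside a per-connector workflow, which is precisely the kind of plumbing worth buying rather than building.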
I think in that case, when you're a high-growth company, your buy-build trade-off is very much on the side of buy. Because if you have the capability, you're just going to be saving time; you can focus on your core competency, etc. And it's funny, because we're starting to see the post-high-growth companies, the post-SKF companies, going back on that trade-off, interestingly. So that's the Klarna news about removing Zendesk and Salesforce. Do you believe that, by the way?

Alessio [00:49:56]: Yeah, I did a podcast with them.

Stan [00:49:58]: Oh, yeah?

Alessio [00:49:58]: It's true.

Swyx [00:49:59]: No, no, I know.

Swyx [00:50:00]: Of course they say it's true, but also, how well is it going to go?

Stan [00:50:02]: So I'm not talking about deflecting the customer traffic. I'm talking about building AI on top of Salesforce and Zendesk, basically, if I understand correctly. And all of a sudden, your product surface becomes much smaller, because you're interacting with an AI system that will take some actions. And so all of a sudden, you don't need the product layer anymore. And you realize that, oh, those things are just databases that I pay a hundred times the price for. Because you're a post-SKF company and you have tech capabilities, you are incentivized to reduce your costs, and you have the capability to do so. And then it makes sense to just scratch the SaaS away. So it's interesting that we might see kind of a bad time for SaaS in post-hyper-growth tech companies. It's still a big market, but it's not that big, because if you're not a tech company, you don't have the capabilities to reduce that cost. If you're a high-growth company, you're always going to be buying, because you go faster that way. But that's an interesting new space, a new category of companies that might remove some SaaS.

Swyx [00:51:02]: Yeah, Alessio's firm has an interesting thesis on the future of SaaS in AI.

Alessio [00:51:05]: Service as a software, we call it.
It's basically like, well, the most extreme version is: why is there any software at all? You know, ideally, it's all a labor interface where you're asking somebody to do something for you, whether that's a person, an AI agent, or whatnot.

Stan [00:51:17]: Yeah, yeah, that's interesting.

Swyx [00:51:19]: I have to ask: are you paying for Temporal Cloud, or are you self-hosting?

Stan [00:51:22]: Oh, no, no, we're paying, we're paying.

Swyx [00:51:24]: Oh, okay, interesting.

Stan [00:51:26]: We're paying way too much. It's crazy expensive, but it makes us...

Swyx [00:51:28]: That's why, as a shareholder, I like to hear that.

Stan [00:51:31]: It makes us go faster, so we're happy to pay.

Swyx [00:51:33]: Other things in the infra stack? I just want a list for other founders to think about. Ops, API gateway, evals, you know, anything interesting there that you build or buy?

Stan [00:51:41]: I mean, there's always an interesting question there. We've been building a lot around the interface between models and Dust, because Dust, the original version, was an orchestration platform, and we basically provide a unified interface to every model provider.

Swyx [00:51:56]: That's what I call a gateway.

Stan [00:51:57]: We have that because Dust was that, and so we continued building upon it, and we own it. But that's an interesting question: do you want to build that or buy it?

Swyx [00:52:06]: Yeah, I always say LiteLLM is the current open source consensus.

Stan [00:52:09]: Exactly, yeah. There's an interesting question there.

Swyx [00:52:12]: Ops, Datadog, just tracking.

Stan [00:52:14]: Oh yeah, so Datadog is an obvious one... What are the mistakes that I regret? I started as pure JavaScript, not TypeScript, and I think if you're wondering, oh, I want to go fast, I'll do a little bit of JavaScript: no, don't. Just start with TypeScript.
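The unified interface to every model provider that Stan describes (and that a tool like LiteLLM offers off the shelf) boils down to hiding each provider's API behind one request/response shape and routing by model name. A minimal hedged sketch, with invented names and stubbed adapters rather than real provider SDK calls:

```typescript
// One normalized shape for all providers; these names are illustrative.
type ChatRequest = { model: string; prompt: string };
type ChatResponse = { text: string; provider: string };

interface ProviderAdapter {
  supports(model: string): boolean;
  complete(req: ChatRequest): ChatResponse;
}

// Each adapter hides a provider-specific API behind the common interface.
// These stubs just echo; real adapters would call the provider's SDK.
const openaiAdapter: ProviderAdapter = {
  supports: (m) => m.startsWith("gpt-"),
  complete: (req) => ({ text: `stub: ${req.prompt}`, provider: "openai" }),
};

const anthropicAdapter: ProviderAdapter = {
  supports: (m) => m.startsWith("claude-"),
  complete: (req) => ({ text: `stub: ${req.prompt}`, provider: "anthropic" }),
};

// The gateway routes by model name, so callers never touch provider SDKs.
class ModelGateway {
  constructor(private adapters: ProviderAdapter[]) {}

  complete(req: ChatRequest): ChatResponse {
    const adapter = this.adapters.find((a) => a.supports(req.model));
    if (!adapter) throw new Error(`no provider for model ${req.model}`);
    return adapter.complete(req);
  }
}
```

Owning this layer, as Dust does, makes sense when it predates the off-the-shelf options; the build-or-buy question Stan raises is whether you would write it again today.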
I see, okay.

Swyx [00:52:30]: So interesting, you are a research engineer that came out of OpenAI that bet on TypeScript.

Stan [00:52:36]: Well, the reality is that if you're building a product, you're going to be doing a lot of JavaScript, right? And Next, we're using Next as an example. It's