Podcasts about Adept

  • 432 PODCASTS
  • 702 EPISODES
  • 49m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • Jun 11, 2025 LATEST

POPULARITY (chart, 2017–2024)


Best podcasts about Adept

Latest podcast episodes about Adept

Highways Voices
The road safety crisis on our roads, and how to fix it - Jo Shiner on this week's Highways Voices

Jun 11, 2025 · 23:21


Why aren't we more outraged that 1,700 people still die on UK roads each year? Today on Highways Voices we talk road safety because, despite having some of the safest roads globally, road deaths in the UK have plateaued, and efforts to reduce them, through enforcement, awareness, and education, aren't achieving meaningful impact. Subscribe to Highways Voices free on Apple Podcasts, Spotify, Amazon Music, Google Podcasts or Pocket Casts and never miss an episode! Sussex Police Chief Constable Jo Shiner, who's head of Britain's roads policing, is our guest and gives us a real insight into her thoughts on keeping our roads safe, and indeed making them safer. She explains why a National Road Safety Board and collision investigation system could revolutionise how we tackle fatalities, gives her views on how emerging enforcement technologies like AI-powered cameras and in-vehicle sensors could eliminate risky behaviour before it becomes fatal, and discusses the cultural and legislative shifts needed to elevate road safety from afterthought to national priority. If you're worried about road safety, this is a must-listen episode for you! Highways Voices is brought to you with our partners the Transport Technology Forum, LCRIG, ADEPT and ITS UK.

Eye On Sci-Fi Podcast
Episode (254) Sci-Fi Fantasy Short THE ADEPT

Jun 9, 2025 · 3:52


Episode 254 of the EYE ON SCI-FI podcast introduces 'The Adept,' an indie sci-fi fantasy short film by Canadian visual effects artist and composer Adam Stern, also known for the hit short film 'FTL.' The film explores the story of a married scientist couple, Ben and Maddie, who uncover a powerful force that blends science and magic. #scifishort #scifi #fantasy Subscribe to the podcast via RSS, Apple Podcasts, Pocket Casts or Amazon Music. To subscribe to the newsletter, explore the podcast archive, support the podcast, and more, visit the EYE ON SCI-FI Link Tree. Episode links: Watch the sci-fi/fantasy short THE ADEPT; Interview with 'The Adept' creator, VFX professional and composer, Adam Stern.

Hivemind Radio Recap
Uncharted Territories | Hivemind May 2025 Recap (Sleep Token, Sleep Theory, NOVELISTS & more)

Jun 9, 2025 · 189:17


May 2025 was a historic month for the scene for one main reason: Sleep Token's brand new album Even In Arcadia (2:03:17) broke into the mainstream and charted #1 on Billboard. But, in this episode, we also touch on many other fantastic releases from the month including other EPs and albums from Nerv (1:46:10), Phantom Elite (1:53:21), Acres (1:59:10), Sleep Theory (2:38:50), NOVELISTS (2:51:10) and SENNA (2:59:26) plus singles from Dance Gavin Dance, Adept, The Home Team, Honey Revenge, The Amity Affliction, grandson and more! All music discussed this month can be found in this playlist: https://open.spotify.com/playlist/1yivSK8aOGkYpvSgwWthWS?si=8855aaa0618d4b5d Follow us: https://www.tiktok.com/@hivemindradio https://twitter.com/hivemindradio_ https://www.instagram.com/hivemindradio_ https://linktr.ee/hivemindradio Outro theme song licensed from slip.stream: Track: "Lock And Load" Music provided by https://slip.stream Free Download / Stream: https://slip.stream/tracks/49ecc33e-33f1-45b0-88e7-3fe71f4fc951?utm_source=attribution Thanks for listening!

Adepting To Change
Episode #24 - Quarterly Review

Jun 6, 2025 · 24:17


Alex is joined once again by Anna Howard, as they chat through Q1 of Adept in 2025. There are the first winners of the year of the quarterly A-Grade Awards, the highs and lows of contract wins and losses, plus a BIG month for our MD Nick. Adept's LinkedIn - www.linkedin.com/company/adept-corporate-services Adept - www.adeptcorporateservices.co.uk Anna's LinkedIn - www.linkedin.com/in/anna-howard-b15859316/ Alex's LinkedIn - www.linkedin.com/in/alex-mcmahon-bb107877/

Let's Talk Landscape - Der grüne Podcast von hochC Landschaftsarchitekten
#87: Sprünge in Raum und Maßstab – mit Tanja Jauernig (ADEPT)

Jun 5, 2025 · 43:22


This time things get jumpy! We talk about deliberately switching between planning disciplines, locations and scales. Since ADEPT works on its projects in two countries and with a team that is diverse in many respects, we found the right conversation partner for this topic. What differences are there between Denmark and Germany, both in planning and in the wishes of the eventual users? What can we take away for our own planning on the subject of sustainability? Do projects gain in quality when they are handled in an interdisciplinary way? Luisa Balz and Claus Herrmann also talk with her about the interface between inside and outside, and about the advantages and disadvantages of collaboration within one office versus between specialised offices. This episode was recorded live as part of the Hamburger Städtebauseminar. On the evening itself we enjoyed interacting with the audience before and after the recording, which is why this episode is a little shorter than usual. Tanja Jauernig studied urban planning in Hamburg, worked as a project lead at the office Luchterhandt, and has been part of ADEPT since 2018. Since 2021 she has been an Associate Partner at the Hamburg office. ADEPT has offices in Copenhagen and Hamburg. They work across a broad spectrum of architectural disciplines spanning different scales and fields of expertise - from urban planning and strategic development to building architecture, landscape and public space. Let's Talk Landscape covers topics around landscape architecture and is aimed at professionals and anyone interested in urban design. Our guiding principle Gemeinsam.Nachhaltig.Gestalten leads us through a wide range of topics and brings us exciting conversations with interesting guests on the first Thursday of every month.

Highways Voices
Beyond the car: CoMoUK CEO Richard Dilkes on why shared mobility tackles congestion one ride at a time

Jun 3, 2025 · 38:44


Are shared transport solutions the missing piece in solving our congestion, infrastructure, and sustainability puzzle? As our industry constantly grapples with how to reduce car dependency, meet environmental goals, and deliver equitable, cost-effective mobility, we look at one solution today with Richard Dilkes, CEO of CoMoUK – the national charity for shared transport. Subscribe to Highways Voices free on Apple Podcasts, Spotify, Amazon Music, Google Podcasts or Pocket Casts and never miss an episode! In this Highways Voices podcast he explains why shared transport, ranging from car clubs and e-scooters to digital demand-responsive transit, is not just a trend but a strategic tool for modern transport planning, explaining how this approach intersects with public policy, urban design, and real-world user behaviour, aiming to deliver healthier, more connected communities. You'll hear how shared mobility schemes are reshaping the transport ecosystem, but that even in London, there are still many gaps. He talks about e-bikes and e-scooters, their place in the puzzle and how safety issues are being addressed, and he and host Paul Hutton talk mobility hubs too, and how they could make it easier to take hassle-free, quick and convenient car-free journeys. Highways Voices is brought to you with our partners the Transport Technology Forum, LCRIG, ADEPT and ITS UK.

Horror 101 Podcast
Episode 158: Horror 101 - Episode 158: Ghoulies

May 31, 2025 · 62:26


We're going back to the 80s to spotlight a low-budget horror from The Empire days of legendary producer Charles Band. We hope you enjoy as we give the Horror 101 treatment to Luca Bercovici's Ghoulies. Show Highlights: 01:00 Prelude to Terror... 05:15 Before Full Moon... 07:55 You have a Big Hit on your hands... 09:35 PG-13!? 14:00 Ritual Infanticide... 17:05 Inheritance... 19:10 First Party... 23:20 Becoming an Adept... 26:00 Weird Sexy Time... 27:55 Summoned Dwarves... 31:55 Resurrection Ritual... 34:23 Malcolm's Ghoulie Massacre... 43:50 Black Magic Showdown... 47:45 The Ending... 51:00 Scoring the film... 59:00 Conclusion! Thanks for listening!

Monument Techno Podcast
MNMT Recordings: Formant Value (live) — Omen Wapta Weekender, Garage Noord 2025

May 26, 2025 · 54:05


Recorded during the early hours of Sunday morning at Garage Noord at the Omen Wapta Weekender, this live set from Formant Value captures the essence of a weekend that transcended the club experience—where sound, space, and shared energy merged into something truly transformative. Opening in slow motion, the set unfolds with a deep, psychedelic ambient current—echoes, pulses, and dub textures forming a wide sonic horizon. What follows is a steady, organic rise into tribal techno rhythms and hi-tech structures, maintaining a sense of spaciousness even at its most kinetic. It's a set that mirrors the architecture of the night itself: meditative, then elevating, then dissolving again. There are no sharp corners—just flow. Time felt suspended. Movement turned internal. And in the foggy warmth of the dancefloor, listeners were gently guided through a collective dreamspace. Formant Value is known for his unique and sophisticated sound that defies easy categorisation whilst nodding in the direction of downtempo, dub and IDM, with elements of D'n'B, trance and techno. His music has been released by a number of labels including Lowless Records, Well Street Records, Rgd tribe and Annulled Music. Adept at sustaining the fine tension between dancefloor energy and the audiophile precision of intent home listening, Formant Value brings a uniquely fresh take on contemporary club music. Follow: https://soundcloud.com/formant-value https://www.instagram.com/formantvalue/ https://linktr.ee/formantvalue

Highways Voices
ITS European Congress 2025: UK Pavilion "pitwalk"

May 20, 2025 · 33:35


On today's Highways Voices you'll learn about solar-powered surveillance, AI-driven traffic modelling, ghost plate detection and lots of other technologies helping transform our highways. We're on the UK Pavilion at the ITS European Congress in Seville talking about innovations being showcased by a range of UK SMEs that are tackling the daily challenges we face. Subscribe to Highways Voices free on Apple Podcasts, Spotify, Amazon Music, Google Podcasts or Pocket Casts and never miss an episode! Whether it's avoiding gridlock from mismanaged roadworks or combatting the rise in fraudulent number plates, these technologies are not just promising—they're in action. Host Paul Hutton tours the Pavilion to talk to Immense, Now Wireless, AECOM, ITS UK, AGD Systems/MAV, Nicander, Agilysis, AIM, WJ, VESOS and The ITS World Congress 2027. You'll hear how AI-powered simulation tools are helping authorities prevent traffic jams before they happen, learn how new structural monitoring and drone-assisted asset management systems are saving millions and preventing closures and, of course, hear how global ITS partnerships are positioning the UK as a leader ahead of the 2027 World Congress in Birmingham—and what that means for your future projects. Hit play now to hear firsthand how tomorrow's highway solutions are already driving results across the UK and beyond. Highways Voices is brought to you with our partners the Transport Technology Forum, LCRIG, ADEPT and ITS UK.

The State of the Scene (SOTS Podcast)
Lorna Shore RETURN, I Prevail PART WAYS with singer, the MYSTERIOUS President | SOTS Podcast 5/19/25

May 19, 2025 · 124:30


This week Sam and Marcos welcome back deathcore titans Lorna Shore, discuss the separation of I Prevail and Brian Burkheiser, who or what is President(?), Pierce The Veil's insane setlist for new tour, Anthony Fantano murders Sleep Token, Adept return, reviews of new albums from Bury Tomorrow, The Callous Daoboys, Arm's Length, and Sleep Theory plus much more! News: I Prevail and Brian Burkheiser break-up, Sleep Token keeps topping the charts, Trivium and Bullet For My Valentine tour comes to a strange end, Pierce The Veil's insane setlist and more (7:19). Spotlight: Outsider Heart starting at (44:50).  New Music: Lorna Shore, President, Adept, The Rasmus, De'Wayne, and Beauty School Dropout starting at (52:31). Reviews: Bury Tomorrow (1:25:56), Arm's Length (1:36:04), The Callous Daoboys (1:54:13), and Sleep Theory.  Become a Patron to gain early access and exclusive benefits! Patreon: https://www.patreon.com/Sotspodcast Playlist: https://open.spotify.com/playlist/0jp0fpudUz7gvu0SFaXhK3?si=6cddbd5b63564c9a Youtube: https://www.youtube.com/@sotspod Discord: https://discord.com/invite/3egU3Dk Merch: https://www.sotspodcast.com/merch Twitter: https://twitter.com/SOTSPodcast Facebook: https://www.facebook.com/sotspodcast Instagram: https://www.instagram.com/sotspodcast TikTok: https://www.tiktok.com/@sotspodcast  Threads: https://www.threads.net/@sotspodcast?hl=en Bluesky: https://bsky.app/profile/sotspodcast.bsky.social

Highways Voices
ITS European Congress Day 1: Delegates experience how smarter data is revolutionising mobility

May 19, 2025 · 32:02


We're in Seville for the ITS European Congress 2025, discovering how smarter data sharing, seamless standards, and automation are transforming how cities and nations manage traffic and mobility. This episode dives into how cities, governments, and private sector leaders are tackling today's biggest mobility challenges through collaboration, innovation, and smarter infrastructure strategies. Subscribe to Highways Voices free on Apple Podcasts, Spotify, Amazon Music, Google Podcasts or Pocket Casts and never miss an episode! You'll discover how better data sharing and common standards are unlocking the latest predictive traffic modelling and dynamic traffic solutions across cities and countries and hear exclusive insights from UK and European transport leaders on how AI, autonomous mobility, and digital infrastructure are shaping future transport policy. You'll also get a behind-the-scenes look at one of the world's most advanced Amazon distribution centres, revealing transferable lessons for logistics, automation, and traffic control. Tap play now to gain a competitive edge from Europe's top mobility minds and be inspired by innovations you can adapt for your own transport and highway strategies. Highways Voices is brought to you with our partners the Transport Technology Forum, LCRIG, ADEPT and ITS UK.

Highways Voices
Graffiti-proof and future-ready: Smart fixes for ageing concrete and costly cleanups

May 13, 2025 · 21:44


What if you could slash graffiti removal costs and dramatically cut maintenance downtime on your concrete structures, without compromising durability or environmental compliance? That's what this Highways Voices is all about, as we talk about a subject that isn't often discussed, but should be - graffiti on road infrastructure, plus our ageing concrete bridges and flyovers. We're talking to Fosroc, a leading international manufacturer and supplier of high-performance chemicals for the construction industry, with a particular focus on concrete and cement. Subscribe to Highways Voices free on Apple Podcasts, Spotify, Amazon Music, Google Podcasts or Pocket Casts and never miss an episode! In this edition, Adrian Tatum is joined by Andy Hatch, National Concrete Repair Specification Manager at Fosroc. You'll hear him explaining how he and his colleagues are tackling these problems with innovative, field-tested solutions including a revolutionary non-sacrificial anti-graffiti coating that withstands up to 20 cleans, and a rapid-curing concrete repair material. You'll also find out about the company's full lifecycle approach - from lab R&D to on-site support - to deliver long-lasting protection for critical highway assets. Press play now to hear how you can future-proof your infrastructure with solutions designed to save money, reduce downtime, and protect your assets for the long haul. Highways Voices is brought to you with our partners the Transport Technology Forum, LCRIG, ADEPT and ITS UK.

Highways Voices
Buses, Power, and People: Minister Simon Lightwood joins Highways Voices to champion public transport and inclusion

May 6, 2025 · 26:50


The Local Transport Minister pays tribute to the bus, and those who drive them, on today's Highways Voices as he discusses making services so good people will choose them over driving their own cars. Simon Lightwood is one of our guests on this week's podcast recorded at a Women in Bus and Coach event at the London Transport Museum to celebrate Jill Viner, the capital's first female bus driver. Subscribe to Highways Voices free on Apple Podcasts, Spotify, Amazon Music, Google Podcasts or Pocket Casts and never miss an episode! Mr Lightwood talks about how, across the UK, public transport is undergoing a transformation, but that major roadblocks remain, from outdated infrastructure to a lack of workforce diversity. This episode dives into the real-world strategies that can make bus and coach services more inclusive, efficient, and trusted—key priorities for transport leaders and technology professionals seeking long-term modal shift and user confidence. He discusses offering a carrot rather than stick approach to getting people out of their cars, and also tells of his absolute commitment to giving local authorities power over their transport networks. Also on the podcast you'll hear Women in Bus and Coach founder Louise Cheeseman, and Transport for London Commissioner Andy Lord and COO Claire Mann, who discuss why designing transport systems "for everyone" boosts recruitment, safety, and customer satisfaction, and how TfL is setting a global example in inclusive hiring, active travel integration, and adaptive operations. Highways Voices promises keynote-quality speakers straight to your phone or laptop, and we've done it four times over on this episode. Press play now to hear them! As promised in the podcast, here is a list of Simon Lightwood's responsibilities as Local Transport Minister: local transport (buses, taxis, private hire vehicles, light rail); local transport decarbonisation; local transport accessibility, and cross-cutting transport accessibility; tackling violence against women and girls on the transport network; active travel; e-scooters; modal shift; regions and devolution; the department's relationship with London, including Transport for London; transport connectivity across the union; integrated transport strategy. Highways Voices is brought to you with our partners the Transport Technology Forum, LCRIG, ADEPT and ITS...

Highways Voices
Smart Moves: How Europe's Transport Future Is Taking Shape at the ITS Congress in Seville

Apr 29, 2025 · 26:45


Are you ready to see how innovation and cutting-edge solutions are reshaping the future of transport across Europe? As transport networks evolve under pressure to decarbonise and modernise, decision-makers face the challenge of balancing ambitious sustainability goals with real-world infrastructure and social needs. Subscribe to Highways Voices free on Apple Podcasts, Spotify, Amazon Music, Google Podcasts or Pocket Casts and never miss an episode! In this episode of Highways Voices, you'll discover how the ITS European Congress in Seville is spotlighting practical strategies — from bi-directional vehicle-to-grid technology to large-scale citizen-centred mobility pilots — that are crucial for overcoming today's transition hurdles. You'll gain insights into how collaborative, empathy-driven design is redefining sustainable urban mobility, understand how bi-directional charging and energy grid optimisation are reshaping traffic management KPIs and discover large-scale, European projects like MetaCase and the City Moonshot that are setting new benchmarks for electric and autonomous transport solutions. Highways Voices will be reporting from the ITS European Congress from 19-21 May, so hit play now to hear a preview discussing the real-world innovations and strategic insights that will shape the future of highways and smart mobility at Seville's landmark event. Highways Voices is brought to you with our partners the Transport Technology Forum, LCRIG, ADEPT and ITS UK.

Highways Voices
Building trust in AI: Brightly Software on how Intelligent Inspections are revolutionising highway maintenance

Apr 23, 2025 · 20:13


Today on Highways Voices we look at building trust in artificial intelligence for use in the highways sector. With shrinking budgets and rising demands from both the public and government, local authorities are under immense pressure to deliver smarter, faster, and more accurate highway maintenance. So today, we look at how AI-driven inspections and integrated asset management systems are not only meeting these demands but redefining what's possible for councils and road managers. Subscribe to Highways Voices free on Apple Podcasts, Spotify, Amazon Music, Google Podcasts or Pocket Casts and never miss an episode! Our guests are Connell McLaughlin, CEO of Route Reports, and Mark Rowe, Strategic Consultant at Brightly Software. They tell interviewer Adrian Tatum that we can be confident that automated inspections provide transparent, verifiable results that rival traditional manual surveys, and that the kit pays for itself with both proactive planning and reactive responses – saving time, money, and carbon. But they warn this is only going to work if it's a joined-up system, meaning real-time collaboration across departments and better decisions with clearer insights. Tap play now to hear how AI and integrated systems are helping highway authorities make every pound – and every pothole – count. Highways Voices is brought to you with our partners the Transport Technology Forum, LCRIG, ADEPT and ITS UK.

StreetsTalksTo
StreetsTalksTo: Droit

Apr 22, 2025 · 24:31


In this episode of #StreetsTalksTo we speak with Brock Arnason, Founder and Chief Executive Officer, and Somerset Pheasant, Chief Strategy Officer, of Droit. We delve into the intricate world of financial regulation and the technological solutions offered by Droit. The core theme revolves around the wave of regulatory changes that financial institutions face globally, stemming from the post-2008 financial crisis. These regulations, including Dodd-Frank and MiFID, necessitate rapid, accurate, and defensible decision-making, a challenge Droit addresses through its Adept platform. Droit's solution centers on providing a technology-driven approach to computational law, enabling institutions to automate regulatory compliance. The Adept platform processes millions of daily inquiries, ensuring transactions adhere to legal requirements with sub-millisecond latency. A key aspect of Droit's methodology is fostering consensus through initiatives like the Endoxa Consortium. This collaborative approach allows industry practitioners to define best practices and mutualise the cost of adapting to regulatory shifts. The discussion also highlights the complexities of international coverage. Droit operates globally, serving clients across diverse jurisdictions, each with unique regulatory nuances. While the fundamental principles of compliance remain consistent, the company adapts its solutions to address specific regional requirements. We touch upon the varying speeds of regulatory implementation across different regions, particularly contrasting the US with other global markets, which creates additional challenges for clients. Looking ahead, client priorities and Droit's focus for 2025 emphasise proactive planning and adherence to evolving standards. Clients are urged to adopt best practices, particularly in areas like position reporting, exchange-traded derivative reporting, and transaction reporting. Droit aims to support these efforts by providing tools for pre-trade decision-making and ensuring regulatory transparency. The overarching message is the importance of leveraging technology and industry collaboration to navigate the ever-changing landscape of financial regulation.

Highways Voices
How Cornwall's Cormac is Redefining Highway Maintenance – The Power of the Teckal Model on Highways Voices

Apr 15, 2025 · 26:31


This week on Highways Voices, we examine the way councils can deliver their highway maintenance by using an arms-length wholly owned company. Our guest is Dominic Bostock, the Managing Director of Cornwall-based Cormac, a company operated under the Teckal approach, which is a procurement exemption that allows public authorities to contract directly with a legally separate entity that is owned and controlled by them, without needing to go through a full public procurement process. Subscribe to Highways Voices free on Apple Podcasts, Spotify, Amazon Music, Google Podcasts or Pocket Casts and never miss an episode! Adrian Tatum leads the conversation today, finding out how, for local and regional authorities under pressure to deliver more with less, the Teckal approach has empowered Cornwall to deliver top-quartile road conditions, grow resident satisfaction by 25%, and generate over £60 million in verified social value—without being a drain on resources. In today's podcast you will learn how an integrated highway and environmental services model drives both operational efficiency and community impact, and how a robust set of governance rules and a legal structure were put in place. You'll also hear how Cormac can support innovation, training, and strategic regional growth, and its strategies to improve road longevity, reduce carbon, and nurture local talent in a constrained funding environment. Hit play now to find out how you could benefit from some successful ideas. Highways Voices is brought to you with our partners the Transport Technology Forum, LCRIG, ADEPT and ITS UK.

Adepting To Change
Episode #22 – Adept's A Grade Awards

Apr 11, 2025 · 20:38


Alex is joined in the studio by our Head Office team of Anna Howard and Chris Holt. In this episode we recount our recent Annual A Grade Awards, give you the rundown of the winners, and look at what we have in store already for next year's awards! We also take a look at what's coming up socially for Adept over the next few months. Adept - https://www.adeptcorporateservices.co.uk/ Adept's LinkedIn - https://www.linkedin.com/company/adept-corporate-services Anna's LinkedIn - https://www.linkedin.com/in/anna-howard-b15859316/ Chris's LinkedIn - https://www.linkedin.com/in/christopher-holt-8109a912b/ Alex's LinkedIn - https://www.linkedin.com/in/alex-mcmahon-bb107877/

Highways Voices
Changing mobility options using a driverless minibus - Professor Phil Blythe explains

Apr 8, 2025 · 19:41


How are we getting on introducing driverless vehicles into our transport network? Well, we're further forward thanks to some new research in Sunderland which we find out about today with Newcastle University's Professor Phil Blythe who tells us about a trial service between a transport interchange and the city's hospital. Subscribe to Highways Voices free on Apple Podcasts, Spotify, Amazon Music, Google Podcasts or Pocket Casts and never miss an episode! In this episode of Highways Voices, Professor Blythe discusses how the real-world trial is not only mapping out the transformation of mobility access for patients and staff but also addressing the urgent shortage of commercial drivers and paving the way for smarter, more connected urban transport systems. For decision-makers grappling with service gaps, labour challenges, and climate goals, this is a look into a near-future solution already in motion. In this episode you will learn how autonomous public transport is being used to solve real, local accessibility challenges in complex traffic environments, discover how Newcastle University is evaluating user trust, environmental benefits, and cost efficiency to guide future adoption and get insights on scalable models and how public-private partnerships are accelerating deployment of smart transport technologies across the UK. Hit play now to find out how autonomous vehicles are quietly reshaping the future of urban transport—starting with a hill, a hospital, and a city determined to lead. Highways Voices is brought to you with our partners the Transport Technology Forum, LCRIG, ADEPT and ITS UK.

Adepting To Change
Episode #23 – Autism Awareness

Apr 2, 2025 · 33:34


Alex is joined in the studio by our Head Office team of Anna Howard and Chris Holt, who have in tow their children: Jacob Howard & Harvey Holt. In this episode, Chris & Anna speak candidly about their respective journeys with autism alongside their sons - from spotting the early signs, through diagnosis, to what teenage and adult life looks like. We also delve into what it's like to manage family life and working life whilst having a child with autism – a subject which, to this day, can still be quite taboo! Adept - https://www.adeptcorporateservices.co.uk/ Adept's LinkedIn - https://www.linkedin.com/company/adept-corporate-services Anna's LinkedIn - https://www.linkedin.com/in/anna-howard-b15859316/ Chris's LinkedIn - https://www.linkedin.com/in/christopher-holt-8109a912b/ Alex's LinkedIn - https://www.linkedin.com/in/alex-mcmahon-bb107877/

Highways Voices
Net zero by the roadside: How Live Labs 2 is helping cut highways emissions

Apr 1, 2025 · 24:01


How can cutting grass on the roadside help power the very vehicles that maintain our highways, while also slashing carbon emissions? Across the UK, local highways authorities are facing mounting pressure to decarbonise without sacrificing safety, budget, or reliability. The Live Labs 2 project has been working on seven real-world trials shaping the roads of tomorrow — from street lighting alternatives to circular biofuel solutions — helping turn innovation into business as usual. Subscribe to Highways Voices free on Apple Podcasts, Spotify, Amazon Music, Google Podcasts or Pocket Casts and never miss an episode! Thanks to our guest, Programme Director Giles Perkins, in today's podcast you will, among other things: discover how data-driven street lighting strategies are reducing carbon while improving safety in the East Riding of Yorkshire; learn how the UK's new Centre of Excellence is ranking carbon-saving innovations for scalable impact across regions; and uncover how behavioural insights and collaborative systems-thinking are breaking down procurement and legislative barriers to accelerate Net Zero. Hit play now to hear how Live Labs 2 is creating a blueprint for future-ready roads — and how your organisation can join the charge. Highways Voices is brought to you with our partners the Transport Technology Forum, LCRIG, ADEPT and ITS UK.

Mob Rules Mobcast
Only Hands ! Adept a Can't / Grim After Dark

Mar 31, 2025 · 77:20


podcast #warhammer #warhammer40k #warhammercommunity #gamesworkshop https://warhammertv.com/player/25504/stream?assetType=episodes&playlist_id=5 New episodes LIVE every Tuesday at 10PM EST / 7PM PST! Also available as a podcast wherever you get yours. www.grimafterdark.com for links to all our stuff! Hosted by: Jon Quennell, Danny McDevitt and Val Heffelfinger Produced by: Tech Priest Dickie Executive Producer: Nick Horton

Betreutes Fühlen
Das Gute sehen - eine neue Therapie?

Mar 17, 2025 · 62:36


The absence of misfortune and catastrophe does not automatically mean happiness and contentment. That insight is slowly arriving in psychotherapy too, which in the past has mostly concerned itself with clinical disorders and has therefore tended to aim at removing the misfortune and the catastrophes. But wouldn't it be nice if we could be happy again? This time Leon and Atze look at therapy approaches that put positive feelings in focus, and at what we can take from them for our everyday lives. Feel well looked after, Leon & Atze. Start of today's topic: 12:07 min. Advance tickets Münster 2025: https://betreutes-fuehlen.ticket.io/ Instagram: https://www.instagram.com/leonwindscheid/ https://www.instagram.com/atzeschroeder_offiziell/ The Instagram account for Betreutes Fühlen: https://www.instagram.com/betreutesfuehlen/ You can find more about our advertising partners here: https://linktr.ee/betreutesfuehlen Tickets: Atze: https://www.atzeschroeder.de/#termine Leon: https://leonwindscheid.de/tour/ Sources: https://www.scientificamerican.com/article/new-psychotherapies-that-focus-on-positive-experiences-could-better-treat/ Study on PAT: Craske, M. G., Meuret, A. E., Echiverri-Cohen, A., Rosenfield, D., & Ritz, T. (2023). Positive affect treatment targets reward sensitivity: A randomized controlled trial. Journal of Consulting and Clinical Psychology. https://pmc.ncbi.nlm.nih.gov/articles/PMC10213148/ Study on ADepT: Dunn, B. D., Widnall, E., Warbrick, L., Warner, F., Reed, N., Price, A., ... & Kuyken, W. (2023). Preliminary clinical and cost effectiveness of augmented depression therapy versus cognitive behavioural therapy for the treatment of anhedonic depression (ADepT): a single-centre, open-label, parallel-group, pilot, randomised, controlled trial. EClinicalMedicine. https://www.thelancet.com/pdfs/journals/eclinm/PIIS2589-5370(23)00261-4.pdf Study on SkillJoy: LaFreniere, L. S., & Newman, M. G. (2023). Reducing contrast avoidance in GAD by savoring positive emotions: Outcome and mediation in a randomized controlled trial. Journal of Anxiety Disorders. https://pmc.ncbi.nlm.nih.gov/articles/PMC9976801/ Editing: Andy Hartard Production: Murmel Productions

Unsupervised Learning
Ep 55: Head of Amazon AGI Lab David Luan on DeepSeek's Significance, What's Next for Agents & Lessons from OpenAI

Feb 19, 2025 · 43:49


David is an OG in AI who has been at the forefront of many of the major breakthroughs of the past decade. His resume: VP of Engineering at OpenAI, a key contributor to Google Brain, co-founder of Adept, and now leading Amazon's SF AGI Lab. In this episode we focused on how far test-time compute gets us, the real implications of DeepSeek, what agent milestones he's looking for and more. [0:00] Intro [1:14] DeepSeek Reactions and Market Implications [2:44] AI Models and Efficiency [4:11] Challenges in Building AGI [7:58] Research Problems in AI Development [11:17] The Future of AI Agents [15:12] Engineering Challenges and Innovations [19:45] The Path to Reliable AI Agents [21:48] Defining AGI and Its Impact [22:47] Challenges and Gating Factors [24:05] Future Human-Computer Interaction [25:00] Specialized Models and Policy [25:58] Technical Challenges and Model Evaluation [28:36] Amazon's Role in AGI Development [30:33] Data Labeling and Team Building [36:37] Reflections on OpenAI [42:12] Quickfire. With your co-hosts: @jacobeffron - Partner at Redpoint, Former PM Flatiron Health @patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn @ericabrescia - Former COO Github, Founder Bitnami (acq'd by VMWare) @jordan_segall - Partner at Redpoint

The Engineering Leadership Podcast
Upskilling: practices to improve hiring, storytelling & structuring personal/leadership problems with engineering principles #208

Feb 18, 2025 · 43:39


ABOUT COLLEEN TARTOW: Colleen Tartow, Ph.D. is Field CTO and Head of Strategy at VAST Data and has 20+ years of experience in data, analytics, engineering, and consulting. Adept at assisting organizations in deriving value from a data-driven culture, she has successfully led diverse data, engineering, and analytics teams through the development of complex global data management solutions and architecting enterprise data systems. Her demonstrated excellence in data, engineering, analytics, and diversity leadership makes her a trusted senior advisor among executives. An experienced speaker, author, valued mentor and startup advisor, Colleen holds degrees in astrophysics and lives in Massachusetts. ABOUT JIM LIU: Jim Liu is an accomplished engineering leader with a track record of driving business outcomes at companies like StockX and Nordstrom. He is also an active community builder with Engineering Leader Community and Angel Investor communities. Jim and his family reside in Seattle, WA. ABOUT DIVYA ALAVARTHI: Divya Alavarthi is an experienced engineering and business leader with 14+ years of expertise in architecture, engineering, product delivery, pre-sales, professional services, and organizational leadership. She developed Salesforce Platform architecture standards, best practices, and minimal viable architectures. She supported a talent pool of 5000+ architects and developers resulting in improved strategic agility, speed to market, and business value in large-scale multi-cloud implementations. This episode is brought to you by Clipboard Health. Clipboard Health is looking for the next generation of exceptional software engineering leaders, not just managers. They're a profitable unicorn, backed by top-tier investors, and they take the craft of engineering management seriously. Clipboard Health matches highly qualified healthcare workers with nearby facilities to fulfill millions of shifts a year - revolutionizing healthcare staffing with a fast, flexible, and user-friendly platform. Learn more & browse their open roles at clipboardhealth.com/engineering SHOW NOTES: The importance of leadership in hiring (1:29); The Tartow Method Explained: Key aspects of a successful hiring practice (3:57); How to build out the interview process & ask the right questions (6:14); Behavioral Interviews and good responses: Tips for gaining clarity from interviewees on abstract skills (7:52); Where eng leaders can start building their hiring skill set (9:16); Colleen's experience co-leading ELC Boston & advice for 1st time event attendees (10:36); Understanding how to model problems as engineering challenges (14:41); How to use an engineering mindset to tackle personal problems (16:35); Jim's process for deconstructing problems & solving them like an engineer (18:38); Tips for building / applying your skill set around abstracting problems (21:27); Jim's perspective on getting involved with a local ELC community (24:36); Ways to help make the most out of your first ELC local experience (27:05); Divya shares about the power of storytelling in engineering leadership (30:07); Build the narrative about your product's business impact (32:24); An example of bringing different demos & storytelling together (34:09); Frameworks for effective storytelling: build a narrative around a product / demo (36:16); How to start improving your storytelling today (37:35); Divya's favorite moments with the ELC Seattle chapter & how to get involved (39:42). LINKS AND RESOURCES: Check out all of our local chapters & get involved here: elc.community/home/clubs. This episode wouldn't have been possible without the help of our incredible production team: Patrick Gallagher - Producer & Co-Host; Jerry Li - Co-Host; Noah Olberding - Associate Producer, Audio & Video Editor https://www.linkedin.com/in/noah-olberding/; Dan Overheim - Audio Engineer, Dan's also an avid 3D printer - https://www.bnd3d.com/; Ellie Coggins Angus - Copywriter, check out her other work at https://elliecoggins.com/about/

iSport podcast
Fight! | Monster do Edenu! „Proč ne,“ říká. Muradov by mohl na Kincla, Humburger je adept do UFC

Feb 4, 2025 · 39:28


In the second of three special episodes of the Fight! podcast with Petr Kníže, we looked under the hood of his Monster Gym, whose face is also Mach Muradov. What is their relationship, and when did they meet? Kníže took the Uzbek fighter in hand right from the start. "The hardest sparring of my life," Muradov recalls. "The UFC was the biggest goal," Kníže reflects on his collaboration with Mach. A return overseas is not on the table at the moment. Within Oktagon, he can also imagine a fight against Patrik Kincl. According to him, Dominik Humburger could get a look into the world's most famous promotion in the future.

The Letters Page
Editor's Note #87

Jan 21, 2025 · 92:52


We were live for the first time this year! Show Notes: Run Time: 1:32:51 Lots of goofs, silliness, mistakes, apologies, answers, justifications, and ridiculous listener interactions (encompassing interactions that are ridiculous AND listeners who are ridiculous — you know who you are), all of which means: it's another Editor's Note! Thank you all for being a part of this. We also announced the upcoming schedule:  Tuesday, February 4th: Episode #308 - Writers' Room: K.N.Y.F.E. can't just punch her way out of a situation Tuesday, February 11th: Episode #309 - Writers' Room: Disparation: a modern hero in the Golden Age Tuesday, February 18th: Editor's Note #88 Tuesday, February 25th: Episode #310 - Creative Process: Other Scions of OblivAeon Also, we had another installment of the Yarniverse from our industrious listener Kyrie Wynne! Featuring... Vis-yarn-ary! Craft-ain Cosmic! The Darn-gent Adept! and Weave! We love them. Join us next week for yet another Creative Process episode! About a distant future limited series? What could that possibly mean?! You'll just have to listen to find out!

Earthdawn Survival Guide
EDSG Episode 243 - Tribes of Cara Fahd: The Thunderers

Jan 8, 2025 · 40:44


* Tribes of Cara Fahd: The Thunderers
* Astrologically-inclined tribe.
* Emerged from their kaer during a storm.
* Adapted and changed from raiding to trade and mercenary service.
* More spiritual bent; influence from their shared experience.
* Are not family-exclusive. Open to other orks joining them.
* Significant portion of their tribe are cavalry.
* Overview of Thunderer training.
* Forces organized around "cerri" -- small, tight-knit groups.
* Adept cerri frequently swear oaths and form group patterns.
* Elite forces, but not egotistical about it. Much respect from other tribes.
* Very organized, routine-oriented tribe.
* Feelings of obsidiman or troll spirituality to the Thunderers.
* "Strong, silent types."
* Questors are rare; they honor them all (thirteen) in turn.
* Association with Cara Fahd.
* Waited for a sign to join Krathis Gron... and got it.
* Have adapted again, settling in to Cara Fahd as their home.
* Current chief is Krathis Gron's primary military advisor.
* True believers, but not as dramatic about it.
* Speculation on the Thunderers' role in the Second Theran War.
* Rebuilding the ruins of an old Cara Fahd city - New Revalk.
* Influential members of the Thunderers.
* Clever twist/deconstruction on the stereotypical ork tribe.

Find and Follow: Email: edsgpodcast@gmail.com YouTube: https://www.youtube.com/@EDSGPodcast Find and follow Josh: https://linktr.ee/LoreMerchant Get product information, developer blogs, and more at www.fasagames.com FASA Games on Facebook: https://www.facebook.com/fasagamesinc Official Earthdawn Facebook Group: https://www.facebook.com/groups/officialearthdawn FASA Games Discord Channel: https://discord.gg/uuVwS9u Earthdawn West Marches: https://discord.gg/hhHDtXW

The Doctor Who Podcast
The Echo Chamber #10 – Winter for the Adept

Jan 8, 2025 · 29:56


The DWP Echo Chamber returns with the final episode of Series Two! In Episode 10, James and Michele talk about Winter for the Adept by Andrew Cartmel, released in July 2000 - the tenth story in Big Finish's main range. Listening instructions are very straightforward - Buy Winter for the Adept from Big Finish Productions for just a few pounds (if you don't own it already of course!) Listen to it! Join us in the Echo Chamber, listen to what we thought and join in the discussion! The Echo Chamber will be back with Series 3 later in 2025. Enjoy the show!

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Applications for the 2025 AI Engineer Summit are up, and you can save the date for AIE Singapore in April and AIE World's Fair 2025 in June.Happy new year, and thanks for 100 great episodes! Please let us know what you want to see/hear for the next 100!Full YouTube Episode with Slides/ChartsLike and subscribe and hit that bell to get notifs!Timestamps* 00:00 Welcome to the 100th Episode!* 00:19 Reflecting on the Journey* 00:47 AI Engineering: The Rise and Impact* 03:15 Latent Space Live and AI Conferences* 09:44 The Competitive AI Landscape* 21:45 Synthetic Data and Future Trends* 35:53 Creative Writing with AI* 36:12 Legal and Ethical Issues in AI* 38:18 The Data War: GPU Poor vs. GPU Rich* 39:12 The Rise of GPU Ultra Rich* 40:47 Emerging Trends in AI Models* 45:31 The Multi-Modality War* 01:05:31 The Future of AI Benchmarks* 01:13:17 Pionote and Frontier Models* 01:13:47 Niche Models and Base Models* 01:14:30 State Space Models and RWKB* 01:15:48 Inference Race and Price Wars* 01:22:16 Major AI Themes of the Year* 01:22:48 AI Rewind: January to March* 01:26:42 AI Rewind: April to June* 01:33:12 AI Rewind: July to September* 01:34:59 AI Rewind: October to December* 01:39:53 Year-End Reflections and PredictionsTranscript[00:00:00] Welcome to the 100th Episode![00:00:00] Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co host Swyx for the 100th time today.[00:00:12] swyx: Yay, um, and we're so glad that, yeah, you know, everyone has, uh, followed us in this journey. How do you feel about it? 100 episodes.[00:00:19] Alessio: Yeah, I know.[00:00:19] Reflecting on the Journey[00:00:19] Alessio: Almost two years that we've been doing this. We've had four different studios. Uh, we've had a lot of changes. You know, we used to do this lightning round. When we first started that we didn't like, and we tried to change the question. The answer[00:00:32] swyx: was cursor and perplexity.[00:00:34] Alessio: Yeah, I love mid journey. It's like, do you really not like anything else?[00:00:38] Alessio: Like what's, what's the unique thing? And I think, yeah, we, we've also had a lot more research driven content. You know, we had like 3DAO, we had, you know. Jeremy Howard, we had more folks like that.[00:00:47] AI Engineering: The Rise and Impact[00:00:47] Alessio: I think we want to do more of that too in the new year, like having, uh, some of the Gemini folks, both on the research and the applied side.[00:00:54] Alessio: Yeah, but it's been a ton of fun. I think we both started, I wouldn't say as a joke, we were kind of like, Oh, we [00:01:00] should do a podcast. And I think we kind of caught the right wave, obviously. And I think your rise of the AI engineer posts just kind of get people. Sombra to congregate, and then the AI engineer summit.[00:01:11] Alessio: And that's why when I look at our growth chart, it's kind of like a proxy for like the AI engineering industry as a whole, which is almost like, like, even if we don't do that much, we keep growing just because there's so many more AI engineers. So did you expect that growth or did you expect that would take longer for like the AI engineer thing to kind of like become, you know, everybody talks about it today.[00:01:32] swyx: So, the sign of that, that we have won is that Gartner puts it at the top of the hype curve right now. So Gartner has called the peak in AI engineering. I did not expect, um, to what level. 
I knew that I was correct when I called it because I did like two months of work going into that. But I didn't know, You know, how quickly it could happen, and obviously there's a chance that I could be wrong.[00:01:52] swyx: But I think, like, most people have come around to that concept. Hacker News hates it, which is a good sign. But there's enough people that have defined it, you know, GitHub, when [00:02:00] they launched GitHub Models, which is the Hugging Face clone, they put AI engineers in the banner, like, above the fold, like, in big So I think it's like kind of arrived as a meaningful and useful definition.[00:02:12] swyx: I think people are trying to figure out where the boundaries are. I think that was a lot of the quote unquote drama that happens behind the scenes at the World's Fair in June. Because I think there's a lot of doubt or questions about where ML engineering stops and AI engineering starts. That's a useful debate to be had.[00:02:29] swyx: In some sense, I actually anticipated that as well. So I intentionally did not. Put a firm definition there because most of the successful definitions are necessarily underspecified and it's actually useful to have different perspectives and you don't have to specify everything from the outset.[00:02:45] Alessio: Yeah, I was at um, AWS reInvent and the line to get into like the AI engineering talk, so to speak, which is, you know, applied AI and whatnot was like, there are like hundreds of people just in line to go in.[00:02:56] Alessio: I think that's kind of what enabled me. People, right? Which is what [00:03:00] you kind of talked about. It's like, Hey, look, you don't actually need a PhD, just, yeah, just use the model. And then maybe we'll talk about some of the blind spots that you get as an engineer with the earlier posts that we also had on on the sub stack.[00:03:11] Alessio: But yeah, it's been a heck of a heck of a two years.[00:03:14] swyx: Yeah.[00:03:15] Latent Space Live and AI Conferences[00:03:15] swyx: You know, I was, I was trying to view the conference as like, so NeurIPS is I think like 16, 17, 000 people. And the Latent Space Live event that we held there was 950 signups. I think. The AI world, the ML world is still very much research heavy. And that's as it should be because ML is very much in a research phase.[00:03:34] swyx: But as we move this entire field into production, I think that ratio inverts into becoming more engineering heavy. So at least I think engineering should be on the same level, even if it's never as prestigious, like it'll always be low status because at the end of the day, you're manipulating APIs or whatever.[00:03:51] swyx: But Yeah, wrapping GPTs, but there's going to be an increasing stack and an art to doing these, these things well. And I, you know, I [00:04:00] think that's what we're focusing on for the podcast, the conference and basically everything I do seems to make sense. And I think we'll, we'll talk about the trends here that apply.[00:04:09] swyx: It's, it's just very strange. So, like, there's a mix of, like, keeping on top of research while not being a researcher and then putting that research into production. So, like, people always ask me, like, why are you covering Neuralibs? 
Like, this is a ML research conference and I'm like, well, yeah, I mean, we're not going to, to like, understand everything Or reproduce every single paper, but the stuff that is being found here is going to make it through into production at some point, you hope.[00:04:32] swyx: And then actually like when I talk to the researchers, they actually get very excited because they're like, oh, you guys are actually caring about how this goes into production and that's what they really really want. The measure of success is previously just peer review, right? Getting 7s and 8s on their um, Academic review conferences and stuff like citations is one metric, but money is a better metric.[00:04:51] Alessio: Money is a better metric. Yeah, and there were about 2200 people on the live stream or something like that. Yeah, yeah. Hundred on the live stream. So [00:05:00] I try my best to moderate, but it was a lot spicier in person with Jonathan and, and Dylan. Yeah, that it was in the chat on YouTube.[00:05:06] swyx: I would say that I actually also created.[00:05:09] swyx: Layen Space Live in order to address flaws that are perceived in academic conferences. This is not NeurIPS specific, it's ICML, NeurIPS. Basically, it's very sort of oriented towards the PhD student, uh, market, job market, right? Like literally all, basically everyone's there to advertise their research and skills and get jobs.[00:05:28] swyx: And then obviously all the, the companies go there to hire them. And I think that's great for the individual researchers, but for people going there to get info is not great because you have to read between the lines, bring a ton of context in order to understand every single paper. So what is missing is effectively what I ended up doing, which is domain by domain, go through and recap the best of the year.[00:05:48] swyx: Survey the field. And there are, like NeurIPS had a, uh, I think ICML had a like a position paper track, NeurIPS added a benchmarks, uh, datasets track. These are ways in which to address that [00:06:00] issue. Uh, there's always workshops as well. Every, every conference has, you know, a last day of workshops and stuff that provide more of an overview.[00:06:06] swyx: But they're not specifically prompted to do so. And I think really, uh, Organizing a conference is just about getting good speakers and giving them the correct prompts. And then they will just go and do that thing and they do a very good job of it. So I think Sarah did a fantastic job with the startups prompt.[00:06:21] swyx: I can't list everybody, but we did best of 2024 in startups, vision, open models. Post transformers, synthetic data, small models, and agents. And then the last one was the, uh, and then we also did a quick one on reasoning with Nathan Lambert. And then the last one, obviously, was the debate that people were very hyped about.[00:06:39] swyx: It was very awkward. And I'm really, really thankful for John Franco, basically, who stepped up to challenge Dylan. Because Dylan was like, yeah, I'll do it. But He was pro scaling. And I think everyone who is like in AI is pro scaling, right? So you need somebody who's ready to publicly say, no, we've hit a wall.[00:06:57] swyx: So that means you're saying Sam Altman's wrong. [00:07:00] You're saying, um, you know, everyone else is wrong. It helps that this was the day before Ilya went on, went up on stage and then said pre training has hit a wall. And data has hit a wall. 
So actually Jonathan ended up winning, and then Ilya supported that statement, and then Noam Brown on the last day further supported that statement as well.[00:07:17] swyx: So it's kind of interesting that I think the consensus kind of going in was that we're not done scaling, like you should believe in a better lesson. And then, four straight days in a row, you had Sepp Hochreiter, who is the creator of the LSTM, along with everyone's favorite OG in AI, which is Juergen Schmidhuber.[00:07:34] swyx: He said that, um, we're pre trading inside a wall, or like, we've run into a different kind of wall. And then we have, you know John Frankel, Ilya, and then Noam Brown are all saying variations of the same thing, that we have hit some kind of wall in the status quo of what pre trained, scaling large pre trained models has looked like, and we need a new thing.[00:07:54] swyx: And obviously the new thing for people is some make, either people are calling it inference time compute or test time [00:08:00] compute. I think the collective terminology has been inference time, and I think that makes sense because test time, calling it test, meaning, has a very pre trained bias, meaning that the only reason for running inference at all is to test your model.[00:08:11] swyx: That is not true. Right. Yeah. So, so, I quite agree that. OpenAI seems to have adopted, or the community seems to have adopted this terminology of ITC instead of TTC. And that, that makes a lot of sense because like now we care about inference, even right down to compute optimality. Like I actually interviewed this author who recovered or reviewed the Chinchilla paper.[00:08:31] swyx: Chinchilla paper is compute optimal training, but what is not stated in there is it's pre trained compute optimal training. And once you start caring about inference, compute optimal training, you have a different scaling law. And in a way that we did not know last year.[00:08:45] Alessio: I wonder, because John is, he's also on the side of attention is all you need.[00:08:49] Alessio: Like he had the bet with Sasha. So I'm curious, like he doesn't believe in scaling, but he thinks the transformer, I wonder if he's still. So, so,[00:08:56] swyx: so he, obviously everything is nuanced and you know, I told him to play a character [00:09:00] for this debate, right? So he actually does. Yeah. He still, he still believes that we can scale more.[00:09:04] swyx: Uh, he just assumed the character to be very game for, for playing this debate. So even more kudos to him that he assumed a position that he didn't believe in and still won the debate.[00:09:16] Alessio: Get rekt, Dylan. Um, do you just want to quickly run through some of these things? Like, uh, Sarah's presentation, just the highlights.[00:09:24] swyx: Yeah, we can't go through everyone's slides, but I pulled out some things as a factor of, like, stuff that we were going to talk about. And we'll[00:09:30] Alessio: publish[00:09:31] swyx: the rest. Yeah, we'll publish on this feed the best of 2024 in those domains. And hopefully people can benefit from the work that our speakers have done.[00:09:39] swyx: But I think it's, uh, these are just good slides. And I've been, I've been looking for a sort of end of year recaps from, from people.[00:09:44] The Competitive AI Landscape[00:09:44] swyx: The field has progressed a lot. You know, I think the max ELO in 2023 on LMSys used to be 1200 for LMSys ELOs. 
And now everyone is at least at, uh, 1275 in their ELOs, and this is across Gemini, Chadjibuti, [00:10:00] Grok, O1.[00:10:01] swyx: ai, which with their E Large model, and Enthopic, of course. It's a very, very competitive race. There are multiple Frontier labs all racing, but there is a clear tier zero Frontier. And then there's like a tier one. It's like, I wish I had everything else. Tier zero is extremely competitive. It's effectively now three horse race between Gemini, uh, Anthropic and OpenAI.[00:10:21] swyx: I would say that people are still holding out a candle for XAI. XAI, I think, for some reason, because their API was very slow to roll out, is not included in these metrics. So it's actually quite hard to put on there. As someone who also does charts, XAI is continually snubbed because they don't work well with the benchmarking people.[00:10:42] swyx: Yeah, yeah, yeah. It's a little trivia for why XAI always gets ignored. The other thing is market share. So these are slides from Sarah. We have it up on the screen. It has gone from very heavily open AI. So we have some numbers and estimates. These are from RAMP. Estimates of open AI market share in [00:11:00] December 2023.[00:11:01] swyx: And this is basically, what is it, GPT being 95 percent of production traffic. And I think if you correlate that with stuff that we asked. Harrison Chase on the LangChain episode, it was true. And then CLAUD 3 launched mid middle of this year. I think CLAUD 3 launched in March, CLAUD 3. 5 Sonnet was in June ish.[00:11:23] swyx: And you can start seeing the market share shift towards opening, uh, towards that topic, uh, very, very aggressively. The more recent one is Gemini. So if I scroll down a little bit, this is an even more recent dataset. So RAM's dataset ends in September 2 2. 2024. Gemini has basically launched a price war at the low end, uh, with Gemini Flash, uh, being basically free for personal use.[00:11:44] swyx: Like, I think people don't understand the free tier. It's something like a billion tokens per day. Unless you're trying to abuse it, you cannot really exhaust your free tier on Gemini. They're really trying to get you to use it. They know they're in like third place, um, fourth place, depending how you, how you count.[00:11:58] swyx: And so they're going after [00:12:00] the Lower tier first, and then, you know, maybe the upper tier later, but yeah, Gemini Flash, according to OpenRouter, is now 50 percent of their OpenRouter requests. Obviously, these are the small requests. These are small, cheap requests that are mathematically going to be more.[00:12:15] swyx: The smart ones obviously are still going to OpenAI. But, you know, it's a very, very big shift in the market. Like basically 2023, 2022, To going into 2024 opening has gone from nine five market share to Yeah. Reasonably somewhere between 50 to 75 market share.[00:12:29] Alessio: Yeah. I'm really curious how ramped does the attribution to the model?[00:12:32] Alessio: If it's API, because I think it's all credit card spin. . Well, but it's all, the credit card doesn't say maybe. Maybe the, maybe when they do expenses, they upload the PDF, but yeah, the, the German I think makes sense. I think that was one of my main 2024 takeaways that like. The best small model companies are the large labs, which is not something I would have thought that the open source kind of like long tail would be like the small model.[00:12:53] swyx: Yeah, different sizes of small models we're talking about here, right? 
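For anyone who doesn't track leaderboards, the Elo figures quoted above (roughly 1200 for the 2023 frontier, around 1275 now) translate into head-to-head preference rates via the standard Elo formula. A quick sketch, treating those ratings as rough illustrative values rather than exact Arena numbers:

```python
# What an Elo gap on an arena-style leaderboard means as a head-to-head win rate,
# using the standard Elo formula (base-10 logistic, 400-point scale). The ratings
# are the rough figures from the conversation, not exact leaderboard values.

def expected_win_rate(rating_a: float, rating_b: float) -> float:
    """Probability that model A is preferred over model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

if __name__ == "__main__":
    frontier_2023, frontier_2024 = 1200.0, 1275.0
    p = expected_win_rate(frontier_2024, frontier_2023)
    print(f"P(2024 frontier preferred over 2023 frontier) ~ {p:.1%}")  # ~60.6%
    # So "the whole field moved up ~75 Elo" cashes out to roughly a 60/40 split
    # against last year's best model in blind pairwise votes.
```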
Like so small model here for Gemini is AB, [00:13:00] right? Uh, mini. We don't know what the small model size is, but yeah, it's probably in the double digits or maybe single digits, but probably double digits. The open source community has kind of focused on the one to three B size.[00:13:11] swyx: Mm-hmm . Yeah. Maybe[00:13:12] swyx: zero, maybe 0.5 B uh, that's moon dream and that is small for you then, then that's great. It makes sense that we, we have a range for small now, which is like, may, maybe one to five B. Yeah. I'll even put that at, at, at the high end. And so this includes Gemma from Gemini as well. But also includes the Apple Foundation models, which I think Apple Foundation is 3B.[00:13:32] Alessio: Yeah. No, that's great. I mean, I think in the start small just meant cheap. I think today small is actually a more nuanced discussion, you know, that people weren't really having before.[00:13:43] swyx: Yeah, we can keep going. This is a slide that I smiley disagree with Sarah. She's pointing to the scale SEAL leaderboard. I think the Researchers that I talked with at NeurIPS were kind of positive on this because basically you need private test [00:14:00] sets to prevent contamination.[00:14:02] swyx: And Scale is one of maybe three or four people this year that has really made an effort in doing a credible private test set leaderboard. Llama405B does well compared to Gemini and GPT 40. And I think that's good. I would say that. You know, it's good to have an open model that is that big, that does well on those metrics.[00:14:23] swyx: But anyone putting 405B in production will tell you, if you scroll down a little bit to the artificial analysis numbers, that it is very slow and very expensive to infer. Um, it doesn't even fit on like one node. of, uh, of H100s. Cerebras will be happy to tell you they can serve 4 or 5B on their super large chips.[00:14:42] swyx: But, um, you know, if you need to do anything custom to it, you're still kind of constrained. So, is 4 or 5B really that relevant? Like, I think most people are basically saying that they only use 4 or 5B as a teacher model to distill down to something. Even Meta is doing it. So with Lama 3. [00:15:00] 3 launched, they only launched the 70B because they use 4 or 5B to distill the 70B.[00:15:03] swyx: So I don't know if like open source is keeping up. I think they're the, the open source industrial complex is very invested in telling you that the, if the gap is narrowing, I kind of disagree. I think that the gap is widening with O1. I think there are very, very smart people trying to narrow that gap and they should.[00:15:22] swyx: I really wish them success, but you cannot use a chart that is nearing 100 in your saturation chart. And look, the distance between open source and closed source is narrowing. Of course it's going to narrow because you're near 100. This is stupid. But in metrics that matter, is open source narrowing?[00:15:38] swyx: Probably not for O1 for a while. And it's really up to the open source guys to figure out if they can match O1 or not.[00:15:46] Alessio: I think inference time compute is bad for open source just because, you know, Doc can donate the flops at training time, but he cannot donate the flops at inference time. So it's really hard to like actually keep up on that axis.[00:15:59] Alessio: Big, big business [00:16:00] model shift. So I don't know what that means for the GPU clouds. I don't know what that means for the hyperscalers, but obviously the big labs have a lot of advantage. 
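Since "use 405B as a teacher model" comes up a few times, here is what that recipe usually means in practice: sequence-level distillation, where the big model generates synthetic training pairs and the smaller model is fine-tuned on them. This is a generic sketch under that assumption, not Meta's published Llama 3.3 pipeline (logit-level distillation is another common variant), and the model stubs are hypothetical placeholders.

```python
# Generic sketch of sequence-level distillation: a large "teacher" model writes
# completions for a set of prompts, and a smaller "student" model is fine-tuned
# on the resulting (prompt, completion) pairs. The teacher/trainer stubs are
# hypothetical placeholders, not any lab's actual pipeline.
from typing import Callable, List, Tuple

def build_distillation_set(prompts: List[str],
                           teacher_generate: Callable[[str], str]) -> List[Tuple[str, str]]:
    """Ask the teacher model for one completion per prompt."""
    return [(p, teacher_generate(p)) for p in prompts]

def distill(prompts: List[str],
            teacher_generate: Callable[[str], str],
            student_finetune: Callable[[List[Tuple[str, str]]], None]) -> None:
    """Synthesize data with the teacher, then fine-tune the student on it."""
    pairs = build_distillation_set(prompts, teacher_generate)
    student_finetune(pairs)  # ordinary supervised fine-tuning on the synthetic pairs

if __name__ == "__main__":
    teacher = lambda prompt: f"[big-model answer to: {prompt}]"                   # stand-in
    trainer = lambda pairs: print(f"fine-tuning student on {len(pairs)} pairs")   # stand-in
    distill(["Explain KV caching in one paragraph."], teacher, trainer)
```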
Because, like, it's not a static artifact that you're putting the compute in. You're kind of doing that still, but then you're putting a lot of computed inference too.[00:16:17] swyx: Yeah, yeah, yeah. Um, I mean, Llama4 will be reasoning oriented. We talked with Thomas Shalom. Um, kudos for getting that episode together. That was really nice. Good, well timed. Actually, I connected with the AI meta guy, uh, at NeurIPS, and, um, yeah, we're going to coordinate something for Llama4. Yeah, yeah,[00:16:32] Alessio: and our friend, yeah.[00:16:33] Alessio: Clara Shi just joined to lead the business agent side. So I'm sure we'll have her on in the new year.[00:16:39] swyx: Yeah. So, um, my comment on, on the business model shift, this is super interesting. Apparently it is wide knowledge that OpenAI wanted more than 6. 6 billion dollars for their fundraise. They wanted to raise, you know, higher, and they did not.[00:16:51] swyx: And what that means is basically like, it's very convenient that we're not getting GPT 5, which would have been a larger pre train. We should have a lot of upfront money. And [00:17:00] instead we're, we're converting fixed costs into variable costs, right. And passing it on effectively to the customer. And it's so much easier to take margin there because you can directly attribute it to like, Oh, you're using this more.[00:17:12] swyx: Therefore you, you pay more of the cost and I'll just slap a margin in there. So like that lets you control your growth margin and like tie your. Your spend, or your sort of inference spend, accordingly. And it's just really interesting to, that this change in the sort of inference paradigm has arrived exactly at the same time that the funding environment for pre training is effectively drying up, kind of.[00:17:36] swyx: I feel like maybe the VCs are very in tune with research anyway, so like, they would have noticed this, but, um, it's just interesting.[00:17:43] Alessio: Yeah, and I was looking back at our yearly recap of last year. Yeah. And the big thing was like the mixed trial price fights, you know, and I think now it's almost like there's nowhere to go, like, you know, Gemini Flash is like basically giving it away for free.[00:17:55] Alessio: So I think this is a good way for the labs to generate more revenue and pass down [00:18:00] some of the compute to the customer. I think they're going to[00:18:02] swyx: keep going. I think that 2, will come.[00:18:05] Alessio: Yeah, I know. Totally. I mean, next year, the first thing I'm doing is signing up for Devin. Signing up for the pro chat GBT.[00:18:12] Alessio: Just to try. I just want to see what does it look like to spend a thousand dollars a month on AI?[00:18:17] swyx: Yes. Yes. I think if your, if your, your job is a, at least AI content creator or VC or, you know, someone who, whose job it is to stay on, stay on top of things, you should already be spending like a thousand dollars a month on, on stuff.[00:18:28] swyx: And then obviously easy to spend, hard to use. You have to actually use. The good thing is that actually Google lets you do a lot of stuff for free now. So like deep research. That they just launched. Uses a ton of inference and it's, it's free while it's in preview.[00:18:45] Alessio: Yeah. They need to put that in Lindy.[00:18:47] Alessio: I've been using Lindy lately. I've been a built a bunch of things once we had flow because I liked the new thing. It's pretty good. I even did a phone call assistant. Um, yeah, they just launched Lindy voice. 
Yeah, I think once [00:19:00] they get advanced voice mode like capability today, still like speech to text, you can kind of tell.[00:19:06] Alessio: Um, but it's good for like reservations and things like that. So I have a meeting prepper thing. And so[00:19:13] swyx: it's good. Okay. I feel like we've, we've covered a lot of stuff. Uh, I, yeah, I, you know, I think We will go over the individual, uh, talks in a separate episode. Uh, I don't want to take too much time with, uh, this stuff, but that suffice to say that there is a lot of progress in each field.[00:19:28] swyx: Uh, we covered vision. Basically this is all like the audience voting for what they wanted. And then I just invited the best people I could find in each audience, especially agents. Um, Graham, who I talked to at ICML in Vienna, he is currently still number one. It's very hard to stay on top of SweetBench.[00:19:45] swyx: OpenHand is currently still number one. switchbench full, which is the hardest one. He had very good thoughts on agents, which I, which I'll highlight for people. Everyone is saying 2025 is the year of agents, just like they said last year. And, uh, but he had [00:20:00] thoughts on like eight parts of what are the frontier problems to solve in agents.[00:20:03] swyx: And so I'll highlight that talk as well.[00:20:05] Alessio: Yeah. The number six, which is the Hacken agents learn more about the environment, has been a Super interesting to us as well, just to think through, because, yeah, how do you put an agent in an enterprise where most things in an enterprise have never been public, you know, a lot of the tooling, like the code bases and things like that.[00:20:23] Alessio: So, yeah, there's not indexing and reg. Well, yeah, but it's more like. You can't really rag things that are not documented. But people know them based on how they've been doing it. You know, so I think there's almost this like, you know, Oh, institutional knowledge. Yeah, the boring word is kind of like a business process extraction.[00:20:38] Alessio: Yeah yeah, I see. It's like, how do you actually understand how these things are done? I see. Um, and I think today the, the problem is that, Yeah, the agents are, that most people are building are good at following instruction, but are not as good as like extracting them from you. Um, so I think that will be a big unlock just to touch quickly on the Jeff Dean thing.[00:20:55] Alessio: I thought it was pretty, I mean, we'll link it in the, in the things, but. I think the main [00:21:00] focus was like, how do you use ML to optimize the systems instead of just focusing on ML to do something else? Yeah, I think speculative decoding, we had, you know, Eugene from RWKB on the podcast before, like he's doing a lot of that with Fetterless AI.[00:21:12] swyx: Everyone is. I would say it's the norm. I'm a little bit uncomfortable with how much it costs, because it does use more of the GPU per call. But because everyone is so keen on fast inference, then yeah, makes sense.[00:21:24] Alessio: Exactly. Um, yeah, but we'll link that. Obviously Jeff is great.[00:21:30] swyx: Jeff is, Jeff's talk was more, it wasn't focused on Gemini.[00:21:33] swyx: I think people got the wrong impression from my tweet. It's more about how Google approaches ML and uses ML to design systems and then systems feedback into ML. 
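Since speculative decoding is described here as "the norm", a toy version helps explain the trade-off being made: you spend extra GPU work per call so the big model can verify several cheaply drafted tokens at once instead of generating them one by one. This is a greedy, illustrative variant with placeholder "models"; production implementations use a probabilistic accept/reject rule that preserves the target model's sampling distribution.

```python
# Toy greedy speculative decoding: a small draft model proposes k tokens, the big
# target model checks them (conceptually in one batched pass), and we keep the
# longest agreeing prefix plus one corrected/bonus token. The callables below are
# placeholder stand-ins, not real models.
from typing import Callable, List

NextToken = Callable[[List[int]], int]  # greedy next-token given a token prefix

def speculative_step(prefix: List[int], draft: NextToken, target: NextToken,
                     k: int = 4) -> List[int]:
    """One draft-then-verify round; returns the tokens accepted this round."""
    proposal, ctx = [], list(prefix)
    for _ in range(k):                      # 1) cheap autoregressive drafting
        t = draft(ctx)
        proposal.append(t)
        ctx.append(t)

    accepted, ctx = [], list(prefix)        # 2) verification by the target model
    for t in proposal:
        expected = target(ctx)
        if expected != t:                   # first disagreement: keep target's token
            accepted.append(expected)
            return accepted
        accepted.append(t)
        ctx.append(t)
    accepted.append(target(ctx))            # bonus token when all drafts match
    return accepted

if __name__ == "__main__":
    count_up: NextToken = lambda prefix: (prefix[-1] + 1) if prefix else 0
    print(speculative_step([1, 2, 3], draft=count_up, target=count_up))  # [4, 5, 6, 7, 8]
```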
And I think this ties in with Lubna's talk.[00:21:45] Synthetic Data and Future Trends[00:21:45] swyx: on synthetic data where it's basically the story of bootstrapping of humans and AI in AI research or AI in production.[00:21:53] swyx: So her talk was on synthetic data, where like how much synthetic data has grown in 2024 in the pre training side, the post training side, [00:22:00] and the eval side. And I think Jeff then also extended it basically to chips, uh, to chip design. So he'd spend a lot of time talking about alpha chip. And most of us in the audience are like, we're not working on hardware, man.[00:22:11] swyx: Like you guys are great. TPU is great. Okay. We'll buy TPUs.[00:22:14] Alessio: And then there was the earlier talk. Yeah. But, and then we have, uh, I don't know if we're calling them essays. What are we calling these? But[00:22:23] swyx: for me, it's just like bonus for late in space supporters, because I feel like they haven't been getting anything.[00:22:29] swyx: And then I wanted a more high frequency way to write stuff. Like that one I wrote in an afternoon. I think basically we now have an answer to what Ilya saw. It's one year since. The blip. And we know what he saw in 2014. We know what he saw in 2024. We think we know what he sees in 2024. He gave some hints and then we have vague indications of what he saw in 2023.[00:22:54] swyx: So that was the Oh, and then 2016 as well, because of this lawsuit with Elon, OpenAI [00:23:00] is publishing emails from Sam's, like, his personal text messages to Siobhan, Zelis, or whatever. So, like, we have emails from Ilya saying, this is what we're seeing in OpenAI, and this is why we need to scale up GPUs. And I think it's very prescient in 2016 to write that.[00:23:16] swyx: And so, like, it is exactly, like, basically his insights. It's him and Greg, basically just kind of driving the scaling up of OpenAI, while they're still playing Dota. They're like, no, like, we see the path here.[00:23:30] Alessio: Yeah, and it's funny, yeah, they even mention, you know, we can only train on 1v1 Dota. We need to train on 5v5, and that takes too many GPUs.[00:23:37] Alessio: Yeah,[00:23:37] swyx: and at least for me, I can speak for myself, like, I didn't see the path from Dota to where we are today. I think even, maybe if you ask them, like, they wouldn't necessarily draw a straight line. Yeah,[00:23:47] Alessio: no, definitely. But I think like that was like the whole idea of almost like the RL and we talked about this with Nathan on his podcast.[00:23:55] Alessio: It's like with RL, you can get very good at specific things, but then you can't really like generalize as much. And I [00:24:00] think the language models are like the opposite, which is like, you're going to throw all this data at them and scale them up, but then you really need to drive them home on a specific task later on.[00:24:08] Alessio: And we'll talk about the open AI reinforcement, fine tuning, um, announcement too, and all of that. But yeah, I think like scale is all you need. That's kind of what Elia will be remembered for. And I think just maybe to clarify on like the pre training is over thing that people love to tweet. I think the point of the talk was like everybody, we're scaling these chips, we're scaling the compute, but like the second ingredient which is data is not scaling at the same rate.[00:24:35] Alessio: So it's not necessarily pre training is over. It's kind of like What got us here won't get us there. 
In his email, he predicted like 10x growth every two years or something like that. And I think maybe now it's like, you know, you can 10x the chips again, but[00:24:49] swyx: I think it's 10x per year. Was it? I don't know.[00:24:52] Alessio: Exactly. And Moore's law is like 2x. So it's like, you know, much faster than that. And yeah, I like the fossil fuel of AI [00:25:00] analogy. It's kind of like, you know, the little background tokens thing. So the OpenAI reinforcement fine tuning is basically like, instead of fine tuning on data, you fine tune on a reward model.[00:25:09] Alessio: So it's basically like, instead of being data driven, it's like task driven. And I think people have tasks to do, they don't really have a lot of data. So I'm curious to see how that changes, how many people fine tune, because I think this is what people run into. It's like, Oh, you can fine tune llama. And it's like, okay, where do I get the data?[00:25:27] Alessio: To fine tune it on, you know, so it's great that we're moving the thing. And then I really like he had this chart where like, you know, the brain mass and the body mass thing is basically like mammals that scaled linearly by brain and body size, and then humans kind of like broke off the slope. So it's almost like maybe the mammal slope is like the pre training slope.[00:25:46] Alessio: And then the post training slope is like the, the human one.[00:25:49] swyx: Yeah. I wonder what the. I mean, we'll know in 10 years, but I wonder what the y axis is for, for Ilya's SSI. We'll try to get them on.[00:25:57] Alessio: Ilya, if you're listening, you're [00:26:00] welcome here. Yeah, and then he had, you know, what comes next, like agent, synthetic data, inference, compute, I thought all of that was like that.[00:26:05] Alessio: I don't[00:26:05] swyx: think he was dropping any alpha there. Yeah, yeah, yeah.[00:26:07] Alessio: Yeah. Any other new reps? Highlights?[00:26:10] swyx: I think that there was comparatively a lot more work. Oh, by the way, I need to plug that, uh, my friend Yi made this, like, little nice paper. Yeah, that was really[00:26:20] swyx: nice.[00:26:20] swyx: Uh, of, uh, of, like, all the, he's, she called it must read papers of 2024.[00:26:26] swyx: So I laid out some of these at NeurIPS, and it was just gone. Like, everyone just picked it up. Because people are dying for, like, little guidance and visualizations And so, uh, I thought it was really super nice that we got there.[00:26:38] Alessio: Should we do a late in space book for each year? Uh, I thought about it. For each year we should.[00:26:42] Alessio: Coffee table book. Yeah. Yeah. Okay. Put it in the will. Hi, Will. By the way, we haven't introduced you. He's our new, you know, general organist, Jamie. You need to[00:26:52] swyx: pull up more things. One thing I saw that, uh, Okay, one fun one, and then one [00:27:00] more general one. So the fun one is this paper on agent collusion. This is a paper on steganography.[00:27:06] swyx: This is secret collusion among AI agents, multi agent deception via steganography. I tried to go to NeurIPS in order to find these kinds of papers because the real reason Like NeurIPS this year has a lottery system. A lot of people actually even go and don't buy tickets because they just go and attend the side events.[00:27:22] swyx: And then also the people who go and end up crowding around the most popular papers, which you already know and already read them before you showed up to NeurIPS. 
So the only reason you go there is to talk to the paper authors, but there's like something like 10, 000 other. All these papers out there that, you know, are just people's work that they, that they did on the air and they failed to get attention for one reason or another.[00:27:42] swyx: And this was one of them. Uh, it was like all the way at the back. And this is a deep mind paper that actually focuses on collusion between AI agents, uh, by hiding messages in the text that they generate. Uh, so that's what steganography is. So a very simple example would be the first letter of every word.[00:27:57] swyx: If you Pick that out, you know, and the code sends a [00:28:00] different message than that. But something I've always emphasized is to LLMs, we read left to right. LLMs can read up, down, sideways, you know, in random character order. And it's the same to them as it is to us. So if we were ever to get You know, self motivated, underlined LLMs that we're trying to collaborate to take over the planet.[00:28:19] swyx: This would be how they do it. They spread messages among us in the messages that we generate. And he developed a scaling law for that. So he marked, I'm showing it on screen right now, the emergence of this phenomenon. Basically, for example, for Cypher encoding, GPT 2, Lama 2, mixed trial, GPT 3. 5, zero capabilities, and sudden 4.[00:28:40] swyx: And this is the kind of Jason Wei type emergence properties that people kind of look for. I think what made this paper stand out as well, so he developed the benchmark for steganography collusion, and he also focused on shelling point collusion, which is very low coordination. For agreeing on a decoding encoding format, you kind of need to have some [00:29:00] agreement on that.[00:29:00] swyx: But, but shelling point means like very, very low or almost no coordination. So for example, if I, if I ask someone, if the only message I give you is meet me in New York and you're not aware. Or when you would probably meet me at Grand Central Station. That is the Grand Central Station is a shelling point.[00:29:16] swyx: And it's probably somewhere, somewhere during the day. That is the shelling point of New York is Grand Central. To that extent, shelling points for steganography are things like the, the, the common decoding methods that we talked about. It will be interesting at some point in the future when we are worried about alignment.[00:29:30] swyx: It is not interesting today, but it's interesting that DeepMind is already thinking about this.[00:29:36] Alessio: I think that's like one of the hardest things about NeurIPS. It's like the long tail. I[00:29:41] swyx: found a pricing guy. I'm going to feature him on the podcast. Basically, this guy from NVIDIA worked out the optimal pricing for language models.[00:29:51] swyx: It's basically an econometrics paper at NeurIPS, where everyone else is talking about GPUs. And the guy with the GPUs is[00:29:57] Alessio: talking[00:29:57] swyx: about economics instead. [00:30:00] That was the sort of fun one. So the focus I saw is that model papers at NeurIPS are kind of dead. No one really presents models anymore. It's just data sets.[00:30:12] swyx: This is all the grad students are working on. So like there was a data sets track and then I was looking around like, I was like, you don't need a data sets track because every paper is a data sets paper. And so data sets and benchmarks, they're kind of flip sides of the same thing. So Yeah. Cool. 
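The "first letter of every word" scheme is easy to make concrete. The sketch below just turns that conversational example into code (the word lists are hypothetical); it is not the paper's actual cipher, benchmark, or threat model.

```python
# Toy acrostic steganography: the carrier text reads normally, but an agreed-upon
# rule (first letter of each word) carries a hidden message. Word lists are
# hypothetical; this illustrates the idea, not the paper's actual scheme.
from typing import Dict, List

def encode_acrostic(secret: str, vocab: Dict[str, List[str]]) -> str:
    """Build an innocuous-looking sentence whose word initials spell `secret`."""
    return " ".join(vocab[ch][0] for ch in secret.lower())

def decode_acrostic(carrier: str) -> str:
    """Recover the hidden message by reading the first letter of every word."""
    return "".join(word[0] for word in carrier.split())

if __name__ == "__main__":
    vocab = {"h": ["huge"], "i": ["improvements"], "d": ["demand"], "e": ["extra"]}
    carrier = encode_acrostic("hide", vocab)
    print(carrier)                   # "huge improvements demand extra"
    print(decode_acrostic(carrier))  # "hide"
```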
Yeah, if you're a grad student, you're a GPU boy, you kind of work on that.[00:30:30] swyx: And then the, the sort of big model that people walk around and pick the ones that they like, and then they use it in their models. And that's, that's kind of how it develops. I, I feel like, um, like, like you didn't last year, you had people like Hao Tian who worked on Lava, which is take Lama and add Vision.[00:30:47] swyx: And then obviously actually I hired him and he added Vision to Grok. Now he's the Vision Grok guy. This year, I don't think there was any of those.[00:30:55] Alessio: What were the most popular, like, orals? Last year it was like the [00:31:00] Mixed Monarch, I think, was like the most attended. Yeah, uh, I need to look it up. Yeah, I mean, if nothing comes to mind, that's also kind of like an answer in a way.[00:31:10] Alessio: But I think last year there was a lot of interest in, like, furthering models and, like, different architectures and all of that.[00:31:16] swyx: I will say that I felt the orals, oral picks this year were not very good. Either that or maybe it's just a So that's the highlight of how I have changed in terms of how I view papers.[00:31:29] swyx: So like, in my estimation, two of the best papers in this year for datasets or data comp and refined web or fine web. These are two actually industrially used papers, not highlighted for a while. I think DCLM got the spotlight, FineWeb didn't even get the spotlight. So like, it's just that the picks were different.[00:31:48] swyx: But one thing that does get a lot of play that a lot of people are debating is the role that's scheduled. This is the schedule free optimizer paper from Meta from Aaron DeFazio. And this [00:32:00] year in the ML community, there's been a lot of chat about shampoo, soap, all the bathroom amenities for optimizing your learning rates.[00:32:08] swyx: And, uh, most people at the big labs are. Who I asked about this, um, say that it's cute, but it's not something that matters. I don't know, but it's something that was discussed and very, very popular. 4Wars[00:32:19] Alessio: of AI recap maybe, just quickly. Um, where do you want to start? Data?[00:32:26] swyx: So to remind people, this is the 4Wars piece that we did as one of our earlier recaps of this year.[00:32:31] swyx: And the belligerents are on the left, journalists, writers, artists, anyone who owns IP basically, New York Times, Stack Overflow, Reddit, Getty, Sarah Silverman, George RR Martin. Yeah, and I think this year we can add Scarlett Johansson to that side of the fence. So anyone suing, open the eye, basically. I actually wanted to get a snapshot of all the lawsuits.[00:32:52] swyx: I'm sure some lawyer can do it. That's the data quality war. On the right hand side, we have the synthetic data people, and I think we talked about Lumna's talk, you know, [00:33:00] really showing how much synthetic data has come along this year. I think there was a bit of a fight between scale. ai and the synthetic data community, because scale.[00:33:09] swyx: ai published a paper saying that synthetic data doesn't work. Surprise, surprise, scale. ai is the leading vendor of non synthetic data. 
Only[00:33:17] Alessio: cage free annotated data is useful.[00:33:21] swyx: So I think there's some debate going on there, but I don't think it's much debate anymore that at least synthetic data, for the reasons that are blessed in Luna's talk, Makes sense.[00:33:32] swyx: I don't know if you have any perspectives there.[00:33:34] Alessio: I think, again, going back to the reinforcement fine tuning, I think that will change a little bit how people think about it. I think today people mostly use synthetic data, yeah, for distillation and kind of like fine tuning a smaller model from like a larger model.[00:33:46] Alessio: I'm not super aware of how the frontier labs use it outside of like the rephrase, the web thing that Apple also did. But yeah, I think it'll be. Useful. I think like whether or not that gets us the big [00:34:00] next step, I think that's maybe like TBD, you know, I think people love talking about data because it's like a GPU poor, you know, I think, uh, synthetic data is like something that people can do, you know, so they feel more opinionated about it compared to, yeah, the optimizers stuff, which is like,[00:34:17] swyx: they don't[00:34:17] Alessio: really work[00:34:18] swyx: on.[00:34:18] swyx: I think that there is an angle to the reasoning synthetic data. So this year, we covered in the paper club, the star series of papers. So that's star, Q star, V star. It basically helps you to synthesize reasoning steps, or at least distill reasoning steps from a verifier. And if you look at the OpenAI RFT, API that they released, or that they announced, basically they're asking you to submit graders, or they choose from a preset list of graders.[00:34:49] swyx: Basically It feels like a way to create valid synthetic data for them to fine tune their reasoning paths on. Um, so I think that is another angle where it starts to make sense. And [00:35:00] so like, it's very funny that basically all the data quality wars between Let's say the music industry or like the newspaper publishing industry or the textbooks industry on the big labs.[00:35:11] swyx: It's all of the pre training era. And then like the new era, like the reasoning era, like nobody has any problem with all the reasoning, especially because it's all like sort of math and science oriented with, with very reasonable graders. I think the more interesting next step is how does it generalize beyond STEM?[00:35:27] swyx: We've been using O1 for And I would say like for summarization and creative writing and instruction following, I think it's underrated. I started using O1 in our intro songs before we killed the intro songs, but it's very good at writing lyrics. You know, I can actually say like, I think one of the O1 pro demos.[00:35:46] swyx: All of these things that Noam was showing was that, you know, you can write an entire paragraph or three paragraphs without using the letter A, right?[00:35:53] Creative Writing with AI[00:35:53] swyx: So like, like literally just anything instead of token, like not even token level, character level manipulation and [00:36:00] counting and instruction following. It's, uh, it's very, very strong.[00:36:02] swyx: And so no surprises when I ask it to rhyme, uh, and to, to create song lyrics, it's going to do that very much better than in previous models. 
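The STaR-style recipe being described (synthesize reasoning steps, keep only the ones a verifier or grader accepts, fine-tune on the survivors) fits in a few lines. This is a simplified reading of the public STaR idea, not OpenAI's actual RFT internals; the sampler and the exact-match grader are toy placeholders.

```python
# Simplified STaR-style loop: sample reasoning traces, keep only those whose final
# answer passes a grader/verifier, and use the survivors as synthetic fine-tuning
# data for the reasoning model. Sampler and grader below are toy placeholders.
import random
from typing import Callable, List, Tuple

def exact_match_grader(answer: str, reference: str) -> bool:
    """Toy grader: accept only an exact match on the final answer."""
    return answer.strip() == reference.strip()

def collect_traces(problem: str, reference: str,
                   sample_trace: Callable[[str], Tuple[str, str]],
                   n_samples: int = 16) -> List[str]:
    """Sample (reasoning, answer) pairs and keep the traces the grader accepts."""
    kept = []
    for _ in range(n_samples):
        reasoning, answer = sample_trace(problem)
        if exact_match_grader(answer, reference):
            kept.append(f"{problem}\n{reasoning}\nAnswer: {answer}")
    return kept  # these become supervised fine-tuning targets for the next round

if __name__ == "__main__":
    def sample_trace(problem: str) -> Tuple[str, str]:   # placeholder "model"
        guess = random.choice(["41", "42"])
        return f"Let me think: 6 * 7 = {guess}.", guess
    print(len(collect_traces("What is 6 * 7?", "42", sample_trace)))
```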
So I think it's underrated for creative writing.[00:36:11] Alessio: Yeah.[00:36:12] Legal and Ethical Issues in AI[00:36:12] Alessio: What do you think is the rationale that they're going to have in court when they don't show you the thinking traces of O1, but then they want us to, like, they're getting sued for using other publishers data, you know, but then on their end, they're like, well, you shouldn't be using my data to then train your model.[00:36:29] Alessio: So I'm curious to see how that kind of comes. Yeah, I mean, OPA has[00:36:32] swyx: many ways to publish, to punish people without bringing, taking them to court. Already banned ByteDance for distilling their, their info. And so anyone caught distilling the chain of thought will be just disallowed to continue on, on, on the API.[00:36:44] swyx: And it's fine. It's no big deal. Like, I don't even think that's an issue at all, just because the chain of thoughts are pretty well hidden. Like you have to work very, very hard to, to get it to leak. And then even when it leaks the chain of thought, you don't know if it's, if it's [00:37:00] The bigger concern is actually that there's not that much IP hiding behind it, that Cosign, which we talked about, we talked to him on Dev Day, can just fine tune 4.[00:37:13] swyx: 0 to beat 0. 1 Cloud SONET so far is beating O1 on coding tasks without, at least O1 preview, without being a reasoning model, same for Gemini Pro or Gemini 2. 0. So like, how much is reasoning important? How much of a moat is there in this, like, All of these are proprietary sort of training data that they've presumably accomplished.[00:37:34] swyx: Because even DeepSeek was able to do it. And they had, you know, two months notice to do this, to do R1. So, it's actually unclear how much moat there is. Obviously, you know, if you talk to the Strawberry team, they'll be like, yeah, I mean, we spent the last two years doing this. So, we don't know. And it's going to be Interesting because there'll be a lot of noise from people who say they have inference time compute and actually don't because they just have fancy chain of thought.[00:38:00][00:38:00] swyx: And then there's other people who actually do have very good chain of thought. And you will not see them on the same level as OpenAI because OpenAI has invested a lot in building up the mythology of their team. Um, which makes sense. Like the real answer is somewhere in between.[00:38:13] Alessio: Yeah, I think that's kind of like the main data war story developing.[00:38:18] The Data War: GPU Poor vs. GPU Rich[00:38:18] Alessio: GPU poor versus GPU rich. Yeah. Where do you think we are? I think there was, again, going back to like the small model thing, there was like a time in which the GPU poor were kind of like the rebel faction working on like these models that were like open and small and cheap. And I think today people don't really care as much about GPUs anymore.[00:38:37] Alessio: You also see it in the price of the GPUs. Like, you know, that market is kind of like plummeted because there's people don't want to be, they want to be GPU free. They don't even want to be poor. They just want to be, you know, completely without them. Yeah. How do you think about this war? You[00:38:52] swyx: can tell me about this, but like, I feel like the, the appetite for GPU rich startups, like the, you know, the, the funding plan is we will raise 60 million and [00:39:00] we'll give 50 of that to NVIDIA.[00:39:01] swyx: That is gone, right? 
Like, no one's, no one's pitching that. This was literally the plan, the exact plan of like, I can name like four or five startups, you know, this time last year. So yeah, GPU rich startups gone.[00:39:12] The Rise of GPU Ultra Rich[00:39:12] swyx: But I think like, The GPU ultra rich, the GPU ultra high net worth is still going. So, um, now we're, you know, we had Leopold's essay on the trillion dollar cluster.[00:39:23] swyx: We're not quite there yet. We have multiple labs, um, you know, XAI very famously, you know, Jensen Huang praising them for being. Best boy number one in spinning up 100, 000 GPU cluster in like 12 days or something. So likewise at Meta, likewise at OpenAI, likewise at the other labs as well. So like the GPU ultra rich are going to keep doing that because I think partially it's an article of faith now that you just need it.[00:39:46] swyx: Like you don't even know what it's going to, what you're going to use it for. You just, you just need it. And it makes sense that if, especially if we're going into. More researchy territory than we are. So let's say 2020 to 2023 was [00:40:00] let's scale big models territory because we had GPT 3 in 2020 and we were like, okay, we'll go from 1.[00:40:05] swyx: 75b to 1. 8b, 1. 8t. And that was GPT 3 to GPT 4. Okay, that's done. As far as everyone is concerned, Opus 3. 5 is not coming out, GPT 4. 5 is not coming out, and Gemini 2, we don't have Pro, whatever. We've hit that wall. Maybe I'll call it the 2 trillion perimeter wall. We're not going to 10 trillion. No one thinks it's a good idea, at least from training costs, from the amount of data, or at least the inference.[00:40:36] swyx: Would you pay 10x the price of GPT Probably not. Like, like you want something else that, that is at least more useful. So it makes sense that people are pivoting in terms of their inference paradigm.[00:40:47] Emerging Trends in AI Models[00:40:47] swyx: And so when it's more researchy, then you actually need more just general purpose compute to mess around with, uh, at the exact same time that production deployments of the old, the previous paradigm is still ramping up,[00:40:58] swyx: um,[00:40:58] swyx: uh, pretty aggressively.[00:40:59] swyx: So [00:41:00] it makes sense that the GPU rich are growing. We have now interviewed both together and fireworks and replicates. Uh, we haven't done any scale yet. But I think Amazon, maybe kind of a sleeper one, Amazon, in a sense of like they, at reInvent, I wasn't expecting them to do so well, but they are now a foundation model lab.[00:41:18] swyx: It's kind of interesting. Um, I think, uh, you know, David went over there and started just creating models.[00:41:25] Alessio: Yeah, I mean, that's the power of prepaid contracts. I think like a lot of AWS customers, you know, they do this big reserve instance contracts and now they got to use their money. That's why so many startups.[00:41:37] Alessio: Get bought through the AWS marketplace so they can kind of bundle them together and prefer pricing.[00:41:42] swyx: Okay, so maybe GPU super rich doing very well, GPU middle class dead, and then GPU[00:41:48] Alessio: poor. I mean, my thing is like, everybody should just be GPU rich. There shouldn't really be, even the GPU poorest, it's like, does it really make sense to be GPU poor?[00:41:57] Alessio: Like, if you're GPU poor, you should just use the [00:42:00] cloud. 
Yes, you know, and I think there might be a future once we kind of like figure out what the size and shape of these models is where like the tiny box and these things come to fruition where like you can be GPU poor at home. But I think today is like, why are you working so hard to like get these models to run on like very small clusters where it's like, It's so cheap to run them.[00:42:21] Alessio: Yeah, yeah,[00:42:22] swyx: yeah. I think mostly people think it's cool. People think it's a stepping stone to scaling up. So they aspire to be GPU rich one day and they're working on new methods. Like news research, like probably the most deep tech thing they've done this year is Distro or whatever the new name is.[00:42:38] swyx: There's a lot of interest in heterogeneous computing, distributed computing. I tend generally to de emphasize that historically, but it may be coming to a time where it is starting to be relevant. I don't know. You know, SF compute launched their compute marketplace this year, and like, who's really using that?[00:42:53] swyx: Like, it's a bunch of small clusters, disparate types of compute, and if you can make that [00:43:00] useful, then that will be very beneficial to the broader community, but maybe still not the source of frontier models. It's just going to be a second tier of compute that is unlocked for people, and that's fine. But yeah, I mean, I think this year, I would say a lot more on device, We are, I now have Apple intelligence on my phone.[00:43:19] swyx: Doesn't do anything apart from summarize my notifications. But still, not bad. Like, it's multi modal.[00:43:25] Alessio: Yeah, the notification summaries are so and so in my experience.[00:43:29] swyx: Yeah, but they add, they add juice to life. And then, um, Chrome Nano, uh, Gemini Nano is coming out in Chrome. Uh, they're still feature flagged, but you can, you can try it now if you, if you use the, uh, the alpha.[00:43:40] swyx: And so, like, I, I think, like, you know, We're getting the sort of GPU poor version of a lot of these things coming out, and I think it's like quite useful. Like Windows as well, rolling out RWKB in sort of every Windows department is super cool. And I think the last thing that I never put in this GPU poor war, that I think I should now, [00:44:00] is the number of startups that are GPU poor but still scaling very well, as sort of wrappers on top of either a foundation model lab, or GPU Cloud.[00:44:10] swyx: GPU Cloud, it would be Suno. Suno, Ramp has rated as one of the top ranked, fastest growing startups of the year. Um, I think the last public number is like zero to 20 million this year in ARR and Suno runs on Moto. So Suno itself is not GPU rich, but they're just doing the training on, on Moto, uh, who we've also talked to on, on the podcast.[00:44:31] swyx: The other one would be Bolt, straight cloud wrapper. And, and, um, Again, another, now they've announced 20 million ARR, which is another step up from our 8 million that we put on the title. So yeah, I mean, it's crazy that all these GPU pores are finding a way while the GPU riches are also finding a way. And then the only failures, I kind of call this the GPU smiling curve, where the edges do well, because you're either close to the machines, and you're like [00:45:00] number one on the machines, or you're like close to the customers, and you're number one on the customer side.[00:45:03] swyx: And the people who are in the middle. Inflection, um, character, didn't do that great. 
I think character did the best of all of them. Like, you have a note in here that we apparently said that character's price tag was[00:45:15] Alessio: 1B.[00:45:15] swyx: Did I say that?[00:45:16] Alessio: Yeah. You said Google should just buy them for 1B. I thought it was a crazy number.[00:45:20] Alessio: Then they paid 2. 7 billion. I mean, for like,[00:45:22] swyx: yeah.[00:45:22] Alessio: What do you pay for node? Like, I don't know what the game world was like. Maybe the starting price was 1B. I mean, whatever it was, it worked out for everybody involved.[00:45:31] The Multi-Modality War[00:45:31] Alessio: Multimodality war. And this one, we never had text to video in the first version, which now is the hottest.[00:45:37] swyx: Yeah, I would say it's a subset of image, but yes.[00:45:40] Alessio: Yeah, well, but I think at the time it wasn't really something people were doing, and now we had VO2 just came out yesterday. Uh, Sora was released last month, last week. I've not tried Sora, because the day that I tried, it wasn't, yeah. I[00:45:54] swyx: think it's generally available now, you can go to Sora.[00:45:56] swyx: com and try it. Yeah, they had[00:45:58] Alessio: the outage. Which I [00:46:00] think also played a part into it. Small things. Yeah. What's the other model that you posted today that was on Replicate? Video or OneLive?[00:46:08] swyx: Yeah. Very, very nondescript name, but it is from Minimax, which I think is a Chinese lab. The Chinese labs do surprisingly well at the video models.[00:46:20] swyx: I'm not sure it's actually Chinese. I don't know. Hold me up to that. Yep. China. It's good. Yeah, the Chinese love video. What can I say? They have a lot of training data for video. Or a more relaxed regulatory environment.[00:46:37] Alessio: Uh, well, sure, in some way. Yeah, I don't think there's much else there. I think like, you know, on the image side, I think it's still open.[00:46:45] Alessio: Yeah, I mean,[00:46:46] swyx: 11labs is now a unicorn. So basically, what is multi modality war? Multi modality war is, do you specialize in a single modality, right? Or do you have GodModel that does all the modalities? So this is [00:47:00] definitely still going, in a sense of 11 labs, you know, now Unicorn, PicoLabs doing well, they launched Pico 2.[00:47:06] swyx: 0 recently, HeyGen, I think has reached 100 million ARR, Assembly, I don't know, but they have billboards all over the place, so I assume they're doing very, very well. So these are all specialist models, specialist models and specialist startups. And then there's the big labs who are doing the sort of all in one play.[00:47:24] swyx: And then here I would highlight Gemini 2 for having native image output. Have you seen the demos? Um, yeah, it's, it's hard to keep up. Literally they launched this last week and a shout out to Paige Bailey, who came to the Latent Space event to demo on the day of launch. And she wasn't prepared. She was just like, I'm just going to show you.[00:47:43] swyx: So they have voice. They have, you know, obviously image input, and then they obviously can code gen and all that. But the new one that OpenAI and Meta both have but they haven't launched yet is image output. So you can literally, um, I think their demo video was that you put in an image of a [00:48:00] car, and you ask for minor modifications to that car.[00:48:02] swyx: They can generate you that modification exactly as you asked. 
So there's no need for the stable diffusion or comfy UI workflow of like mask here and then like infill there in paint there and all that, all that stuff. This is small model nonsense. Big model people are like, huh, we got you in as everything in the transformer.[00:48:21] swyx: This is the multimodality war, which is, do you, do you bet on the God model or do you string together a whole bunch of, uh, Small models like a, like a chump. Yeah,[00:48:29] Alessio: I don't know, man. Yeah, that would be interesting. I mean, obviously I use Midjourney for all of our thumbnails. Um, they've been doing a ton on the product, I would say.[00:48:38] Alessio: They launched a new Midjourney editor thing. They've been doing a ton. Because I think, yeah, the motto is kind of like, Maybe, you know, people say black forest, the black forest models are better than mid journey on a pixel by pixel basis. But I think when you put it, put it together, have you tried[00:48:53] swyx: the same problems on black forest?[00:48:55] Alessio: Yes. But the problem is just like, you know, on black forest, it generates one image. And then it's like, you got to [00:49:00] regenerate. You don't have all these like UI things. Like what I do, no, but it's like time issue, you know, it's like a mid[00:49:06] swyx: journey. Call the API four times.[00:49:08] Alessio: No, but then there's no like variate.[00:49:10] Alessio: Like the good thing about mid journey is like, you just go in there and you're cooking. There's a lot of stuff that just makes it really easy. And I think people underestimate that. Like, it's not really a skill issue, because I'm paying mid journey, so it's a Black Forest skill issue, because I'm not paying them, you know?[00:49:24] Alessio: Yeah,[00:49:25] swyx: so, okay, so, uh, this is a UX thing, right? Like, you, you, you understand that, at least, we think that Black Forest should be able to do all that stuff. I will also shout out, ReCraft has come out, uh, on top of the image arena that, uh, artificial analysis has done, has apparently, uh, Flux's place. Is this still true?[00:49:41] swyx: So, Artificial Analysis is now a company. I highlighted them I think in one of the early AI Newses of the year. And they have launched a whole bunch of arenas. So, they're trying to take on LM Arena, Anastasios and crew. And they have an image arena. Oh yeah, Recraft v3 is now beating Flux 1. 1. Which is very surprising [00:50:00] because Flux And Black Forest Labs are the old stable diffusion crew who left stability after, um, the management issues.[00:50:06] swyx: So Recurve has come from nowhere to be the top image model. Uh, very, very strange. I would also highlight that Grok has now launched Aurora, which is, it's very interesting dynamics between Grok and Black Forest Labs because Grok's images were originally launched, uh, in partnership with Black Forest Labs as a, as a thin wrapper.[00:50:24] swyx: And then Grok was like, no, we'll make our own. And so they've made their own. I don't know, there are no APIs or benchmarks about it. They just announced it. So yeah, that's the multi modality war. I would say that so far, the small model, the dedicated model people are winning, because they are just focused on their tasks.[00:50:42] swyx: But the big model, People are always catching up. And the moment I saw the Gemini 2 demo of image editing, where I can put in an image and just request it and it does, that's how AI should work. Not like a whole bunch of complicated steps. So it really is something. 
And I think one frontier that we haven't [00:51:00] seen this year, like obviously video has done very well, and it will continue to grow.[00:51:03] swyx: You know, we only have Sora Turbo today, but at some point we'll get full Sora. Oh, at least the Hollywood Labs will get Fulsora. We haven't seen video to audio, or video synced to audio. And so the researchers that I talked to are already starting to talk about that as the next frontier. But there's still maybe like five more years of video left to actually be Soda.[00:51:23] swyx: I would say that Gemini's approach Compared to OpenAI, Gemini seems, or DeepMind's approach to video seems a lot more fully fledged than OpenAI. Because if you look at the ICML recap that I published that so far nobody has listened to, um, that people have listened to it. It's just a different, definitely different audience.[00:51:43] swyx: It's only seven hours long. Why are people not listening? It's like everything in Uh, so, so DeepMind has, is working on Genie. They also launched Genie 2 and VideoPoet. So, like, they have maybe four years advantage on world modeling that OpenAI does not have. Because OpenAI basically only started [00:52:00] Diffusion Transformers last year, you know, when they hired, uh, Bill Peebles.[00:52:03] swyx: So, DeepMind has, has a bit of advantage here, I would say, in, in, in showing, like, the reason that VO2, while one, They cherry pick their videos. So obviously it looks better than Sora, but the reason I would believe that VO2, uh, when it's fully launched will do very well is because they have all this background work in video that they've done for years.[00:52:22] swyx: Like, like last year's NeurIPS, I already was interviewing some of their video people. I forget their model name, but for, for people who are dedicated fans, they can go to NeurIPS 2023 and see, see that paper.[00:52:32] Alessio: And then last but not least, the LLMOS. We renamed it to Ragops, formerly known as[00:52:39] swyx: Ragops War. I put the latest chart on the Braintrust episode.[00:52:43] swyx: I think I'm going to separate these essays from the episode notes. So the reason I used to do that, by the way, is because I wanted to show up on Hacker News. I wanted the podcast to show up on Hacker News. So I always put an essay inside of there because Hacker News people like to read and not listen.[00:52:58] Alessio: So episode essays,[00:52:59] swyx: I remember [00:53:00] purchasing them separately. You say Lanchain Llama Index is still growing.[00:53:03] Alessio: Yeah, so I looked at the PyPy stats, you know. I don't care about stars. On PyPy you see Do you want to share your screen? Yes. I prefer to look at actual downloads, not at stars on GitHub. So if you look at, you know, Lanchain still growing.[00:53:20] Alessio: These are the last six months. Llama Index still growing. What I've basically seen is like things that, One, obviously these things have A commercial product. So there's like people buying this and sticking with it versus kind of hopping in between things versus, you know, for example, crew AI, not really growing as much.[00:53:38] Alessio: The stars are growing. If you look on GitHub, like the stars are growing, but kind of like the usage is kind of like flat. In the last six months, have they done some[00:53:4

Leaders In Tech
Never Turn Down an Opportunity: Leadership Lessons from Reshma Ombase

Leaders In Tech

Play Episode Listen Later Dec 27, 2024 38:25


Welcome to the latest episode of the Leaders in Tech podcast! In this episode, host David Manel interviews Reshma Ombase, a Senior Director in Technology Product Management at Optum, part of UnitedHealth Group. Reshma shares her journey of resilience, adaptability, and leadership—an inspiring story for anyone striving to make an impact in the tech industry. Her story is one of perseverance, seizing opportunities, and breaking barriers. Here's a breakdown of the key takeaways from the conversation.Here's more about Reshma Ombase:“I'm an experienced leader and strategist with a proven track record of successfully managing teams and product solutions that have delivered exceptional results. I currently oversee the Directed Spend Product portfolio for the Optum Financial Consumer Payments business and have the privilege to lead a group of talented people, who strive to build consumer centric products, capabilities, and experiences.I'm expert in assessing market opportunities, competitor analysis, drafting product vision/roadmap, system analysis, setting up AGILE development processes, product management, portfolio management, business process engineering, and software development life cycle management. Including system design, development, testing and implementation of software applications. Adept at building long range plans, tracking plan to budget, delivery, and measuring ROI/NPS.I've worn many hats in my career -- programmer, business analyst, system analyst, SME, agile product owner, product manager, portfolio manager, team lead, and business technology platform lead! Which has helped develop my ability to manage a vast and complex business/product/technology portfolio.To me commitment is crucial and execution is key in everything I do!”Company description: Optum is a health services and innovation company on a mission to help people live healthier lives and to help make the health system work better for everyone.

TheOccultRejects
Rosicrucian Adept Alios Mailander

TheOccultRejects

Play Episode Listen Later Dec 18, 2024 72:27


Links For The Occult Rejects and The Spiritual Gangsters: https://linktr.ee/occultrejectsandfriends
Occult Research Institute: https://www.occultresearchinstitute.org/
Links For The Spiritual Gangsters: https://linktr.ee/thespiritualgangsterspodcast
Cash App: https://cash.app/$theoccultrejects
Venmo: @TheOccultRejects
Buy Me A Coffee: buymeacoffee.com/TheOccultRejects
Patreon: https://www.patreon.com/TheOccultRejects
Abcedarian Ma'at: https://linktr.ee/abcedarian
https://www.patreon.com/posts/61141974

OnBoard!
EP 54. In-depth conversations with top AI open-source projects: the open-source LLM ecosystem, Agents, and China's contribution

OnBoard!

Play Episode Listen Later Dec 16, 2024 199:06


When it comes to the development of generative AI, open source is without question one of the most important topics. The guests this time cover the most noteworthy players in the open-source LLM space, a genuinely golden lineup!

First, a quick update: OnBoard!'s first offline listener meetup, held in Beijing last Sunday, exceeded all expectations! More than 250 people signed up within 4 days of registration opening. From 9 am to 3 pm on Sunday, across robotics, AI, startup investing, and software going global, the 100-person venue stayed nearly full to the very end. Thank you all so much for the support!

Hello World, who is OnBoard!?

Back to this episode: we take a deep dive into the open-source ecosystem around large models. Over the past year-plus of breakneck generative AI progress, open source has been impossible to ignore. The rapid advance of open models, from Meta's Llama 3 to Mistral's latest releases, and how quickly they are catching up to closed models like GPT4, is not only impressive, it is also accelerating real-world AI products. The ecosystem around large models, from inference acceleration to developer tools to agents, has already produced leaders like Langchain, yet all of this still looks like just the tip of the iceberg.

It is especially worth noting that as models led by Chinese teams, such as Alibaba's Qwen series, Deepseek, and Yi, gain visibility on the international stage, we cannot help but ask: beyond imitation and catching up, is there more in China's large-model work that deserves our attention and pride?

Today, Monica is lucky to host several highly representative guests: an open-source veteran from Huggingface; the open-source lead of Tongyi Qianwen (Qwen), who is also a core member of OpenDevin, the most closely watched project in the Agent space; and the lead of vLLM, one of the most internationally influential open-source projects. Truly front-line perspectives from every corner of the open-source LLM ecosystem!

The guests are such treasure troves that the conversation stretched across every aspect of large models and ran nearly 4 hours! In the first half we discussed a lot of infra innovation, as well as the technology and ecosystem behind the currently red-hot software-development agents represented by OpenDevin. In the second half we returned to the theme of open-source LLMs and discussed:

How might the open versus closed ecosystem of foundation models evolve?
How does open-source model commercialization compare with the open-source business models of the big-data era, such as Databricks?
How do you build an open-source project with international influence?

Guest introductions

Tiezhen Wang, engineer at Huggingface. He is a bridge between China's and the world's open-source AI ecosystems, from the Google TensorFlow era to being an early Huggingface employee, with deep insight into both.

Junyang Lin, open-source lead for Tongyi Qianwen (Qwen). As Qwen's main spokesperson in the global open-source community, he has not only witnessed the evolution of open source, but is also a core team member of OpenDevin, the much-watched open-source Agent project.

Zhuohan Li, PhD at UC Berkeley, leads a project that needs no introduction: vLLM, the LLM inference framework that has become an industry standard. His lab, Sky Lab, is known as a cradle of open-source infrastructure, from Databricks (valued in the tens of billions of dollars) to Anyscale (the company commercializing the open-source compute framework Ray). He has also been deeply involved in internationally known open-source projects such as Chat Arena and Vicuna, bringing front-line international experience and plenty of technically ambitious substance on LLM infra and the surrounding ecosystem.

OnBoard! host: Monica, a USD-fund VC investor, formerly on AWS's Silicon Valley team and at an AI startup; she runs the WeChat public account M小姐研习录 (ID: MissMStudy) | Jike: 莫妮卡同学

We also covered core topics such as data and evals; it is genuinely comprehensive, yet keeps the depth of front-line practitioners. So we decided not to split it into two parts; use the timestamps in the show notes to jump straight to whatever interests you (though I think every topic is great!). After all these introductions, one more note: the open-source community Huggingface and the open-source projects discussed in the episode, including Alibaba's Qwen, OpenDevin, Deepseek, 01.AI's Yi, and vLLM, did not pay for any advertising; it is all heartfelt sharing from the guests, with no ads throughout!

Of course, if you or other AI companies would like to sponsor this labor-of-love podcast, you are more than welcome!

The three-hour hardcore marathon starts now, enjoy!

What we talked about
05:28 Guest self-introductions; interesting open-source AI projects
18:37 How vLLM got started and became a top global project; why do we need an LLM inference framework?
30:24 Agent frameworks: what inference challenges do agents like OpenDevin create?
40:37 What new tools are still needed to build a good coding agent? What will multimodality change?
56:16 What kind of agent framework do we need? Why is the open-source community best placed to build it? Will frameworks converge?
67:46 What is Crew AI? How should we think about multi-agent architectures?
73:11 Drawing on the history of front-end frameworks, how does a framework become an industry standard?
77:54 The state of open-source LLMs on Huggingface: key developments over the past year-plus, the different ways to open-source, and how to gauge an open model's popularity
94:27 How to understand the open-model ecosystems around different architectures, and how Qwen is building a better open-source ecosystem through architecture evolution
104:59 What innovations are coming from China's open-source LLM projects? How are model architectures changing?
112:17 Why might new model architectures create new commercialization opportunities? What can we learn from earlier open-source commercialization?
119:22 Have we hit the ceiling of current LLM architectures? What would a new architecture look like?
128:03 What did Zhuohan learn from working on Vicuna, one of the earliest open-source LLMs? How should academia and industry divide the work in the LLM ecosystem?
140:48 What noteworthy progress is happening in datasets for large models?
149:42 Why did Mistral take off so fast? What lessons transfer to building a first-class international open-source project? What has vLLM learned, in both principles and tactics?
166:13 How did Chatbot Arena start? Why are model evals so important? What challenges and advances lie ahead?
180:49 How is Zhuohan thinking about commercializing vLLM? How much room is left for inference costs to fall?
188:17 Quick-fire: over the past year, what in generative AI exceeded or fell short of expectations? What is worth looking forward to next?
Companies and key terms we mentioned
Qwen, Qwen-2
OpenDevin: opendevin.github.io
vLLM: github.com
Yi (Github), 01.AI (零一万物)
Chatbot Arena: huggingface.co
AutoGPT: github.com
crew AI: www.crewai.com
autoAWQ: github.com
LLM.c: github.com
Flash attention: github.com
Continuous batching: a serving technique that batches requests at the iteration level, adding and removing sequences from the running batch on the fly to improve throughput and GPU utilization.
KV cache: a key-value cache; a structure that stores the attention keys and values already computed for earlier tokens so they can be reused instead of recomputed, speeding up autoregressive decoding.
Page attention (PagedAttention): the attention implementation behind vLLM, which stores the KV cache in fixed-size blocks (pages), similar to virtual memory, to reduce fragmentation and allow memory sharing across requests.
Quantization: representing weights (and sometimes activations) with fewer bits to shrink model size and speed up inference.
Direct Preference Optimization (DPO): Your Language Model is Secretly a Reward Model
Google Gemini: deepmind.google
Adept: www.adept.ai
MetaGPT: github.com
Dolphin: an open-source, uncensored, and commercially licensed dataset and series of instruct-tuned language models based on Microsoft's Orca paper
Common crawl: commoncrawl.org
Tiezhen's talk: Booming Open Source Chinese-Speaking LLMs: A Closer Look, Slides
"One year of Tongyi Qianwen (Qwen): choices and reflections on the open-source fast lane" (ModelScope in-depth interview)
"Alibaba's Junyang Lin: today's large models still fall short for many users, and building multimodal Agents is the key" (China AIGC Industry Summit)

Follow M小姐's WeChat official account for more on US and China software, AI, and venture investing: M小姐研习录 (ID: MissMStudy). If you enjoy OnBoard!, you can also tip us and buy us a coffee! If you listen on Apple Podcasts, please leave a five-star review; it matters a lot to us. Finally, come join the OnBoard! listener group to meet other great listeners; we also organize offline themed meetups, open live recordings, guest interactions, and other new experiments. Add one of our assistants on WeChat, onboard666 or Nine_tunes, and they will add you to the group. Hope to see you there!
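Since vLLM is discussed at length in this episode, here is a minimal sketch of what using it looks like in practice via its offline batched-inference interface. The model name is only an illustrative assumption; substitute any checkpoint you have access to, such as a Qwen release.

```python
# Minimal offline inference with vLLM; PagedAttention and continuous batching
# are handled internally by the engine.
from vllm import LLM, SamplingParams

prompts = [
    "Explain continuous batching in one sentence.",
    "Why does a KV cache speed up autoregressive decoding?",
]
sampling_params = SamplingParams(temperature=0.7, max_tokens=128)

llm = LLM(model="Qwen/Qwen2-7B-Instruct")  # assumed model; any supported checkpoint works
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt)
    print(output.outputs[0].text)
```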

For the Love of Yoga with Nish the Fish
How To Become A Bhairavi, A Tantrik Adept

For the Love of Yoga with Nish the Fish

Play Episode Listen Later Dec 14, 2024 92:28


On this auspicious Mrgaśīrsha Pūrnimā (full moon) to which is ascribed Bhairavi Jayanti, Annapurna Jayanti and also Dattatreya Jayanti, we decide to say a few words about just who Mā Bhairavi might be. She is one of the most mysterious of the mahāvidyās (the ten terrifying forms of Mā). In this video, referring to the meditation mantra below, I make the case that Bhairavi is a euhemerized Tantrik adept! We of course discuss Bhairavi Brāhmani, Sri Ramakrishna's Tantrik guru, and we spend the first half of the talk discussing what makes Tantra, Tantra, in order to clearly indicate the path upon which we ourselves must trod to become a Bhairavi/Bhairava!

उद्यद्भानुसहस्रकान्तिमरुणक्षौमां शिरोमालिकांरक्तालिप्तपयोधरां जपवटीं विद्यामभीतिं वरम् ।हस्ताब्जैदधतीं त्रिनेत्रविलसद्रक्तारविन्दश्रियंदेवीं बद्धहिमांशुरक्तमुकुटां वन्दे समन्दस्मिताम् ॥

udyad-bhānu-sahasra-kāntim-aruṇa-kṣaumāṃ śiro-mālikāṃ raktā-lipta-payodharāṃ japa-vaṭīṃ vidyām-abhītiṃ varam . hastābjaidadhatīṃ trinetra-vilasad-raktāravinda-śriyaṃ devīṃ baddha-himāṃśu-rakta-mukuṭāṃ vande samandasmitām

Radiant like the splendour of a thousand suns, clad in red garments, garlanded in heads,
Breasts smeared with blood, holding a rosary and a book, assuring fearlessness and granting boons
With her lotus-like hands, Her third eye shining with the beauty of blood-red lotus flowers,
She is the Goddess who wears a red crown in which is tucked the moon - I worship Her who is smiling gently!

For more detailed instructions for how to perform Kālī pūjā, watch this playlist: https://www.patreon.com/collection/233799
Lectures happen live every Monday at 7pm PST and Friday 10am PST and again Friday at 6pm PST. Use this link and I will see you there: https://www.zoom.us/j/7028380815
For more videos, guided meditations and instruction, and for access to our lecture library, visit me at: https://www.patreon.com/yogawithnish
To get in on the discussion and access various spiritual materials, join our Discord here: https://discord.gg/U8zKP8yMrM
Support the show

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
The new Claude 3.5 Sonnet, Computer Use, and Building SOTA Agents — with Erik Schluntz, Anthropic

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Nov 28, 2024 71:10


We have announced our first speaker, friend of the show Dylan Patel, and topic slates for Latent Space LIVE! at NeurIPS. Sign up for IRL/Livestream and to debate! We are still taking questions for our next big recap episode! Submit questions and messages on Speakpipe here for a chance to appear on the show!

The vibe shift we observed in July - in favor of Claude 3.5 Sonnet, first introduced in June — has been remarkably long lived and persistent, surviving multiple subsequent updates of 4o, o1 and Gemini versions, for Anthropic's Claude to end 2024 as the preferred model for AI Engineers and even being the exclusive choice for new code agents like bolt.new (our next guest on the pod!), which unlocked so much performance from Claude Sonnet that it went from $0 to $4m ARR in 4 weeks when it launched last month.

Anthropic has now raised an additional $4b from Amazon and made an incredibly well received update of Claude 3.5 Sonnet (and Haiku), making significant improvements in performance over its predecessors:

Solving SWE-Bench

As part of the October Sonnet release, Anthropic teased a blink-and-you'll-miss-it result:

The updated Claude 3.5 Sonnet shows wide-ranging improvements on industry benchmarks, with particularly strong gains in agentic coding and tool use tasks. On coding, it improves performance on SWE-bench Verified from 33.4% to 49.0%, scoring higher than all publicly available models—including reasoning models like OpenAI o1-preview and specialized systems designed for agentic coding. It also improves performance on TAU-bench, an agentic tool use task, from 62.6% to 69.2% in the retail domain, and from 36.0% to 46.0% in the more challenging airline domain. The new Claude 3.5 Sonnet offers these advancements at the same price and speed as its predecessor.

This was followed up by a blogpost a week later from today's guest, Erik Schluntz, the engineer who implemented and scored this SOTA result using a simple, non-overengineered version of the SWE-Agent framework (you can see the submissions here). We have previously covered the SWE-Bench story extensively:
* Speaking with SWEBench/SWEAgent authors at ICLR
* Speaking with Cosine Genie, the previous SOTA (43.8%) on SWEBench Verified (with brief update at DevDay 2024)
* Speaking with Shunyu Yao on SWEBench and the ReAct paradigm driving SWE-Agent

One of the notable inclusions in this blogpost are the tools that Erik decided to give Claude, e.g. the "Edit Tool". The tools teased in the SWEBench submission/blogpost were then polished up and released with Computer Use… And you can also see even more computer use tools given in the new Model Context Protocol servers:

Claude Computer Use

Because it is one of the best received AI releases of the year, we recommend watching the 2 minute Computer Use intro (and related demos) in its entirety. Erik also worked on Claude's function calling, tool use, and computer use APIs, so we discuss that in the episode.

Erik [00:53:39]: With computer use, just give the thing a browser that's logged into what you want to integrate with, and it's going to work immediately. And I see that reduction in friction as being incredibly exciting. Imagine a customer support team where, okay, hey, you got this customer support bot, but you need to go integrate it with all these things. And you don't have any engineers on your customer support team. 
But if you can just give the thing a browser that's logged into your systems that you need it to have access to, now, suddenly, in one day, you could be up and rolling with a fully integrated customer service bot that could go do all the actions you care about. So I think that's the most exciting thing for me about computer use, is reducing that friction of integrations to almost zero.

As you'll see, this is very top of mind for Erik as a former Robotics founder whose company basically used robots to interface with human physical systems like elevators.

Full Video episode

Please like and subscribe!

Show Notes
* Erik Schluntz
* "Raising the bar on SWE-Bench Verified"
* Cobalt Robotics
* SWE-Bench
* SWE-Bench Verified
* Human Eval & other benchmarks
* Anthropic Workbench
* Aider
* Cursor
* Fireworks AI
* E2B
* Amanda Askell
* Toyota Research
* Physical Intelligence (Pi)
* Chelsea Finn
* Josh Albrecht
* Eric Jang
* 1X
* Dust
* Cosine Episode
* Bolt
* Adept Episode
* TauBench
* LMSys Episode

Timestamps
* [00:00:00] Introductions
* [00:03:39] What is SWE-Bench?
* [00:12:22] SWE-Bench vs HumanEval vs others
* [00:15:21] SWE-Agent architecture and runtime
* [00:21:18] Do you need code indexing?
* [00:24:50] Giving the agent tools
* [00:27:47] Sandboxing for coding agents
* [00:29:16] Why not write tests?
* [00:30:31] Redesigning engineering tools for LLMs
* [00:35:53] Multi-agent systems
* [00:37:52] Why XML so good?
* [00:42:57] Thoughts on agent frameworks
* [00:45:12] How many turns can an agent do?
* [00:47:12] Using multiple model types
* [00:51:40] Computer use and agent use cases
* [00:59:04] State of AI robotics
* [01:04:24] Robotics in manufacturing
* [01:05:01] Hardware challenges in robotics
* [01:09:21] Is self-driving a good business?

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Decibel Partners. And today we're in the new studio with my usual co-host, Shawn from Smol AI.Swyx [00:00:14]: Hey, and today we're very blessed to have Erik Schluntz from Anthropic with us. Welcome.Erik [00:00:19]: Hi, thanks very much. I'm Erik Schluntz. I'm a member of technical staff at Anthropic, working on tool use, computer use, and Swebench.Swyx [00:00:27]: Yeah. Well, how did you get into just the whole AI journey? I think you spent some time at SpaceX as well? Yeah. And robotics. Yeah. There's a lot of overlap between like the robotics people and the AI people, and maybe like there's some interlap or interest between language models for robots right now. Maybe just a little bit of background on how you got to where you are. Yeah, sure.Erik [00:00:50]: I was at SpaceX a long time ago, but before joining Anthropic, I was the CTO and co-founder of Cobalt Robotics. We built security and inspection robots. These are sort of five foot tall robots that would patrol through an office building or a warehouse looking for anything out of the ordinary. Very friendly, no tasers or anything. We would just sort of call a remote operator if we saw anything. We have about 100 of those out in the world, and had a team of about 100. We actually got acquired about six months ago, but I had left Cobalt about a year ago now, because I was starting to get a lot more excited about AI. I had been writing a lot of my code with things like Copilot, and I was like, wow, this is actually really cool. If you had told me 10 years ago that AI would be writing a lot of my code, I would say, hey, I think that's AGI. 
And so I kind of realized that we had passed this level, like, wow, this is actually really useful for engineering work. That got me a lot more excited about AI and learning about large language models. So I ended up taking a sabbatical and then doing a lot of reading and research myself and decided, hey, I want to go be at the core of this and joined Anthropic.Alessio [00:01:53]: And why Anthropic? Did you consider other labs? Did you consider maybe some of the robotics companies?Erik [00:02:00]: So I think at the time I was a little burnt out of robotics, and so also for the rest of this, any sort of negative things I say about robotics or hardware is coming from a place of burnout, and I reserve my right to change my opinion in a few years. Yeah, I looked around, but ultimately I knew a lot of people that I really trusted and I thought were incredibly smart at Anthropic, and I think that was the big deciding factor to come there. I was like, hey, this team's amazing. They're not just brilliant, but sort of like the most nice and kind people that I know, and so I just felt like I could be a really good culture fit. And ultimately, I do care a lot about AI safety and making sure that I don't want to build something that's used for bad purposes, and I felt like the best chance of that was joining Anthropic.Alessio [00:02:39]: And from the outside, these labs kind of look like huge organizations that have these obscureSwyx [00:02:44]: ways to organize.Alessio [00:02:45]: How did you get, you joined Anthropic, did you already know you were going to work on of the stuff you publish or you kind of join and then you figure out where you land? I think people are always curious to learn more.Erik [00:02:57]: Yeah, I've been very happy that Anthropic is very bottoms up and sort of very sort of receptive to whatever your interests are. And so I joined sort of being very transparent of like, hey, I'm most excited about code generation and AI that can actually go out and sort of touch the world or sort of help people build things. And, you know, those weren't my initial initial projects. I also came in and said, hey, I want to do the most valuable possible thing for this company and help Anthropic succeed. And, you know, like, let me find the balance of those. So I was working on lots of things at the beginning, you know, function calling, tool use. And then sort of as it became more and more relevant, I was like, oh, hey, like, let's it's time to go work on encoding agents and sort of started looking at SWE-Bench as sort of a really good benchmark for that.Swyx [00:03:39]: So let's get right into SWE-Bench. That's one of the many claims to fame. I feel like there's just been a series of releases related with Cloud 3.5 Sonnet around about two or three months ago, 3.5 Sonnet came out and it was it was a step ahead in terms of a lot of people immediately fell in love with it for coding. And then last month you released a new updated version of Cloud Sonnet. We're not going to talk about the training for that because that's still confidential. But I think Anthropic's done a really good job, like applying the model to different things. So you took the lead on SWE-Bench, but then also we're going to talk a little bit about computer use later on. So maybe just give us a context about why you looked at SWE-Bench Verified and you actually came up with a whole system for building agents that would maximally use the model well. Yeah.Erik [00:04:28]: So I'm on a sub team called Product Research. 
And basically the idea of product research is to really understand what end customers care about and want in the models and then work to try to make that happen. So we're not focused on sort of these more abstract general benchmarks like math problems or MMLU, but we really care about finding the things that are really valuable and making sure the models are great at those. And so because I've been interested in coding agents, I knew that this would be a really valuable thing. And I knew there were a lot of startups and our customers trying to build coding agents with our models. And so I said, hey, this is going to be a really good benchmark to be able to measure that and do well on it. And I wasn't the first person at Anthropic to find SWE-Bench, and there are lots of people that already knew about it and had done some internal efforts on it. It fell to me to sort of both implement the benchmark, which is very tricky, and then also to sort of make sure we had an agent and basically like a reference agent, maybe I'd call it, that could do very well on it. Ultimately, we want to provide how we implemented that reference agent so that people can build their own agents on top of our system and get sort of the most out of it as possible. So with this blog post we released on SWE-Bench, we released the exact tools and the prompt that we gave the model to be able to do well.Swyx [00:05:46]: For people who don't know, who maybe haven't dived into SWE-Bench, I think the general perception is they're like tasks that a software engineer could do. I feel like that's an inaccurate description because it is basically, one, it's a subset of like 12 repos. It's everything they could find that every issue with like a matching commit that could be tested. So that's not every commit. And then SWE-Bench verified is further manually filtered by OpenAI. Is that an accurate description and anything you'd change about that? Yes.Erik [00:06:14]: SWE-Bench is, it certainly is a subset of all tasks. It's first of all, it's only Python repos, so already fairly limited there. And it's just 12 of these popular open source repos. And yes, it's only ones where there were tests that passed at the beginning and also new tests that were introduced that test the new feature that's added. So it is, I think, a very limited subset of real engineering tasks. But I think it's also very valuable because even though it's a subset, it is true engineering tasks. And I think a lot of other benchmarks are really kind of these much more artificial setups of even if they're related to coding, they're more like coding interview style questions or puzzles that I think are very different from day-to-day what you end up doing. I don't know how frequently you all get to use recursion in your day-to-day job, but whenever I do, it's like a treat. And I think it's almost comical, and a lot of people joke about this in the industry, is how different interview questions are.Swyx [00:07:13]: Dynamic programming. Yeah, exactly.Erik [00:07:15]: Like, you code. From the day-to-day job. But I think one of the most interesting things about SWE-Bench is that all these other benchmarks are usually just isolated puzzles, and you're starting from scratch. Whereas SWE-Bench, you're starting in the context of an entire repository. And so it adds this entirely new dimension to the problem of finding the relevant files. And this is a huge part of real engineering, is it's actually pretty rare that you're starting something totally greenfield. 
You need to go and figure out where in a codebase you're going to make a change and understand how your work is going to interact with the rest of the systems. And I think SWE-Bench does a really good job of presenting that problem.Alessio [00:07:51]: Why do we still use human eval? It's like 92%, I think. I don't even know if you can actually get to 100% because some of the data is not actuallySwyx [00:07:59]: solvable.Alessio [00:08:00]: Do you see benchmarks like that, they should just get sunsetted? Because when you look at the model releases, it's like, oh, it's like 92% instead of like 89%, 90% on human eval versus, you know, SWE-Bench verified is you have 49%, right? Which is like, before 45% was state of the art, but maybe like six months ago it was like 30%, something like that. So is that a benchmark that you think is going to replace human eval, or do you think they're just going to run in parallel?Erik [00:08:27]: I think there's still need for sort of many different varied evals. Like sometimes you do really care about just sort of greenfield code generation. And so I don't think that everything needs to go to sort of an agentic setup.Swyx [00:08:39]: It would be very expensive to implement.Erik [00:08:41]: The other thing I was going to say is that SWE-Bench is certainly hard to implement and expensive to run because each task, you have to parse, you know, a lot of the repo to understand where to put your code. And a lot of times you take many tries of writing code, running it, editing it. It can use a lot of tokens compared to something like human eval. So I think there's definitely a space for these more traditional coding evals that are sort of easy to implement, quick to run, and do get you some signal. Maybe hopefully there's just sort of harder versions of human eval that get created.Alessio [00:09:14]: How do we get SWE-Bench verified to 92%? Do you think that's something where it's like line of sight to it, or it's like, you know, we need a whole lot of things to go right? Yeah, yeah.Erik [00:09:23]: And actually, maybe I'll start with SWE-Bench versus SWE-Bench verified, which is I think something I missed earlier. So SWE-Bench is, as we described, this big set of tasks that were scraped.Swyx [00:09:33]: Like 12,000 or something?Erik [00:09:34]: Yeah, I think it's 2,000 in the final set. But a lot of those, even though a human did them, they're actually impossible given the information that comes with the task. The most classic example of this is the test looks for a very specific error string. You know, like assert message equals error, something, something, something. And unless you know that's exactly what you're looking for, there's no way the model is going to write that exact same error message, and so the tests are going to fail. So SWE-Bench verified was actually made in partnership with OpenAI, and they hired humans to go review all these tasks and pick out a subset to try to remove any obstacle like this that would make the tasks impossible. So in theory, all of these tasks should be fully doable by the model. And they also had humans grade how difficult they thought the problems would be. Between less than 15 minutes, I think 15 minutes to an hour, an hour to four hours, and greater than four hours. So that's kind of this interesting sort of how big the problem is as well. To get to SWE-Bench verified to 90%, actually, maybe I'll also start off with some of the remaining failures that I see when running our model on SWE-Bench. 
I'd say the biggest cases are the model sort of operates at the wrong level of abstraction. And what I mean by that is the model puts in maybe a smaller band-aid when really the task is asking for a bigger refactor. And some of those, you know, is the model's fault, but a lot of times if you're just sort of seeing the GitHub issue, it's not exactly clear which way you should do. So even though these tasks are possible, there's still some ambiguity in how the tasks are described. That being said, I think in general, language models frequently will produce a smaller diff when possible, rather than trying to do a big refactor. I think another area, at least the agent we created, didn't have any multimodal abilities, even though our models are very good at vision. So I think that's just a missed opportunity. And if I read through some of the traces, there's some funny things where, especially the tasks on matplotlib, which is a graphing library, the test script will save an image and the model will just say, okay, it looks great, you know, without looking at it. So there's certainly extra juice to squeeze there of just making sure the model really understands all the sides of the input that it's given, including multimodal. But yeah, I think like getting to 92%. So this is something that I have not looked at, but I'm very curious about. I want someone to look at, like, what is the union of all of the different tasks that have been solved by at least one attempt at SWE-Bench Verified. There's a ton of submissions to the benchmark, and so I'd be really curious to see how many of those 500 tasks at least someone has solved. And I think, you know, there's probably a bunch that none of the attempts have ever solved. And I think it'd be interesting to look at those and say, hey, is there some problem with these? Like, are these impossible? Or are they just really hard and only a human could do them?Swyx [00:12:22]: Yeah, like specifically, is there a category of problems that are still unreachable by any LLM agent? Yeah, yeah. And I think there definitely are.Erik [00:12:28]: The question is, are those fairly inaccessible or are they just impossible because of the descriptions? But I think certainly some of the tasks, especially the ones that the human graders reviewed as like taking longer than four hours are extremely difficult. I think we got a few of them right, but not very many at all in the benchmark.Swyx [00:12:49]: And did those take less than four hours?Erik [00:12:51]: They certainly did less than, yeah, than four hours.Swyx [00:12:54]: Is there a correlation of length of time with like human estimated time? You know what I mean? Or do we have sort of more of X paradox type situations where it's something super easy for a model, but hard for a human?Erik [00:13:06]: I actually haven't done the stats on that, but I think that'd be really interesting to see of like how many tokens does it take and how is that correlated with difficulty? What is the likelihood of success with difficulty? I think actually a really interesting thing that I saw, one of my coworkers who was also working on this named Simon, he was focusing just specifically on the very hard problems, the ones that are said to take longer than four hours. And he ended up sort of creating a much more detailed prompt than I used. And he got a higher score on the most difficult subset of problems, but a lower score overall on the whole benchmark. 
And the prompt that I made, which is sort of much more simple and bare bones, got a higher score on the overall benchmark, but lower score on the really hard problems. And I think some of that is the really detailed prompt made the model sort of overcomplicate a lot of the easy problems, because honestly, a lot of the suite bench problems, they really do just ask for a bandaid where it's like, hey, this crashes if this is none, and really all you need to do is put a check if none. And so sometimes trying to make the model think really deeply, it'll think in circles and overcomplicate something, which certainly human engineers are capable of as well. But I think there's some interesting thing of the best prompt for hard problems might not be the best prompt for easy problems.Alessio [00:14:19]: How do we fix that? Are you supposed to fix it at the model level? How do I know what prompt I'm supposed to use?Swyx [00:14:25]: Yeah.Erik [00:14:26]: And I'll say this was a very small effect size, and so I think this isn't worth obsessing over. I would say that as people are building systems around agents, I think the more you can separate out the different kinds of work the agent needs to do, the better you can tailor a prompt for that task. And I think that also creates a lot of like, for instance, if you were trying to make an agent that could both solve hard programming tasks, and it could just write quick test files for something that someone else had already made, the best way to do those two tasks might be very different prompts. I see a lot of people build systems where they first sort of have a classification, and then route the problem to two different prompts. And that's sort of a very effective thing, because one, it makes the two different prompts much simpler and smaller, and it means you can have someone work on one of the prompts without any risk of affecting the other tasks. So it creates like a nice separation of concerns. Yeah.Alessio [00:15:21]: And the other model behavior thing you mentioned, they prefer to generate like shorter diffs. Why is that? Like, is there a way? I think that's maybe like the lazy model question that people have is like, why are you not just generating the whole code instead of telling me to implement it?Swyx [00:15:36]: Are you saving tokens? Yeah, exactly. It's like conspiracy theory. Yeah. Yeah.Erik [00:15:41]: Yeah. So there's two different things there. One is like the, I'd say maybe like doing the easier solution rather than the hard solution. And I'd say the second one, I think what you're talking about is like the lazy model is like when the model says like dot, dot, dot, code remains the same.Swyx [00:15:52]: Code goes here. Yeah. I'm like, thanks, dude.Erik [00:15:55]: But honestly, like that just comes as like people on the internet will do stuff like that. And like, dude, if you're talking to a friend and you ask them like to give you some example code, they would definitely do that. They're not going to reroll the whole thing. And so I think that's just a matter of like, you know, sometimes you actually do just, just want like the relevant changes. And so I think it's, this is something where a lot of times like, you know, the models aren't good at mind reading of like which one you want. So I think that like the more explicit you can be in prompting to say, Hey, you know, give me the entire thing, no, no elisions versus just give me the relevant changes. 
And that's something, you know, we want to make the models always better at following those kinds of instructions.Swyx [00:16:32]: I'll drop a couple of references here. We're recording this like a day after Dario, Lex Friedman just dropped his five hour pod with Dario and Amanda and the rest of the crew. And Dario actually made this interesting observation that like, we actually don't want, we complain about models being too chatty in text and then not chatty enough in code. And so like getting that right is kind of a awkward bar because, you know, you, you don't want it to yap in its responses, but then you also want it to be complete in, in code. And then sometimes it's not complete. Sometimes you just want it to diff, which is something that Enthopic has also released with a, you know, like the, the fast edit stuff that you guys did. And then the other thing I wanted to also double back on is the prompting stuff. You said, you said it was a small effect, but it was a noticeable effect in terms of like picking a prompt. I think we'll go into suite agent in a little bit, but I kind of reject the fact that, you know, you need to choose one prompt and like have your whole performance be predicated on that one prompt. I think something that Enthopic has done really well is meta prompting, prompting for a prompt. And so why can't you just develop a meta prompt for, for all the other prompts? And you know, if it's a simple task, make a simple prompt, if it's a hard task, make a hard prompt. Obviously I'm probably hand-waving a little bit, but I will definitely ask people to try the Enthopic Workbench meta prompting system if they haven't tried it yet. I went to the Build Day recently at Enthopic HQ, and it's the closest I've felt to an AGI, like learning how to operate itself that, yeah, it's, it's, it's really magical.Erik [00:17:57]: Yeah, no, Claude is great at writing prompts for Claude.Swyx [00:18:00]: Right, so meta prompting. Yeah, yeah.Erik [00:18:02]: The way I think about this is that humans, even like very smart humans still use sort of checklists and use sort of scaffolding for themselves. Surgeons will still have checklists, even though they're incredible experts. And certainly, you know, a very senior engineer needs less structure than a junior engineer, but there still is some of that structure that you want to keep. And so I always try to anthropomorphize the models and try to think about for a human sort of what is the equivalent. And that's sort of, you know, how I think about these things is how much instruction would you give a human with the same task? And do you, would you need to give them a lot of instruction or a little bit of instruction?Alessio [00:18:36]: Let's talk about the agent architecture maybe. So first, runtime, you let it run until it thinks it's done or it reaches 200k context window.Swyx [00:18:45]: How did you come up? What's up with that?Erik [00:18:47]: Yeah.Swyx [00:18:48]: Yeah.Erik [00:18:49]: I mean, this, so I'd say that a lot of previous agent work built sort of these very hard coded and rigid workflows where the model is sort of pushed through certain flows of steps. And I think to some extent, you know, that's needed with smaller models and models that are less smart. But one of the things that we really wanted to explore was like, let's really give Claude the reins here and not force Claude to do anything, but let Claude decide, you know, how it should approach the problem, what steps it should do. 
And so really, you know, what we did is like the most extreme version of this is just give it some tools that it can call and it's able to keep calling the tools, keep thinking, and then yeah, keep doing that until it thinks it's done. And that's sort of the most, the most minimal agent framework that we came up with. And I think that works very well. I think especially the new Sonnet 3.5 is very, very good at self-correction, has a lot of like grit. Claude will try things that fail and then try, you know, come back and sort of try different approaches. And I think that's something that you didn't see in a lot of previous models. Some of the existing agent frameworks that I looked at, they had whole systems built to try to detect loops and see, oh, is the model doing the same thing, you know, more than three times, then we have to pull it out. And I think like the smarter the models are, the less you need that kind of extra scaffolding. So yeah, just giving the model tools and letting it keep sample and call tools until it thinks it's done was the most minimal framework that we could think of. And so that's what we did.Alessio [00:20:18]: So you're not pruning like bad paths from the context. If it tries to do something, it fails. You just burn all these tokens.Swyx [00:20:25]: Yes.Erik [00:20:26]: I would say the downside of this is that this is sort of a very token expensive way to doSwyx [00:20:29]: this. But still, it's very common to prune bad paths because models get stuck. Yeah.Erik [00:20:35]: But I'd say that, yeah, 3.5 is not getting stuck as much as previous models. And so, yeah, we wanted to at least just try the most minimal thing. Now, I would say that, you know, this is definitely an area of future research, especially if we talk about these problems that are going to take a human more than four hours. Those might be things where we're going to need to go prune bad paths to let the model be able to accomplish this task within 200k tokens. So certainly I think there's like future research to be done in that area, but it's not necessary to do well on these benchmarks.Swyx [00:21:06]: Another thing I always have questions about on context window things, there's a mini cottage industry of code indexers that have sprung up for large code bases, like the ones in SweetBench. You didn't need them? We didn't.Erik [00:21:18]: And I think I'd say there's like two reasons for this. One is like SweetBench specific and the other is a more general thing. The more general thing is that I think Sonnet is very good at what we call agentic search. And what this basically means is letting the model decide how to search for something. It gets the results and then it can decide, should it keep searching or is it done? Does it have everything it needs? So if you read through a lot of the traces of the SweetBench, the model is calling tools to view directories, list out things, view files. And it will do a few of those until it feels like it's found the file where the bug is. And then it will start working on that file. And I think like, again, this is all, everything we did was about just giving Claude the full reins. So there's no hard-coded system. There's no search system that you're relying on getting the correct files into context. This just totally lets Claude do it.Swyx [00:22:11]: Or embedding things into a vector database. Exactly. Oops. No, no.Erik [00:22:17]: This is very, very token expensive. And so certainly, and it also takes many, many turns. 
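For readers who want to see what the "give Claude some tools and let it keep calling them until it thinks it's done" loop described above looks like concretely, here is a minimal sketch against the Anthropic Messages API. The bash tool, its description, the run_bash helper, and the model alias are illustrative assumptions, not the exact tools Anthropic shipped with its SWE-bench submission.

```python
# A minimal tool-calling loop: sample, run any requested tools, feed results back,
# and stop only when the model stops asking for tools.
import subprocess
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

tools = [{
    "name": "bash",
    "description": ("Run a shell command in the repository and return stdout and stderr. "
                    "Commands must terminate on their own; do not launch interactive programs."),
    "input_schema": {
        "type": "object",
        "properties": {"command": {"type": "string", "description": "The command to run."}},
        "required": ["command"],
    },
}]

def run_bash(command: str) -> str:
    result = subprocess.run(command, shell=True, capture_output=True, text=True, timeout=120)
    return (result.stdout + result.stderr)[-10_000:]  # truncate very long output

messages = [{"role": "user", "content": "Find and fix the bug described in ISSUE.md."}]

while True:
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumed model alias
        max_tokens=4096,
        tools=tools,
        messages=messages,
    )
    messages.append({"role": "assistant", "content": response.content})
    if response.stop_reason != "tool_use":
        break  # the model decided it is done

    # Execute every tool call in this turn and return the results as tool_result blocks.
    tool_results = []
    for block in response.content:
        if block.type == "tool_use":
            output = run_bash(**block.input)
            tool_results.append({"type": "tool_result", "tool_use_id": block.id, "content": output})
    messages.append({"role": "user", "content": tool_results})
```

In practice you would add a turn or token budget, since (as discussed above) the transcript grows with every tool result appended to the conversation.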
And so certainly if you want to do something in a single turn, you need to do RAG and just push stuff into the first prompt.Alessio [00:22:28]: And just to make it clear, it's using the Bash tool, basically doing LS, looking at files and then doing CAD for the following context. It can do that.Erik [00:22:35]: But it's file editing tool also has a command in it called view that can view a directory. It's very similar to LS, but it just sort of has some nice sort of quality of life improvements. So I think it'll only do an LS sort of two directories deep so that the model doesn't get overwhelmed if it does this on a huge file. I would say actually we did more engineering of the tools than the overall prompt. But the one other thing I want to say about this agentic search is that for SWE-Bench specifically, a lot of the tasks are bug reports, which means they have a stack trace in them. And that means right in that first prompt, it tells you where to go. And so I think this is a very easy case for the model to find the right files versus if you're using this as a general coding assistant where there isn't a stack trace or you're asking it to insert a new feature, I think there it's much harder to know which files to look at. And that might be an area where you would need to do more of this exhaustive search where an agentic search would take way too long.Swyx [00:23:33]: As someone who spent the last few years in the JS world, it'd be interesting to see SWE-Bench JS because these stack traces are useless because of so much virtualization that we do. So they're very, very disconnected with where the code problems are actually appearing.Erik [00:23:50]: That makes me feel better about my limited front-end experience, as I've always struggled with that problem.Swyx [00:23:55]: It's not your fault. We've gotten ourselves into a very, very complicated situation. And I'm not sure it's entirely needed. But if you talk to our friends at Vercel, they will say it is.Erik [00:24:04]: I will say SWE-Bench just released SWE-Bench Multimodal, which I believe is either entirely JavaScript or largely JavaScript. And it's entirely things that have visual components of them.Swyx [00:24:15]: Are you going to tackle that? We will see.Erik [00:24:17]: I think it's on the list and there's interest, but no guarantees yet.Swyx [00:24:20]: Just as a side note, it occurs to me that every model lab, including Enthopic, but the others as well, you should have your own SWE-Bench, whatever your bug tracker tool. This is a general methodology that you can use to track progress, I guess.Erik [00:24:34]: Yeah, sort of running on our own internal code base.Swyx [00:24:36]: Yeah, that's a fun idea.Alessio [00:24:37]: Since you spend so much time on the tool design, so you have this edit tool that can make changes and whatnot. Any learnings from that that you wish the AI IDEs would take in? Is there some special way to look at files, feed them in?Erik [00:24:50]: I would say the core of that tool is string replace. And so we did a few different experiments with different ways to specify how to edit a file. And string replace, basically, the model has to write out the existing version of the string and then a new version, and that just gets swapped in. We found that to be the most reliable way to do these edits. Other things that we tried were having the model directly write a diff, having the model fully regenerate files. 
That one is actually the most accurate, but it takes so many tokens, and if you're in a very big file, it's cost prohibitive. There's basically a lot of different ways to represent the same task. And they actually have pretty big differences in terms of model accuracy. I think Eider, they have a really good blog where they explore some of these different methods for editing files, and they post results about them, which I think is interesting. But I think this is a really good example of the broader idea that you need to iterate on tools rather than just a prompt. And I think a lot of people, when they make tools for an LLM, they kind of treat it like they're just writing an API for a computer, and it's sort of very minimal. It's sort of just the bare bones of what you'd need, and honestly, it's so hard for the models to use those. Again, I come back to anthropomorphizing these models. Imagine you're a developer, and you just read this for the very first time, and you're trying to use it. You can do so much better than just sort of the bare API spec of what you'd often see. Include examples in the description. Include really detailed explanations of how things work. And I think that, again, also think about what is the easiest way for the model to represent the change that it wants to make. For file editing, as an example, writing a diff is actually... Let's take the most extreme example. You want the model to literally write a patch file. I think patch files have at the very beginning numbers of how many total lines change. That means before the model has actually written the edit, it needs to decide how many numbers or how many lines are going to change.Swyx [00:26:52]: Don't quote me on that.Erik [00:26:54]: I think it's something like that, but I don't know if that's exactly the diff format. But you can certainly have formats that are much easier to express without messing up than others. And I like to think about how much human effort goes into designing human interfaces for things. It's incredible. This is entirely what FrontEnd is about, is creating better interfaces to kind of do the same things. And I think that same amount of attention and effort needs to go into creating agent computer interfaces.Swyx [00:27:19]: It's a topic we've discussed, ACI or whatever that looks like. I would also shout out that I think you released some of these toolings as part of computer use as well. And people really liked it. It's all open source if people want to check it out. I'm curious if there's an environment element that complements the tools. So how do you... Do you have a sandbox? Is it just Docker? Because that can be slow or resource intensive. Do you have anything else that you would recommend?Erik [00:27:47]: I don't think I can talk about sort of public details or about private details about how we implement our sandboxing. But obviously, we need to have sort of safe, secure, and fast sandboxes for training for the models to be able to practice writing code and working in an environment.Swyx [00:28:03]: I'm aware of a few startups working on agent sandboxing. E2B is a close friend of ours that Alessio has led around in, but also I think there's others where they're focusing on snapshotting memory so that it can do time travel for debugging. Computer use where you can control the mouse or keyboard or something like that. Whereas here, I think that the kinds of tools that we offer are very, very limited to coding agent work cases like bash, edit, you know, stuff like that. 
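As a rough illustration of the string-replace editing format Erik describes, here is a hypothetical tool definition and handler. It is a sketch of the general idea, not Anthropic's released text-editor tool; note how the error messages are written for the model to act on, and how the absolute-path requirement (discussed later in the conversation) is enforced by the tool itself.

```python
# A sketch of a string-replace edit tool: the model supplies the exact existing
# text and its replacement, and the tool refuses ambiguous or malformed edits.
from pathlib import Path

edit_tool = {
    "name": "str_replace",
    "description": ("Replace one occurrence of old_str with new_str in the file at path. "
                    "path must be an absolute path. old_str must appear exactly once in the file; "
                    "include enough surrounding lines to make it unique."),
    "input_schema": {
        "type": "object",
        "properties": {
            "path": {"type": "string", "description": "Absolute path to the file to edit."},
            "old_str": {"type": "string", "description": "Exact existing text to replace."},
            "new_str": {"type": "string", "description": "Text to insert in its place."},
        },
        "required": ["path", "old_str", "new_str"],
    },
}

def str_replace(path: str, old_str: str, new_str: str) -> str:
    """Apply the edit and return a message the model can act on."""
    file = Path(path)
    if not file.is_absolute():
        return "Error: path must be an absolute path."
    text = file.read_text()
    count = text.count(old_str)
    if count == 0:
        return "Error: old_str was not found in the file. Re-read the file and try again."
    if count > 1:
        return f"Error: old_str occurs {count} times; include more context to make it unique."
    file.write_text(text.replace(old_str, new_str, 1))
    return "Edit applied successfully."
```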
Yeah.Erik [00:28:30]: I think the computer use demo that we released is an extension of that. It has the same bash and edit tools, but it also has the computer tool that lets it get screenshots and move the mouse and keyboard. Yeah. So I definitely think there's sort of more general tools there. And again, the tools we released as part of SweetBench were, I'd say they're very specific for like editing files and doing bash, but at the same time, that's actually very general if you think about it. Like anything that you would do on a command line or like editing files, you can do with those tools. And so we do want those tools to feel like any sort of computer terminal work could be done with those same tools rather than making tools that were like very specific for SweetBench like run tests as its own tool, for instance. Yeah.Swyx [00:29:15]: You had a question about tests.Alessio [00:29:16]: Yeah, exactly. I saw there's no test writer tool. Is it because it generates the code and then you're running it against SweetBench anyway, so it doesn't really need to write the test or?Swyx [00:29:26]: Yeah.Erik [00:29:27]: So this is one of the interesting things about SweetBench is that the tests that the model's output is graded on are hidden from it. That's basically so that the model can't cheat by looking at the tests and writing the exact solution. And I'd say typically the model, the first thing it does is it usually writes a little script to reproduce the error. And again, most SweetBench tasks are like, hey, here's a bug that I found. I run this and I get this error. So the first thing the model does is try to reproduce that. So it's kind of been rerunning that script as a mini test. But yeah, sometimes the model will like accidentally introduce a bug that breaks some other tests and it doesn't know about that.Alessio [00:30:05]: And should we be redesigning any tools? We kind of talked about this and like having more examples, but I'm thinking even things of like Q as a query parameter in many APIs, it's like easier for the model to like re-query than read the Q. I'm sure it learned the Q by this point, but like, is there anything you've seen like building this where it's like, hey, if I were to redesign some CLI tools, some API tool, I would like change the way structure to make it better for LLMs?Erik [00:30:31]: I don't think I've thought enough about that off the top of my head, but certainly like just making everything more human friendly, like having like more detailed documentation and examples. I think examples are really good in things like descriptions, like so many, like just using the Linux command line, like how many times I do like dash dash help or look at the man page or something. It's like, just give me one example of like how I actually use this. Like I don't want to go read through a hundred flags. Just give me the most common example. But again, so you know, things that would be useful for a human, I think are also very useful for a model.Swyx [00:31:03]: Yeah. I mean, there's one thing that you cannot give to code agents that is useful for human is this access to the internet. I wonder how to design that in, because one of the issues that I also had with just the idea of a suite bench is that you can't do follow up questions. You can't like look around for similar implementations. These are all things that I do when I try to fix code and we don't do that. 
It's not, it wouldn't be fair, like it'd be too easy to cheat, but then also it's kind of not being fair to these agents because they're not operating in a real world situation. Like if I had a real world agent, of course I'm giving it access to the internet because I'm not trying to pass a benchmark. I don't have a question in there more, more just like, I feel like the most obvious tool access to the internet is not being used.Erik [00:31:47]: I think that that's really important for humans, but honestly the models have so much general knowledge from pre-training that it's, it's like less important for them. I feel like versioning, you know, if you're working on a newer thing that was like, they came after the knowledge cutoff, then yes, I think that's very important. I think actually this, this is like a broader problem that there is a divergence between Sweebench and like what customers will actually care about who are working on a coding agent for real use. And I think one of those there is like internet access and being able to like, how do you pull in outside information? I think another one is like, if you have a real coding agent, you don't want to have it start on a task and like spin its wheels for hours because you gave it a bad prompt. You want it to come back immediately and ask follow up questions and like really make sure it has a very detailed understanding of what to do, then go off for a few hours and do work. So I think that like real tasks are going to be much more interactive with the agent rather than this kind of like one shot system. And right now there's no benchmark that, that measures that. And maybe I think it'd be interesting to have some benchmark that is more interactive. I don't know if you're familiar with TauBench, but it's a, it's a customer service benchmark where there's basically one LLM that's playing the user or the customer that's getting support and another LLM that's playing the support agent and they interact and try to resolve the issue.Swyx [00:33:08]: Yeah. We talked to the LMSIS guys. Awesome. And they also did MTBench for people listening along. So maybe we need MTSWE-Bench. Sure. Yeah.Erik [00:33:16]: So maybe, you know, you could have something where like before the SWE-Bench task starts, you have like a few back and forths with kind of like the, the author who can answer follow up questions about what they want the task to do. And of course you'd need to do that where it doesn't cheat and like just get the exact, the exact thing out of the human or out of the sort of user. But I think that would be a really interesting thing to see. If you look at sort of existing agent work, like a Repl.it's coding agent, I think one of the really great UX things they do is like first having the agent create a plan and then having the human approve that plan or give feedback. I think for agents in general, like having a planning step at the beginning, one, just having that plan will improve performance on the downstream task just because it's kind of like a bigger chain of thought, but also it's just such a better UX. It's way easier for a human to iterate on a plan with a model rather than iterating on the full task that sort of has a much slower time through each loop. If the human has approved this implementation plan, I think it makes the end result a lot more sort of auditable and trustable. So I think there's a lot of things sort of outside of SweetBench that will be very important for real agent usage in the world. 
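A bare-bones sketch of the plan-then-approve flow mentioned above, where a human signs off on a cheap plan before the agent spends hours (and tokens) on execution. The prompts and model alias are assumptions for illustration only, not any specific product's implementation.

```python
# Plan first, get approval, then execute: the plan is cheap to review and easy
# to iterate on, and the approved plan becomes context for the real work.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-3-5-sonnet-latest"  # assumed model alias

task = "Add retry logic with exponential backoff to our HTTP client."

plan = client.messages.create(
    model=MODEL,
    max_tokens=1024,
    messages=[{"role": "user",
               "content": f"Write a short numbered implementation plan for: {task}. Do not write code yet."}],
).content[0].text

print(plan)
if input("Approve this plan? [y/N] ").lower() != "y":
    raise SystemExit("Plan rejected; refine the task description and try again.")

implementation = client.messages.create(
    model=MODEL,
    max_tokens=4096,
    messages=[{"role": "user",
               "content": f"Task: {task}\n\nApproved plan:\n{plan}\n\nNow implement it."}],
).content[0].text
print(implementation)
```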
Yeah.Swyx [00:34:27]: I will say also, there's a couple of comments on names that you dropped. Copilot also does the plan stage before it writes code. I feel like those approaches have generally been less Twitter successful because it's not prompt to code, it's prompt plan code. You know, so there's a little bit of friction in there, but it's not much. Like it's, it actually, it's, it, you get a lot for what it's worth. I also like the way that Devin does it, where you can sort of edit the plan as it goes along. And then the other thing with Repl.it, we had a, we hosted a sort of dev day pregame with Repl.it and they also commented about multi-agents. So like having two agents kind of bounce off of each other. I think it's a similar approach to what you're talking about with kind of the few shot example, just as in the prompts of clarifying what the agent wants. But typically I think this would be implemented as a tool calling another agent, like a sub-agent I don't know if you explored that, do you like that idea?Erik [00:35:20]: I haven't explored this enough, but I've definitely heard of people having good success with this. Of almost like basically having a few different sort of personas of agents, even if they're all the same LLM. I think this is one thing with multi-agent that a lot of people will kind of get confused by is they think it has to be different models behind each thing. But really it's sort of usually the same, the same model with different prompts. And yet having one, having them have different personas to kind of bring different sort of thoughts and priorities to the table. I've seen that work very well and sort of create a much more thorough and thought outSwyx [00:35:53]: response.Erik [00:35:53]: I think the downside is just that it adds a lot of complexity and it adds a lot of extra tokens. So I think it depends what you care about. If you want a plan that's very thorough and detailed, I think it's great. If you want a really quick, just like write this function, you know, you probably don't want to do that and have like a bunch of different calls before it does this.Alessio [00:36:11]: And just talking about the prompt, why are XML tags so good in Cloud? I think initially people were like, oh, maybe you're just getting lucky with XML. But I saw obviously you use them in your own agent prompts, so they must work. And why is it so model specific to your family?Erik [00:36:26]: Yeah, I think that there's, again, I'm not sure how much I can say, but I think there's historical reasons that internally we've preferred XML. I think also the one broader thing I'll say is that if you look at certain kinds of outputs, there is overhead to outputting in JSON. If you're trying to output code in JSON, there's a lot of extra escaping that needs to be done, and that actually hurts model performance across the board. Versus if you're in just a single XML tag, there's none of that sort of escaping thatSwyx [00:36:58]: needs to happen.Erik [00:36:58]: That being said, I haven't tried having it write HTML and XML, which maybe then you start running into weird escaping things there. I'm not sure. But yeah, I'd say that's some historical reasons, and there's less overhead of escaping.Swyx [00:37:12]: I use XML in other models as well, and it's just a really nice way to make sure that the thing that ends is tied to the thing that starts. 
That's the only way to do code fences where you're pretty sure example one start, example one end, that is one cohesive unit.Alessio [00:37:30]: Because the braces are nondescriptive. Yeah, exactly.Swyx [00:37:33]: That would be my simple reason. XML is good for everyone, not just Cloud. Cloud was just the first one to popularize it, I think.Erik [00:37:39]: I do definitely prefer to read XML than read JSON.Alessio [00:37:43]: Any other details that are maybe underappreciated? I know, for example, you had the absolute paths versus relative. Any other fun nuggets?Erik [00:37:52]: I think that's a good sort of anecdote to mention about iterating on tools. Like I said, spend time prompt engineering your tools, and don't just write the prompt, but write the tool, and then actually give it to the model and read a bunch of transcripts about how the model tries to use the tool. I think by doing that, you will find areas where the model misunderstands a tool or makes mistakes, and then basically change the tool to make it foolproof. There's this Japanese term, pokayoke, about making tools mistake-proof. You know, the classic idea is you can have a plug that can fit either way, and that's dangerous, or you can make it asymmetric so that it can't fit this way, it has to go like this, and that's a better tool because you can't use it the wrong way. So for this example of absolute paths, one of the things that we saw while testing these tools is, oh, if the model has done CD and moved to a different directory, it would often get confused when trying to use the tool because it's now in a different directory, and so the paths aren't lining up. So we said, oh, well, let's just force the tool to always require an absolute path, and then that's easy for the model to understand. It knows sort of where it is. It knows where the files are. And then once we have it always giving absolute paths, it never messes up even, like, no matter where it is because it just, if you're using an absolute path, it doesn't matter whereSwyx [00:39:13]: you are.Erik [00:39:13]: So iterations like that, you know, let us make the tool foolproof for the model. I'd say there's other categories of things where we see, oh, if the model, you know, opens vim, like, you know, it's never going to return. And so the tool is stuck.Swyx [00:39:28]: Did it get stuck? Yeah. Get out of vim. What?Erik [00:39:31]: Well, because the tool is, like, it just text in, text out. It's not interactive. So it's not like the model doesn't know how to get out of vim. It's that the way that the tool is, like, hooked up to the computer is not interactive. Yes, I mean, there is the meme of no one knows how to get out of vim. You know, basically, we just added instructions in the tool of, like, hey, don't launch commands that don't return.Swyx [00:39:54]: Yeah, like, don't launch vim.Erik [00:39:55]: Don't launch whatever. If you do need to do something, you know, put an ampersand after it to launch it in the background. And so, like, just, you know, putting kind of instructions like that just right in the description for the tool really helps the model. And I think, like, that's an underutilized space of prompt engineering, where, like, people might try to do that in the overall prompt, but just put that in the tool itself so the model knows that it's, like, for this tool, this is what's relevant.Swyx [00:40:20]: You said you worked on the function calling and tool use before you actually started this vBench work, right? Was there any surprises? 
Because you basically went from creator of that API to user of that API. Any surprises or changes you would make now that you have extensively dog-fooded in a state-of-the-art agent?Erik [00:40:39]: I want us to make, like, maybe, like, a little bit less verbose SDK. I think some way, like, right now, it just takes, I think we sort of force people to do the best practices of writing out sort of these full JSON schemas, but it would be really nice if you could just pass in a Python function as a tool. I think that could be something nice.Swyx [00:40:58]: I think that there's a lot of, like, Python- There's helper libraries. ... structure, you know. I don't know if there's anyone else that is specializing for Anthropic. Maybe Jeremy Howard's and Simon Willis's stuff. They all have Cloud-specific stuff that they are working on. Cloudette. Cloudette, exactly. I also wanted to spend a little bit of time with SuiteAgent. It seems like a very general framework. Like, is there a reason you picked it apart from it's the same authors as vBench, or?Erik [00:41:21]: The main thing we wanted to go with was the same authors as vBench, so it just felt sort of like the safest, most neutral option. And it was, you know, very high quality. It was very easy to modify, to work with. I would say it also actually, their underlying framework is sort of this, it's like, youSwyx [00:41:39]: know, think, act, observe.Erik [00:41:40]: That they kind of go through this loop, which is like a little bit more hard-coded than what we wanted to do, but it's still very close. That's still very general. So it felt like a good match as sort of the starting point for our agent. And we had already sort of worked with and talked with the SWE-Bench people directly, so it felt nice to just have, you know, we already know the authors. This will be easy to work with.Swyx [00:42:00]: I'll share a little bit of like, this all seems disconnected, but once you figure out the people and where they go to school, it all makes sense. So it's all Princeton. Yeah, the SWE-Bench and SuiteAgent.Erik [00:42:11]: It's a group out of Princeton.Swyx [00:42:12]: Yeah, and we had Shun Yu on the pod, and he came up with the React paradigm, and that's think, act, observe. That's all React. So they're all friends. Yep, yeah, exactly.Erik [00:42:22]: And you know, if you actually read our traces of our submission, you can actually see like think, act, observe in our logs. And we just didn't even change the printing code. So it's like doing still function calls under the hood, and the model can do sort of multiple function calls in a row without thinking in between if it wants to. But yeah, so a lot of similarities and a lot of things we inherited from SuiteAgent just as a starting point for the framework.Alessio [00:42:47]: Any thoughts about other agent frameworks? I think there's, you know, the whole gamut from very simple to like very complex.Swyx [00:42:53]: Autogen, CooEI, LandGraph. Yeah, yeah.Erik [00:42:56]: I think I haven't explored a lot of them in detail. I would say with agent frameworks in general, they can certainly save you some like boilerplate. But I think there's actually this like downside of making agents too easy, where you end up very quickly like building a much more complex system than you need. And suddenly, you know, instead of having one prompt, you have five agents that are talking to each other and doing a dialogue. 
And it's like, because the framework made that 10 lines to do, you end up building something that's way too complex. So I think I would actually caution people to like try to start without these frameworks if you can, because you'll be closer to the raw prompts and be able to sort of directly understand what's going on. I think a lot of times these frameworks also, by trying to make everything feel really magical, you end up sort of really hiding what the actual prompt and output of the model is, and that can make it much harder to debug. So certainly these things have a place, and I think they do really help at getting rid of boilerplate, but they come with this cost of obfuscating what's really happening and making it too easy to very quickly add a lot of complexity. So yeah, I would recommend people to like try it from scratch, and it's like not that bad.Alessio [00:44:08]: Would you rather have like a framework of tools? Do you almost see like, hey, it's maybe easier to get tools that are already well curated, like the ones that you build, if I had an easy way to get the best tool from you, andSwyx [00:44:21]: like you maintain the definition?Alessio [00:44:22]: Or yeah, any thoughts on how you want to formalize tool sharing?Erik [00:44:26]: Yeah, I think that's something that we're certainly interested in exploring, and I think there is space for sort of these general tools that will be very broadly applicable. But at the same time, most people that are building on these, they do have much more specific things that they're trying to do. You know, I think that might be useful for hobbyists and demos, but the ultimate end applications are going to be bespoke. And so we just want to make sure that the model's great at any tool that it uses. But certainly something we're exploring.Alessio [00:44:52]: So everything bespoke, no frameworks, no anything.Swyx [00:44:55]: Just for now, for now.Erik [00:44:56]: Yeah, I would say that like the best thing I've seen is people building up from like, build some good util functions, and then you can use those as building blocks. Yeah, yeah.Alessio [00:45:05]: I have a utils folder, or like all these scripts. My framework is like def, call, and tropic. And then I just put all the defaults.Swyx [00:45:12]: Yeah, exactly. There's a startup hidden in every utils folder, you know? No, totally not. Like, if you use it enough, like it's a startup, you know? At some point. I'm kind of curious, is there a maximum length of turns that it took? Like, what was the longest run? I actually don't.Erik [00:45:27]: I mean, it had basically infinite turns until it ran into a 200k context. I should have looked this up. I don't know. And so for some of those failed cases where it eventually ran out of context, I mean, it was over 100 turns. I'm trying to remember like the longest successful run, but I think it was definitely over 100 turns that some of the times.Swyx [00:45:48]: Which is not that much. It's a coffee break. Yeah.Erik [00:45:52]: But certainly, you know, these things can be a lot of turns. And I think that's because some of these things are really hard, where it's going to take, you know, many tries to do it. And if you think about like, think about a task that takes a human four hours to do. Think about how many different files you read, and like times you edit a file in four hours. That's a lot more than 100.Alessio [00:46:10]: How many times you open Twitter because you get distracted. 
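The "utils folder instead of a framework" approach being joked about might look roughly like the sketch below: one thin wrapper with sane defaults, so the raw prompt and raw response stay visible. The helper name, defaults, and model alias are assumptions, not anyone's actual setup.

```python
# A utils-folder "framework": one helper with defaults, no agent abstraction.
import anthropic

_client = anthropic.Anthropic()

def call_anthropic(prompt, system="", model="claude-3-5-sonnet-latest",
                   max_tokens=2048, tools=None):
    """Single entry point; keeps what goes in and what comes out easy to inspect."""
    kwargs = dict(
        model=model,
        max_tokens=max_tokens,
        system=system,
        messages=[{"role": "user", "content": prompt}],
    )
    if tools:
        kwargs["tools"] = tools
    response = _client.messages.create(**kwargs)
    return response.content[0].text  # assumes a plain text reply for simplicity

if __name__ == "__main__":
    print(call_anthropic("Explain poka-yoke tool design in one sentence."))
```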
But if you had a lot more compute, what's kind of like the return on the extra compute now? So like, you know, if you had thousands of turns or like whatever, like how much better would it get?Erik [00:46:23]: Yeah, this I don't know. And I think this is, I think sort of one of the open areas of research in general with agents is memory and sort of how do you have something that can do work beyond its context length where you're just purely appending. So you mentioned earlier things like pruning bad paths. I think there's a lot of interesting work around there. Can you just roll back but summarize, hey, don't go down this path? There be dragons. Yeah, I think that's very interesting that you could have something that that uses way more tokens without ever using at a time more than 200k. So I think that's very interesting. I think the biggest thing is like, can you make the model sort of losslessly summarize what it's learned from trying different approaches and bring things back? I think that's sort of the big challenge.Swyx [00:47:11]: What about different models?Alessio [00:47:12]: So you have Haiku, which is like, you know, cheaper. So you're like, well, what if I have a Haiku to do a lot of these smaller things and then put it back up?Erik [00:47:20]: I think Cursor might have said that they actually have a separate model for file editing.Swyx [00:47:25]: I'm trying to remember.Erik [00:47:25]: I think they were on maybe the Lex Fridman podcast where they said they have a bigger model, like write what the code should be and then a different model, like apply it. So I think there's a lot of interesting room for stuff like that. Yeah, fast supply.Swyx [00:47:37]: We actually did a pod with Fireworks that they worked with on. It's speculative decoding.Erik [00:47:41]: But I think there's also really interesting things about like, you know, paring down input tokens as well, especially sometimes the models trying to read like a 10,000 line file. That's a lot of tokens. And most of it is actually not going to be relevant. I think it'd be really interesting to like delegate that to Haiku. Haiku read this file and just pull out the most relevant functions. And then, you know, Sonnet reads just those and you save 90% on tokens. I think there's a lot of really interesting room for things like that. And again, we were just trying to do sort of the simplest, most minimal thing and show that it works. I'm really hoping that people, sort of the agent community builds things like that on top of our models. That's, again, why we released these tools. We're not going to go and do lots more submissions to SWE-Bench and try to prompt engineer this and build a bigger system. We want people to like the ecosystem to do that on top of our models. But yeah, so I think that's a really interesting one.Swyx [00:48:32]: It turns out, I think you did do 3.5 Haiku with your tools and it scored a 40.6. Yes.Erik [00:48:38]: So it did very well. It itself is actually very smart, which is great. But we haven't done any experiments with this combination of the two models. But yeah, I think that's one of the exciting things is that how well Haiku 3.5 did on SWE-Bench shows that sort of even our smallest, fastest model is very good at sort of thinking agentically and working on hard problems. Like it's not just sort of for writing simple text anymore.Alessio [00:49:02]: And I know you're not going to talk about it, but like Sonnet is not even supposed to be the best model, you know? 
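The "let Haiku pre-read the 10,000-line file" idea floated above could be sketched as a two-stage call: a cheap model extracts only the relevant functions, then the larger model works on that excerpt. Model aliases, prompts, and function names here are illustrative assumptions, not a described implementation.

```python
# Sketch of delegating file reading to a small model before the big model works.
import anthropic

client = anthropic.Anthropic()

def relevant_excerpt(file_text: str, task: str) -> str:
    """Ask a small, cheap model to pull out only the parts of a file that matter."""
    reply = client.messages.create(
        model="claude-3-5-haiku-latest",  # assumed small-model alias
        max_tokens=2000,
        messages=[{
            "role": "user",
            "content": f"Task: {task}\n\nReturn, verbatim, only the function "
                       f"definitions in this file relevant to the task:\n\n{file_text}",
        }],
    )
    return reply.content[0].text

def solve(task: str, path: str) -> str:
    with open(path) as f:
        excerpt = relevant_excerpt(f.read(), task)
    reply = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumed large-model alias
        max_tokens=4000,
        messages=[{
            "role": "user",
            "content": f"{task}\n\nRelevant code from {path}:\n\n{excerpt}",
        }],
    )
    return reply.content[0].text
```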
Like Opus, it's kind of like we left it at three back in the corner intro. At some point, I'm sure the new Opus will come out. And if you had Opus Plus on it, that sounds very, very good.Swyx [00:49:19]: There's a run with SuiteAgent plus Opus, but that's the official SWE-Bench guys doing it.Erik [00:49:24]: That was the older, you know, 3.0.Swyx [00:49:25]: You didn't do yours. Yeah. Okay. Did you want to? I mean, you could just change the model name.Erik [00:49:31]: I think we didn't submit it, but I think we included it in our model card.Swyx [00:49:35]: Okay.Erik [00:49:35]: We included the score as a comparison. Yeah.Swyx [00:49:38]: Yeah.Erik [00:49:38]: And Sonnet and Haiku, actually, I think the new ones, they both outperformed the original Opus. Yeah. I did see that.Swyx [00:49:44]: Yeah. It's a little bit hard to find. Yeah.Erik [00:49:47]: It's not an exciting score, so we didn't feel like they need to submit it to the benchmark.Swyx [00:49:52]: We can cut over to computer use if we're okay with moving on to topics on this, if anything else. I think we're good.Erik [00:49:58]: I'm trying to think if there's anything else SWE-Bench related.Swyx [00:50:02]: It doesn't have to be also just specifically SWE-Bench, but just your thoughts on building agents, because you are one of the few people that have reached this leaderboard on building a coding agent. This is the state of the art. It's surprisingly not that hard to reach with some good principles. Right. There's obviously a ton of low-hanging fruit that we covered. Your thoughts on if you were to build a coding agent startup, what next?Erik [00:50:24]: I think the really interesting question for me, for all the startups out there, is this kind of divergence between the benchmarks and what real customers will want. So I'm curious, maybe the next time you have a coding agent startup on the podcast, you should ask them that. What are the differences that they're starting to make? Tomorrow.Swyx [00:50:40]: Oh, perfect, perfect. Yeah.Erik [00:50:41]: I'm actually very curious what they will see, because I also have seen, I feel like it's slowed down a little bit if I don't see the startups submitting to SWE-Bench that much anymore.Swyx [00:50:52]: Because of the traces, the trace. So we had Cosign on, they had a 50-something on full, on SWE-Bench full, which is the hardest one, and they were rejected because they didn't want to submit their traces. Yep. IP, you know? Yeah, that makes sense, that makes sense. Actually, tomorrow we're talking to Bolt, which is a cloud customer. You guys actually published a case study with them. I assume you weren't involved with that, but they were very happy with Cloud. Cool. One of the biggest launches of the year. Yeah, totally. We actually happened to b

EventUp
87. Driving Brand Success Through Creativity and Inclusivity with Lisa Lee formerly with Diageo

EventUp

Play Episode Listen Later Nov 20, 2024 40:52


Lisa Lee, former Director of Smirnoff Pre-Mix and Lone River at Diageo, joins Amanda Ma, CEO & Founder of Innovate Marketing Group, to discover strategies for brand reinvention, cultural relevance, and inclusive marketing. Listen now! About the guest: Lisa is a Results-driven Marketing executive with over 20 years of experience in building and scaling brands. Expertise in brand strategy, innovation, digital marketing and consumer engagement. Proven track record of driving revenue growth, leading brand reinventions including enhancing brand equity, and leading high-performance marketing teams. Adept at leveraging data-driven insights to craft compelling marketing strategies in fast-paced dynamic environments. EventUp is brought to you by Innovate Marketing Group. An award-winning Corporate Event and Experiential Marketing Agency based in Los Angeles, California. Creating Nationwide Immersive Event Experiences to help brands connect with people. To learn more, click here. Follow us! Find us on LinkedIn, EventUp Podcast, and Instagram

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

We are recording our next big recap episode and taking questions! Submit questions and messages on Speakpipe here for a chance to appear on the show!Also subscribe to our calendar for our Singapore, NeurIPS, and all upcoming meetups!In our first ever episode with Logan Kilpatrick we called out the two hottest LLM frameworks at the time: LangChain and Dust. We've had Harrison from LangChain on twice (as a guest and as a co-host), and we've now finally come full circle as Stanislas from Dust joined us in the studio.After stints at Oracle and Stripe, Stan had joined OpenAI to work on mathematical reasoning capabilities. He describes his time at OpenAI as "the PhD I always wanted to do" while acknowledging the challenges of research work: "You're digging into a field all day long for weeks and weeks, and you find something, you get super excited for 12 seconds. And at the 13 seconds, you're like, 'oh, yeah, that was obvious.' And you go back to digging." This experience, combined with early access to GPT-4's capabilities, shaped his decision to start Dust: "If we believe in AGI and if we believe the timelines might not be too long, it's actually the last train leaving the station to start a company. After that, it's going to be computers all the way down."The History of DustDust's journey can be broken down into three phases:* Developer Framework (2022): Initially positioned as a competitor to LangChain, Dust started as a developer tooling platform. While both were open source, their approaches differed – LangChain focused on broad community adoption and integration as a pure developer experience, while Dust emphasized UI-driven development and better observability that wasn't just `print` statements.* Browser Extension (Early 2023): The company pivoted to building XP1, a browser extension that could interact with web content. This experiment helped validate user interaction patterns with AI, even while using less capable models than GPT-4.* Enterprise Platform (Current): Today, Dust has evolved into an infrastructure platform for deploying AI agents within companies, with impressive metrics like 88% daily active users in some deployments.The Case for Being HorizontalThe big discussion for early stage companies today is whether or not to be horizontal or vertical. Since models are so good at general tasks, a lot of companies are building vertical products that take care of a workflow end-to-end in order to offer more value and becoming more of “Services as Software”. Dust on the other hand is a platform for the users to build their own experiences, which has had a few advantages:* Maximum Penetration: Dust reports 60-70% weekly active users across entire companies, demonstrating the potential reach of horizontal solutions rather than selling into a single team.* Emergent Use Cases: By allowing non-technical users to create agents, Dust enables use cases to emerge organically from actual business needs rather than prescribed solutions.* Infrastructure Value: The platform approach creates lasting value through maintained integrations and connections, similar to how Stripe's value lies in maintaining payment infrastructure. Rather than relying on third-party integration providers, Dust maintains its own connections to ensure proper handling of different data types and structures.The Vertical ChallengeHowever, this approach comes with trade-offs:* Harder Go-to-Market: As Stan talked about: "We spike at penetration... but it makes our go-to-market much harder. 
Vertical solutions have a go-to-market that is much easier because they're like, 'oh, I'm going to solve the lawyer stuff.'"* Complex Infrastructure: Building a horizontal platform requires maintaining numerous integrations and handling diverse data types appropriately – from structured Salesforce data to unstructured Notion pages. As you scale integrations, the cost of maintaining them also scales. * Product Surface Complexity: Creating an interface that's both powerful and accessible to non-technical users requires careful design decisions, down to avoiding technical terms like "system prompt" in favor of "instructions." The Future of AI PlatformsStan initially predicted we'd see the first billion-dollar single-person company in 2023 (a prediction later echoed by Sam Altman), but he's now more focused on a different milestone: billion-dollar companies with engineering teams of just 20 people, enabled by AI assistance.This vision aligns with Dust's horizontal platform approach – building the infrastructure that allows small teams to achieve outsized impact through AI augmentation. Rather than replacing entire job functions (the vertical approach), they're betting on augmenting existing workflows across organizations.Full YouTube EpisodeChapters* 00:00:00 Introductions* 00:04:33 Joining OpenAI from Paris* 00:09:54 Research evolution and compute allocation at OpenAI* 00:13:12 Working with Ilya Sutskever and OpenAI's vision* 00:15:51 Leaving OpenAI to start Dust* 00:18:15 Early focus on browser extension and WebGPT-like functionality* 00:20:20 Dust as the infrastructure for agents* 00:24:03 Challenges of building with early AI models* 00:28:17 LLMs and Workflow Automation* 00:35:28 Building dependency graphs of agents* 00:37:34 Simulating API endpoints* 00:40:41 State of AI models* 00:43:19 Running evals* 00:46:36 Challenges in building AI agents infra* 00:49:21 Buy vs. build decisions for infrastructure components* 00:51:02 Future of SaaS and AI's Impact on Software* 00:53:07 The single employee $1B company race* 00:56:32 Horizontal vs. vertical approaches to AI agentsTranscriptAlessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.Swyx [00:00:11]: Hey, and today we're in a studio with Stanislas, welcome.Stan [00:00:14]: Thank you very much for having me.Swyx [00:00:16]: Visiting from Paris.Stan [00:00:17]: Paris.Swyx [00:00:18]: And you have had a very distinguished career. It's very hard to summarize, but you went to college in both Ecopolytechnique and Stanford, and then you worked in a number of places, Oracle, Totems, Stripe, and then OpenAI pre-ChatGPT. We'll talk, we'll spend a little bit of time about that. About two years ago, you left OpenAI to start Dust. I think you were one of the first OpenAI alum founders.Stan [00:00:40]: Yeah, I think it was about at the same time as the Adept guys, so that first wave.Swyx [00:00:46]: Yeah, and people really loved our David episode. We love a few sort of OpenAI stories, you know, for back in the day, like we're talking about pre-recording. Probably the statute of limitations on some of those stories has expired, so you can talk a little bit more freely without them coming after you. But maybe we'll just talk about, like, what was your journey into AI? You know, you were at Stripe for almost five years, there are a lot of Stripe alums going into OpenAI. 
I think the Stripe culture has come into OpenAI quite a bit.Stan [00:01:11]: Yeah, so I think the buses of Stripe people really started flowing in, I guess, after ChatGPT. But, yeah, my journey into AI is a... I mean, Greg Brockman. Yeah, yeah. From Greg, of course. And Daniela, actually, back in the days, Daniela Amodei.Swyx [00:01:27]: Yes, she was COO, I mean, she is COO, yeah. She had a pretty high job at OpenAI at the time, yeah, for sure.Stan [00:01:34]: My journey started as anybody else, you're fascinated with computer science and you want to make them think, it's awesome, but it doesn't work. I mean, it was a long time ago, it was like maybe 16, so it was 25 years ago. Then the first big exposure to AI would be at Stanford, and I'm going to, like, disclose a whole lamb, because at the time it was a class taught by Andrew Ng, and there was no deep learning. It was half features for vision and a star algorithm. So it was fun. But it was the early days of deep learning. At the time, I think a few years after, it was the first project at Google. But you know, that cat face or the human face trained from many images. I went to, hesitated doing a PhD, more in systems, eventually decided to go into getting a job. Went at Oracle, started a company, did a gazillion mistakes, got acquired by Stripe, worked with Greg Buckman there. And at the end of Stripe, I started interesting myself in AI again, felt like it was the time, you had the Atari games, you had the self-driving craziness at the time. And I started exploring projects, it felt like the Atari games were incredible, but there were still games. And I was looking into exploring projects that would have an impact on the world. And so I decided to explore three things, self-driving cars, cybersecurity and AI, and math and AI. It's like I sing it by a decreasing order of impact on the world, I guess.Swyx [00:03:01]: Discovering new math would be very foundational.Stan [00:03:03]: It is extremely foundational, but it's not as direct as driving people around.Swyx [00:03:07]: Sorry, you're doing this at Stripe, you're like thinking about your next move.Stan [00:03:09]: No, it was at Stripe, kind of a bit of time where I started exploring. I did a bunch of work with friends on trying to get RC cars to drive autonomously. Almost started a company in France or Europe about self-driving trucks. We decided to not go for it because it was probably very operational. And I think the idea of the company, of the team wasn't there. And also I realized that if I wake up a day and because of a bug I wrote, I killed a family, it would be a bad experience. And so I just decided like, no, that's just too crazy. And then I explored cybersecurity with a friend. We're trying to apply transformers to cut fuzzing. So cut fuzzing, you have kind of an algorithm that goes really fast and tries to mutate the inputs of a library to find bugs. And we tried to apply a transformer to that and do reinforcement learning with the signal of how much you propagate within the binary. Didn't work at all because the transformers are so slow compared to evolutionary algorithms that it kind of didn't work. Then I started interested in math and AI and started working on SAT solving with AI. And at the same time, OpenAI was kind of starting the reasoning team that were tackling that project as well. I was in touch with Greg and eventually got in touch with Ilya and finally found my way to OpenAI. I don't know how much you want to dig into that. 
The way to find your way to OpenAI when you're in Paris was kind of an interesting adventure as well.Swyx [00:04:33]: Please. And I want to note, this was a two-month journey. You did all this in two months.Stan [00:04:38]: The search.Swyx [00:04:40]: Your search for your next thing, because you left in July 2019 and then you joined OpenAI in September.Stan [00:04:45]: I'm going to be ashamed to say that.Swyx [00:04:47]: You were searching before. I was searching before.Stan [00:04:49]: I mean, it's normal. No, the truth is that I moved back to Paris through Stripe and I just felt the hardship of being remote from your team nine hours away. And so it kind of freed a bit of time for me to start the exploration before. Sorry, Patrick. Sorry, John.Swyx [00:05:05]: Hopefully they're listening. So you joined OpenAI from Paris and from like, obviously you had worked with Greg, but notStan [00:05:13]: anyone else. No. Yeah. So I had worked with Greg, but not Ilya, but I had started chatting with Ilya and Ilya was kind of excited because he knew that I was a good engineer through Greg, I presume, but I was not a trained researcher, didn't do a PhD, never did research. And I started chatting and he was excited all the way to the point where he was like, hey, come pass interviews, it's going to be fun. I think he didn't care where I was, he just wanted to try working together. So I go to SF, go through the interview process, get an offer. And so I get Bob McGrew on the phone for the first time, he's like, hey, Stan, it's awesome. You've got an offer. When are you coming to SF? I'm like, hey, it's awesome. I'm not coming to the SF. I'm based in Paris and we just moved. He was like, hey, it's awesome. Well, you don't have an offer anymore. Oh, my God. No, it wasn't as hard as that. But that's basically the idea. And it took me like maybe a couple more time to keep chatting and they eventually decided to try a contractor set up. And that's how I kind of started working at OpenAI, officially as a contractor, but in practice really felt like being an employee.Swyx [00:06:14]: What did you work on?Stan [00:06:15]: So it was solely focused on math and AI. And in particular in the application, so the study of the larger grid models, mathematical reasoning capabilities, and in particular in the context of formal mathematics. The motivation was simple, transformers are very creative, but yet they do mistakes. Formal math systems are of the ability to verify a proof and the tactics they can use to solve problems are very mechanical, so you miss the creativity. And so the idea was to try to explore both together. You would get the creativity of the LLMs and the kind of verification capabilities of the formal system. A formal system, just to give a little bit of context, is a system in which a proof is a program and the formal system is a type system, a type system that is so evolved that you can verify the program. If the type checks, it means that the program is correct.Swyx [00:07:06]: Is the verification much faster than actually executing the program?Stan [00:07:12]: Verification is instantaneous, basically. So the truth is that what you code in involves tactics that may involve computation to search for solutions. So it's not instantaneous. You do have to do the computation to expand the tactics into the actual proof. 
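The "a proof is a program, and the type system verifies it" idea can be illustrated with a minimal Lean 4 snippet; this is a generic illustration of propositions-as-types, not code from the formal-math work being described.

```lean
-- A proof is a program: the term below has type `p ∧ q → p`, and Lean accepting
-- it as well-typed *is* the verification of the proposition.
theorem and_left (p q : Prop) : p ∧ q → p :=
  fun h => h.left
```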
The verification of the proof at the very low level is instantaneous.Swyx [00:07:32]: How quickly do you run into like, you know, halting problem PNP type things, like impossibilities where you're just like that?Stan [00:07:39]: I mean, you don't run into it at the time. It was really trying to solve very easy problems. So I think the... Can you give an example of easy? Yeah, so that's the mass benchmark that everybody knows today. The Dan Hendricks one. The Dan Hendricks one, yeah. And I think it was the low end part of the mass benchmark at the time, because that mass benchmark includes AMC problems, AMC 8, AMC 10, 12. So these are the easy ones. Then AIME problems, somewhat harder, and some IMO problems, like Crazy Arm.Swyx [00:08:07]: For our listeners, we covered this in our Benchmarks 101 episode. AMC is literally the grade of like high school, grade 8, grade 10, grade 12. So you can solve this. Just briefly to mention this, because I don't think we'll touch on this again. There's a bit of work with like Lean, and then with, you know, more recently with DeepMind doing like scoring like silver on the IMO. Any commentary on like how math has evolved from your early work to today?Stan [00:08:34]: I mean, that result is mind blowing. I mean, from my perspective, spent three years on that. At the same time, Guillaume Lampe in Paris, we were both in Paris, actually. He was at FAIR, was working on some problems. We were pushing the boundaries, and the goal was the IMO. And we cracked a few problems here and there. But the idea of getting a medal at an IMO was like just remote. So this is an impressive result. And we can, I think the DeepMind team just did a good job of scaling. I think there's nothing too magical in their approach, even if it hasn't been published. There's a Dan Silver talk from seven days ago where it goes a little bit into more details. It feels like there's nothing magical there. It's really applying reinforcement learning and scaling up the amount of data that can generate through autoformalization. So we can dig into what autoformalization means if you want.Alessio [00:09:26]: Let's talk about the tail end, maybe, of the OpenAI. So you joined, and you're like, I'm going to work on math and do all of these things. I saw on one of your blog posts, you mentioned you fine-tuned over 10,000 models at OpenAI using 10 million A100 hours. How did the research evolve from the GPD 2, and then getting closer to DaVinci 003? And then you left just before ChatGPD was released, but tell people a bit more about the research path that took you there.Stan [00:09:54]: I can give you my perspective of it. I think at OpenAI, there's always been a large chunk of the compute that was reserved to train the GPTs, which makes sense. So it was pre-entropic splits. Most of the compute was going to a product called Nest, which was basically GPT-3. And then you had a bunch of, let's say, remote, not core research teams that were trying to explore maybe more specific problems or maybe the algorithm part of it. The interesting part, I don't know if it was where your question was going, is that in those labs, you're managing researchers. So by definition, you shouldn't be managing them. But in that space, there's a managing tool that is great, which is compute allocation. Basically by managing the compute allocation, you can message the team of where you think the priority should go. And so it was really a question of, you were free as a researcher to work on whatever you wanted. 
But if it was not aligned with OpenAI mission, and that's fair, you wouldn't get the compute allocation. As it happens, solving math was very much aligned with the direction of OpenAI. And so I was lucky to generally get the compute I needed to make good progress.Swyx [00:11:06]: What do you need to show as incremental results to get funded for further results?Stan [00:11:12]: It's an imperfect process because there's a bit of a... If you're working on math and AI, obviously there's kind of a prior that it's going to be aligned with the company. So it's much easier than to go into something much more risky, much riskier, I guess. You have to show incremental progress, I guess. It's like you ask for a certain amount of compute and you deliver a few weeks after and you demonstrate that you have a progress. Progress might be a positive result. Progress might be a strong negative result. And a strong negative result is actually often much harder to get or much more interesting than a positive result. And then it generally goes into, as any organization, you would have people finding your project or any other project cool and fancy. And so you would have that kind of phase of growing up compute allocation for it all the way to a point. And then maybe you reach an apex and then maybe you go back mostly to zero and restart the process because you're going in a different direction or something else. That's how I felt. Explore, exploit. Yeah, exactly. Exactly. Exactly. It's a reinforcement learning approach.Swyx [00:12:14]: Classic PhD student search process.Alessio [00:12:17]: And you were reporting to Ilya, like the results you were kind of bringing back to him or like what's the structure? It's almost like when you're doing such cutting edge research, you need to report to somebody who is actually really smart to understand that the direction is right.Stan [00:12:29]: So we had a reasoning team, which was working on reasoning, obviously, and so math in general. And that team had a manager, but Ilya was extremely involved in the team as an advisor, I guess. Since he brought me in OpenAI, I was lucky to mostly during the first years to have kind of a direct access to him. He would really coach me as a trainee researcher, I guess, with good engineering skills. And Ilya, I think at OpenAI, he was the one showing the North Star, right? He was his job and I think he really enjoyed it and he did it super well, was going through the teams and saying, this is where we should be going and trying to, you know, flock the different teams together towards an objective.Swyx [00:13:12]: I would say like the public perception of him is that he was the strongest believer in scaling. Oh, yeah. Obviously, he has always pursued the compression thesis. You have worked with him personally, what does the public not know about how he works?Stan [00:13:26]: I think he's really focused on building the vision and communicating the vision within the company, which was extremely useful. I was personally surprised that he spent so much time, you know, working on communicating that vision and getting the teams to work together versus...Swyx [00:13:40]: To be specific, vision is AGI? Oh, yeah.Stan [00:13:42]: Vision is like, yeah, it's the belief in compression and scanning computes. I remember when I started working on the Reasoning team, the excitement was really about scaling the compute around Reasoning and that was really the belief we wanted to ingrain in the team. 
And that's what has been useful to the team and with the DeepMind results shows that it was the right approach with the success of GPT-4 and stuff shows that it was the right approach.Swyx [00:14:06]: Was it according to the neural scaling laws, the Kaplan paper that was published?Stan [00:14:12]: I think it was before that, because those ones came with GPT-3, basically at the time of GPT-3 being released or being ready internally. But before that, there really was a strong belief in scale. I think it was just the belief that the transformer was a generic enough architecture that you could learn anything. And that was just a question of scaling.Alessio [00:14:33]: Any other fun stories you want to tell? Sam Altman, Greg, you know, anything.Stan [00:14:37]: Weirdly, I didn't work that much with Greg when I was at OpenAI. He had always been mostly focused on training the GPTs and rightfully so. One thing about Sam Altman, he really impressed me because when I joined, he had joined not that long ago and it felt like he was kind of a very high level CEO. And I was mind blown by how deep he was able to go into the subjects within a year or something, all the way to a situation where when I was having lunch by year two, I was at OpenAI with him. He would just quite know deeply what I was doing. With no ML background. Yeah, with no ML background, but I didn't have any either, so I guess that explains why. But I think it's a question about, you don't necessarily need to understand the very technicalities of how things are done, but you need to understand what's the goal and what's being done and what are the recent results and all of that in you. And we could have kind of a very productive discussion. And that really impressed me, given the size at the time of OpenAI, which was not negligible.Swyx [00:15:44]: Yeah. I mean, you've been a, you were a founder before, you're a founder now, and you've seen Sam as a founder. How has he affected you as a founder?Stan [00:15:51]: I think having that capability of changing the scale of your attention in the company, because most of the time you operate at a very high level, but being able to go deep down and being in the known of what's happening on the ground is something that I feel is really enlightening. That's not a place in which I ever was as a founder, because first company, we went all the way to 10 people. Current company, there's 25 of us. So the high level, the sky and the ground are pretty much at the same place. No, you're being too humble.Swyx [00:16:21]: I mean, Stripe was also like a huge rocket ship.Stan [00:16:23]: Stripe, I was a founder. So I was, like at OpenAI, I was really happy being on the ground, pushing the machine, making it work. Yeah.Swyx [00:16:31]: Last OpenAI question. The Anthropic split you mentioned, you were around for that. Very dramatic. David also left around that time, you left. This year, we've also had a similar management shakeup, let's just call it. Can you compare what it was like going through that split during that time? And then like, does that have any similarities now? Like, are we going to see a new Anthropic emerge from these folks that just left?Stan [00:16:54]: That I really, really don't know. At the time, the split was pretty surprising because they had been trying GPT-3, it was a success. And to be completely transparent, I wasn't in the weeds of the splits. What I understood of it is that there was a disagreement of the commercialization of that technology. 
I think the focal point of that disagreement was the fact that we started working on the API and wanted to make those models available through an API. Is that really the core disagreement? I don't know.Swyx [00:17:25]: Was it safety?Stan [00:17:26]: Was it commercialization?Swyx [00:17:27]: Or did they just want to start a company?Stan [00:17:28]: Exactly. Exactly. That I don't know. But I think what I was surprised of is how quickly OpenAI recovered at the time. And I think it's just because we were mostly a research org and the mission was so clear that some divergence in some teams, some people leave, the mission is still there. We have the compute. We have a site. So it just keeps going.Swyx [00:17:50]: Very deep bench. Like just a lot of talent. Yeah.Alessio [00:17:53]: So that was the OpenAI part of the history. Exactly. So then you leave OpenAI in September 2022. And I would say in Silicon Valley, the two hottest companies at the time were you and Lanktrain. What was that start like and why did you decide to start with a more developer focused kind of like an AI engineer tool rather than going back into some more research and something else?Stan [00:18:15]: Yeah. First, I'm not a trained researcher. So going through OpenAI was really kind of the PhD I always wanted to do. But research is hard. You're digging into a field all day long for weeks and weeks and weeks, and you find something, you get super excited for 12 seconds. And at the 13 seconds, you're like, oh, yeah, that was obvious. And you go back to digging. I'm not a trained, like formally trained researcher, and it wasn't kind of a necessarily an ambition of me of creating, of having a research career. And I felt the hardness of it. I enjoyed a lot of like that a ton. But at the time, I decided that I wanted to go back to something more productive. And the other fun motivation was like, I mean, if we believe in AGI and if we believe the timelines might not be too long, it's actually the last train leaving the station to start a company. After that, it's going to be computers all the way down. And so that was kind of the true motivation for like trying to go there. So that's kind of the core motivation at the beginning of personally. And the motivation for starting a company was pretty simple. I had seen GPT-4 internally at the time, it was September 2022. So it was pre-GPT, but GPT-4 was ready since, I mean, I'd been ready for a few months internally. I was like, okay, that's obvious, the capabilities are there to create an insane amount of value to the world. And yet the deployment is not there yet. The revenue of OpenAI at the time were ridiculously small compared to what it is today. So the thesis was, there's probably a lot to be done at the product level to unlock the usage.Alessio [00:19:49]: Yeah. Let's talk a bit more about the form factor, maybe. I think one of the first successes you had was kind of like the WebGPT-like thing, like using the models to traverse the web and like summarize things. And the browser was really the interface. Why did you start with the browser? Like what was it important? And then you built XP1, which was kind of like the browser extension.Stan [00:20:09]: So the starting point at the time was, if you wanted to talk about LLMs, it was still a rather small community, a community of mostly researchers and to some extent, very early adopters, very early engineers. 
It was almost inconceivable to just build a product and go sell it to the enterprise, though at the time there was a few companies doing that. The one on marketing, I don't remember its name, Jasper. But so the natural first intention, the first, first, first intention was to go to the developers and try to create tooling for them to create product on top of those models. And so that's what Dust was originally. It was quite different than Lanchain, and Lanchain just beat the s**t out of us, which is great. It's a choice.Swyx [00:20:53]: You were cloud, in closed source. They were open source.Stan [00:20:56]: Yeah. So technically we were open source and we still are open source, but I think that doesn't really matter. I had the strong belief from my research time that you cannot create an LLM-based workflow on just one example. Basically, if you just have one example, you overfit. So as you develop your interaction, your orchestration around the LLM, you need a dozen examples. Obviously, if you're running a dozen examples on a multi-step workflow, you start paralyzing stuff. And if you do that in the console, you just have like a messy stream of tokens going out and it's very hard to observe what's going there. And so the idea was to go with an UI so that you could kind of introspect easily the output of each interaction with the model and dig into there through an UI, which is-Swyx [00:21:42]: Was that open source? I actually didn't come across it.Stan [00:21:44]: Oh yeah, it wasn't. I mean, Dust is entirely open source even today. We're not going for an open source-Swyx [00:21:48]: If it matters, I didn't know that.Stan [00:21:49]: No, no, no, no, no. The reason why is because we're not open source because we're not doing an open source strategy. It's not an open source go-to-market at all. We're open source because we can and it's fun.Swyx [00:21:59]: Open source is marketing. You have all the downsides of open source, which is like people can clone you.Stan [00:22:03]: But I think that downside is a big fallacy. Okay. Yes, anybody can clone Dust today, but the value of Dust is not the current state. The value of Dust is the number of eyeballs and hands of developers that are creating to it in the future. And so yes, anybody can clone it today, but that wouldn't change anything. There is some value in being open source. In a discussion with the security team, you can be extremely transparent and just show the code. When you have discussion with users and there's a bug or a feature missing, you can just point to the issue, show the pull request, show the, show the, exactly, oh, PR welcome. That doesn't happen that much, but you can show the progress if the person that you're chatting with is a little bit technical, they really enjoy seeing the pull request advancing and seeing all the way to deploy. And then the downsides are mostly around security. You never want to do security by obfuscation. But the truth is that your vector of attack is facilitated by you being open source. But at the same time, it's a good thing because if you're doing anything like a bug bountying or stuff like that, you just give much more tools to the bug bountiers so that their output is much better. So there's many, many, many trade-offs. I don't believe in the value of the code base per se. I think it's really the people that are on the code base that have the value and go to market and the product and all of those things that are around the code base. Obviously, that's not true for every code base. 
If you're working on a very secret kernel to accelerate the inference of LLMs, I would buy that you don't want to be open source. But for product stuff, I really think there's very little risk. Yeah.Alessio [00:23:39]: I signed up for XP1, I was looking, January 2023. I think at the time you were on DaVinci 003. Given that you had seen GPD 4, how did you feel having to push a product out that was using this model that was so inferior? And you're like, please, just use it today. I promise it's going to get better. Just overall, as a founder, how do you build something that maybe doesn't quite work with the model today, but you're just expecting the new model to be better?Stan [00:24:03]: Yeah, so actually, XP1 was even on a smaller one that was the post-GDPT release, small version, so it was... Ada, Babbage... No, no, no, not that far away. But it was the small version of GDPT, basically. I don't remember its name. Yes, you have a frustration there. But at the same time, I think XP1 was designed, was an experiment, but was designed as a way to be useful at the current capability of the model. If you just want to extract data from a LinkedIn page, that model was just fine. If you want to summarize an article on a newspaper, that model was just fine. And so it was really a question of trying to find a product that works with the current capability, knowing that you will always have tailwinds as models get better and faster and cheaper. So that was kind of a... There's a bit of a frustration because you know what's out there and you know that you don't have access to it yet. It's also interesting to try to find a product that works with the current capability.Alessio [00:24:55]: And we highlighted XP1 in our anatomy of autonomy post in April of last year, which was, you know, where are all the agents, right? So now we spent 30 minutes getting to what you're building now. So you basically had a developer framework, then you had a browser extension, then you had all these things, and then you kind of got to where Dust is today. So maybe just give people an overview of what Dust is today and the courtesies behind it. Yeah, of course.Stan [00:25:20]: So Dust, we really want to build the infrastructure so that companies can deploy agents within their teams. We are horizontal by nature because we strongly believe in the emergence of use cases from the people having access to creating an agent that don't need to be developers. They have to be thinkers. They have to be curious. But anybody can create an agent that will solve an operational thing that they're doing in their day-to-day job. And to make those agents useful, there's two focus, which is interesting. The first one is an infrastructure focus. You have to build the pipes so that the agent has access to the data. You have to build the pipes such that the agents can take action, can access the web, et cetera. So that's really an infrastructure play. Maintaining connections to Notion, Slack, GitHub, all of them is a lot of work. It is boring work, boring infrastructure work, but that's something that we know is extremely valuable in the same way that Stripe is extremely valuable because it maintains the pipes. And we have that dual focus because we're also building the product for people to use it. And there it's fascinating because everything started from the conversational interface, obviously, which is a great starting point. But we're only scratching the surface, right? I think we are at the pong level of LLM productization. 
And we haven't invented the C3. We haven't invented Counter-Strike. We haven't invented Cyberpunk 2077. So this is really our mission is to really create the product that lets people equip themselves to just get away all the work that can be automated or assisted by LLMs.Alessio [00:26:57]: And can you just comment on different takes that people had? So maybe the most open is like auto-GPT. It's just kind of like just trying to do anything. It's like it's all magic. There's no way for you to do anything. Then you had the ADAPT, you know, we had David on the podcast. They're very like super hands-on with each individual customer to build super tailored. How do you decide where to draw the line between this is magic? This is exposed to you, especially in a market where most people don't know how to build with AI at all. So if you expect them to do the thing, they're probably not going to do it. Yeah, exactly.Stan [00:27:29]: So the auto-GPT approach obviously is extremely exciting, but we know that the agentic capability of models are not quite there yet. It just gets lost. So we're starting, we're starting where it works. Same with the XP one. And where it works is pretty simple. It's like simple workflows that involve a couple tools where you don't even need to have the model decide which tools it's used in the sense of you just want people to put it in the instructions. It's like take that page, do that search, pick up that document, do the work that I want in the format I want, and give me the results. There's no smartness there, right? In terms of orchestrating the tools, it's mostly using English for people to program a workflow where you don't have the constraint of having compatible API between the two.Swyx [00:28:17]: That kind of personal automation, would you say it's kind of like an LLM Zapier type ofStan [00:28:22]: thing?Swyx [00:28:22]: Like if this, then that, and then, you know, do this, then this. You're programming with English?Stan [00:28:28]: So you're programming with English. So you're just saying, oh, do this and then that. You can even create some form of APIs. You say, when I give you the command X, do this. When I give you the command Y, do this. And you describe the workflow. But you don't have to create boxes and create the workflow explicitly. It just needs to describe what are the tasks supposed to be and make the tool available to the agent. The tool can be a semantic search. The tool can be querying into a structured database. The tool can be searching on the web. And obviously, the interesting tools that we're only starting to scratch are actually creating external actions like reimbursing something on Stripe, sending an email, clicking on a button in the admin or something like that.Swyx [00:29:11]: Do you maintain all these integrations?Stan [00:29:13]: Today, we maintain most of the integrations. We do always have an escape hatch for people to kind of custom integrate. But the reality is that the reality of the market today is that people just want it to work, right? And so it's mostly us maintaining the integration. As an example, a very good source of information that is tricky to productize is Salesforce. Because Salesforce is basically a database and a UI. And they do the f**k they want with it. And so every company has different models and stuff like that. So right now, we don't support it natively. And the type of support or real native support will be slightly more complex than just osing into it, like is the case with Slack as an example. 
Because it's probably going to be, oh, you want to connect your Salesforce to us? Give us the SQL. That's the Salesforce QL language. Give us the queries you want us to run on it and inject in the context of dust. So that's interesting how not only integrations are cool, and some of them require a bit of work on the user. And for some of them that are really valuable to our users, but we don't support yet, they can just build them internally and push the data to us.Swyx [00:30:18]: I think I understand the Salesforce thing. But let me just clarify, are you using browser automation because there's no API for something?Stan [00:30:24]: No, no, no, no. In that case, so we do have browser automation for all the use cases and apply the public web. But for most of the integration with the internal system of the company, it really runs through API.Swyx [00:30:35]: Haven't you felt the pull to RPA, browser automation, that kind of stuff?Stan [00:30:39]: I mean, what I've been saying for a long time, maybe I'm wrong, is that if the future is that you're going to stand in front of a computer and looking at an agent clicking on stuff, then I'll hit my computer. And my computer is a big Lenovo. It's black. Doesn't sound good at all compared to a Mac. And if the APIs are there, we should use them. There is going to be a long tail of stuff that don't have APIs, but as the world is moving forward, that's disappearing. So the core API value in the past has really been, oh, this old 90s product doesn't have an API. So I need to use the UI to automate. I think for most of the ICP companies, the companies that ICP for us, the scale ups that are between 500 and 5,000 people, tech companies, most of the SaaS they use have APIs. Now there's an interesting question for the open web, because there are stuff that you want to do that involve websites that don't necessarily have APIs. And the current state of web integration from, which is us and OpenAI and Anthropic, I don't even know if they have web navigation, but I don't think so. The current state of affair is really, really broken because you have what? You have basically search and headless browsing. But headless browsing, I think everybody's doing basically body.innertext and fill that into the model, right?Swyx [00:31:56]: MARK MIRCHANDANI There's parsers into Markdown and stuff.Stan [00:31:58]: FRANCESC CAMPOY I'm super excited by the companies that are exploring the capability of rendering a web page into a way that is compatible for a model, being able to maintain the selector. So that's basically the place where to click in the page through that process, expose the actions to the model, have the model select an action in a way that is compatible with model, which is not a big page of a full DOM that is very noisy, and then being able to decompress that back to the original page and take the action. And that's something that is really exciting and that will kind of change the level of things that agents can do on the web. That I feel exciting, but I also feel that the bulk of the useful stuff that you can do within the company can be done through API. The data can be retrieved by API. The actions can be taken through API.Swyx [00:32:44]: For listeners, I'll note that you're basically completely disagreeing with David Wan. FRANCESC CAMPOY Exactly, exactly. I've seen it since it's summer. ADEPT is where it is, and Dust is where it is. So Dust is still standing.Alessio [00:32:55]: Can we just quickly comment on function calling? 
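The "body.innertext and fill that into the model" baseline described above might look like the sketch below, using Playwright as an assumed headless browser; the function name and choice of library are illustrative, not Dust's actual implementation.

```python
# The naive headless-browsing baseline: render the page, grab body.innerText,
# and stuff it into the model's context.
from playwright.sync_api import sync_playwright

def page_text(url: str) -> str:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url)
        text = page.inner_text("body")  # the "body.innertext" shortcut
        browser.close()
    return text
```

This loses layout, links, and clickable selectors, which is exactly the gap the selector-preserving page representations discussed next try to close.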
You mentioned you don't need the models to be that smart to actually pick the tools. Have you seen the models not be good enough? Or is it just like, you just don't want to put the complexity in there? Like, is there any room for improvement left in function calling? Or do you feel you usually consistently get always the right response, the right parametersStan [00:33:13]: and all of that?Alessio [00:33:13]: FRANCESC CAMPOY So that's a tricky product question.Stan [00:33:15]: Because if the instructions are good and precise, then you don't have any issue, because it's scripted for you. And the model will just look at the scripts and just follow and say, oh, he's probably talking about that action, and I'm going to use it. And the parameters are kind of abused from the state of the conversation. I'll just go with it. If you provide a very high level, kind of an auto-GPT-esque level in the instructions and provide 16 different tools to your model, yes, we're seeing the models in that state making mistakes. And there is obviously some progress can be made on the capabilities. But the interesting part is that there is already so much work that can assist, augment, accelerate by just going with pretty simply scripted for actions agents. What I'm excited about by pushing our users to create rather simple agents is that once you have those working really well, you can create meta agents that use the agents as actions. And all of a sudden, you can kind of have a hierarchy of responsibility that will probably get you almost to the point of the auto-GPT value. It requires the construction of intermediary artifacts, but you're probably going to be able to achieve something great. I'll give you some example. We have our incidents are shared in Slack in a specific channel, or shipped are shared in Slack. We have a weekly meeting where we have a table about incidents and shipped stuff. We're not writing that weekly meeting table anymore. We have an assistant that just go find the right data on Slack and create the table for us. And that assistant works perfectly. It's trivially simple, right? Take one week of data from that channel and just create the table. And then we have in that weekly meeting, obviously some graphs and reporting about our financials and our progress and our ARR. And we've created assistants to generate those graphs directly. And those assistants works great. By creating those assistants that cover those small parts of that weekly meeting, slowly we're getting to in a world where we'll have a weekly meeting assistance. We'll just call it. You don't need to prompt it. You don't need to say anything. It's going to run those different assistants and get that notion page just ready. And by doing that, if you get there, and that's an objective for us to us using Dust, get there, you're saving an hour of company time every time you run it. Yeah.Alessio [00:35:28]: That's my pet topic of NPM for agents. How do you build dependency graphs of agents? And how do you share them? Because why do I have to rebuild some of the smaller levels of what you built already?Swyx [00:35:40]: I have a quick follow-up question on agents managing other agents. It's a topic of a lot of research, both from Microsoft and even in startups. What you've discovered best practice for, let's say like a manager agent controlling a bunch of small agents. It's two-way communication. 
I don't know if there should be a protocol format.Stan [00:35:59]: To be completely honest, the state we are at right now is creating the simple agents. So we haven't even explored yet the meta agents. We know it's there. We know it's going to be valuable. We know it's going to be awesome. But we're starting there because it's the simplest place to start. And it's also what the market understands. If you go to a company, random SaaS B2B company, not necessarily specialized in AI, and you take an operational team and you tell them, build some tooling for yourself, they'll understand the small agents. If you tell them, build AutoGP, they'll be like, Auto what?Swyx [00:36:31]: And I noticed that in your language, you're very much focused on non-technical users. You don't really mention API here. You mention instruction instead of system prompt, right? That's very conscious.Stan [00:36:41]: Yeah, it's very conscious. It's a mark of our designer, Ed, who kind of pushed us to create a friendly product. I was knee-deep into AI when I started, obviously. And my co-founder, Gabriel, was a Stripe as well. We started a company together that got acquired by Stripe 15 years ago. It was at Alain, a healthcare company in Paris. After that, it was a little bit less so knee-deep in AI, but really focused on product. And I didn't realize how important it is to make that technology not scary to end users. It didn't feel scary to me, but it was really seen by Ed, our designer, that it was feeling scary to the users. And so we were very proactive and very deliberate about creating a brand that feels not too scary and creating a wording and a language, as you say, that really tried to communicate the fact that it's going to be fine. It's going to be easy. You're going to make it.Alessio [00:37:34]: And another big point that David had about ADAPT is we need to build an environment for the agents to act. And then if you have the environment, you can simulate what they do. How's that different when you're interacting with APIs and you're kind of touching systems that you cannot really simulate? If you call it the Salesforce API, you're just calling it.Stan [00:37:52]: So I think that goes back to the DNA of the companies that are very different. ADAPT, I think, was a product company with a very strong research DNA, and they were still doing research. One of their goals was building a model. And that's why they raised a large amount of money, et cetera. We are 100% deliberately a product company. We don't do research. We don't train models. We don't even run GPUs. We're using the models that exist, and we try to push the product boundary as far as possible with the existing models. So that creates an issue. Indeed, so to answer your question, when you're interacting in the real world, well, you cannot simulate, so you cannot improve the models. Even improving your instructions is complicated for a builder. The hope is that you can use models to evaluate the conversations so that you can get at least feedback and you could get contradictive information about the performance of the assistance. But if you take actual trace of interaction of humans with those agents, it is even for us humans extremely hard to decide whether it was a productive interaction or a really bad interaction. You don't know why the person left. You don't know if they left happy or not. So being extremely, extremely, extremely pragmatic here, it becomes a product issue. 
We have to build a product that gets the end users to provide feedback so that, as a first step, the person that is building the agent can iterate on it. As a second step, maybe later when we start training models and post-training, et cetera, we can optimize around that for each of those companies. Yeah.Alessio [00:39:17]: Do you see in the future products offering kind of like a simulation environment, the same way all SaaS now kind of offers APIs to build programmatically? Like in cybersecurity, there are a lot of companies working on building simulation environments so that you can then use agents to red team, but I haven't really seen that.Stan [00:39:34]: Yeah, no, me neither. That's a super interesting question. I think it's really going to depend on how much, because you need to simulate to generate data, and you need data to train models. And the question at the end is, are we going to be training models or are we just going to be using frontier models as they are? On that question, I don't have a strong opinion. It might be the case that we'll be training models because in all of those AI-first products, the model is so close to the product surface that as you get big and you want to really own your product, you're going to have to own the model as well. Owning the model doesn't mean doing the pre-training, that would be crazy. But at least having an internal post-training realignment loop makes a lot of sense. And so if we see many companies going towards that all the time, then there might be incentives for the SaaS providers of the world to provide assistance in getting there. But at the same time, there's a tension because those SaaS products don't want to be interacted with by agents, they want the human to click on the button. Yeah, they've got to sell seats. Exactly.Swyx [00:40:41]: Just a quick question on models. I'm sure you've used many, probably not just OpenAI. Would you characterize some models as better than others? Do you use any open source models? What have been the trends in models over the last two years?Stan [00:40:53]: We've seen over the past two years kind of a bit of a race in between models. At times, it's the OpenAI model that is the best. At times, it's the Anthropic models that are the best. Our take on that is that we are agnostic and we let our users pick their model. Oh, they choose? Yeah, so when you create an assistant or an agent, you can just say, oh, I'm going to run it on GPT-4, GPT-4 Turbo, or...Swyx [00:41:16]: Don't you think for the non-technical user, that is actually an abstraction that you should take away from them?Stan [00:41:20]: We have a sane default. So we move the default to the latest model that is cool. And we have a sane default, and it's actually not very visible. In our flow to create an agent, you would have to go into Advanced and pick your model. So this is something that the technical person will care about. But that's something that obviously is a bit too complicated for the...Swyx [00:41:40]: And do you care most about function calling or instruction following or something else?Stan [00:41:44]: I think we care most about function calling because you want to... There's nothing worse than a function call including incorrect parameters or being a bit off, because it just drives the whole interaction off.Swyx [00:41:56]: Yeah, so you've got the Berkeley function calling leaderboard.Stan [00:42:00]: These days, it's funny how the comparison between GPT-4o and GPT-4 Turbo is still up in the air on function calling.
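Since function calling is what matters most here, a tiny eval harness makes the "GPT-4 Turbo vs GPT-4o" question measurable. This is a sketch, not Dust's eval process: the fixtures, the tool definition, and the scoring are invented, and a real harness would also check the arguments, not just the tool name.

```typescript
import OpenAI from "openai";

const openai = new OpenAI();

// Hypothetical fixtures: prompts paired with the tool we expect the model to pick
// (null means the model should answer directly, without calling a tool).
const fixtures: { prompt: string; expectedTool: string | null }[] = [
  { prompt: "Build the incident table for the last 7 days of #incidents", expectedTool: "fetch_slack_history" },
  { prompt: "What does ARR stand for?", expectedTool: null },
];

async function toolAccuracy(model: string): Promise<number> {
  let correct = 0;
  for (const fixture of fixtures) {
    const response = await openai.chat.completions.create({
      model,
      messages: [{ role: "user", content: fixture.prompt }],
      tools: [
        {
          type: "function",
          function: {
            name: "fetch_slack_history",
            description: "Fetch the last N days of messages from a Slack channel",
            parameters: {
              type: "object",
              properties: { channel: { type: "string" }, days: { type: "number" } },
              required: ["channel", "days"],
            },
          },
        },
      ],
    });
    const chosen = response.choices[0].message.tool_calls?.[0]?.function.name ?? null;
    if (chosen === fixture.expectedTool) correct += 1;
  }
  return correct / fixtures.length;
}

// Compare the two models under discussion:
// console.log(await toolAccuracy("gpt-4-turbo"), await toolAccuracy("gpt-4o"));
```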
I personally don't have proof, but I know many people, and I'm probably part of them, who think that GPT-4 Turbo is still better than GPT-4o on function calling. Wow. We'll see what comes out of the O1 class if it ever gets function calling. And Claude 3.5 Sonnet is great as well. They kind of innovated in an interesting way, which was never quite publicized. It's that they have that kind of chain-of-thought step whenever you use a Claude model or Sonnet model with function calling. That chain-of-thought step doesn't exist when you just interact with it for answering questions. But when you use function calling, you get that step, and it really helps get better function calling.Swyx [00:42:43]: Yeah, we actually just recorded a podcast with the Berkeley team that runs that leaderboard this week. So they just released V3.Stan [00:42:49]: Yeah.Swyx [00:42:49]: It was V1 like two months ago, and then V2, V3. Turbo is on top.Stan [00:42:53]: Turbo is on top. Turbo is over 4o.Swyx [00:42:54]: And then in third place is XLAM from Salesforce, which is a large action model they've been trying to popularize.Stan [00:43:01]: Yep.Swyx [00:43:01]: O1 Mini is actually on here, I think. O1 Mini is number 11.Stan [00:43:05]: But arguably, O1 Mini has been in a line for that. Yeah.Alessio [00:43:09]: Do you use leaderboards? Do you have your own evals? I mean, this is kind of intuitive, right? Like using the older model is better. I think most people just upgrade. Yeah. What's the eval process like?Stan [00:43:19]: It's funny because I've been doing research for three years, and we have bigger stuff to cook. When you're deploying in a company, one thing where we really spike is that when we manage to activate the company, we have crazy penetration. The highest penetration we have is 88% daily active users within the entire employee base of the company. The kind of average penetration and activation we have in our current enterprise customers is more like 60% to 70% weekly active. So we basically have the entire company interacting with us. And when you're there, there is so much stuff that matters more than getting evals, getting the best model. Because there are so many places where you can create products or do stuff that will give you the 80% with the work you do. Whereas deciding if it's GPT-4 or GPT-4 Turbo or et cetera, you know, it'll just give you the 5% improvement. But the reality is that you want to focus on the places where you can really change the direction or change the interaction more drastically. But that's something that we'll have to do eventually because we still want to be serious people.Swyx [00:44:24]: It's funny because in some ways, the model labs are competing for you, right? You don't have to do any effort. You just switch model and then it'll grow. What are you really limited by? Is it additional sources?Stan [00:44:36]: It's not models, right?Swyx [00:44:37]: You're not really limited by quality of model.Stan [00:44:40]: Right now, we are limited by the infrastructure part, which is the ability for users to connect easily to all the data they need to do the job they want to do.Swyx [00:44:51]: Because you maintain all your own stuff.Stan [00:44:53]: You know, there are companies out thereSwyx [00:44:54]: that are starting to provide integrations as a service, right? I used to work in an integrations company.
Yeah, I know.Stan [00:44:59]: It's just that there are some intricacies about how you chunk stuff and how you process information from one platform to the other. If you look at one end of the spectrum, you could say, oh, I'm going to support Airbyte, and Airbyte has- I used to work at Airbyte.Swyx [00:45:12]: Oh, really?Stan [00:45:13]: That makes sense.Swyx [00:45:14]: They're the French founders as well.Stan [00:45:15]: I know Jean very well. I'm seeing him today. And the reality is that if you look at Notion, Airbyte does the job of taking Notion and putting it in a structured way. But the way it does it is not really usable to actually make it available to models in a useful way. Because you get all the blocks, details, et cetera, which is useful for many use cases.Swyx [00:45:35]: It's also for data scientists and not for AI.Stan [00:45:38]: The reality of Notion is that sometimes you have a- so when you have a page, there's a lot of structure in it and you want to capture the structure and chunk the information in a way that respects that structure. In Notion, you have databases. Sometimes those databases are real tabular data. Sometimes those databases are full of text. You want to get the distinction and understand that this database should be considered like text information, whereas this other one is actually quantitative information. And to really get a very high quality interaction with that piece of information, I haven't found a solution that will work without us owning the connection end-to-end.Swyx [00:46:15]: That's why I don't invest in, there's Composio, there's All Hands from Graham Neubig. There's all these other companies that are like, we will do the integrations for you. You just, we have the open source community. We'll do off the shelf. But then you are so specific in your needs that you want to own it.Swyx [00:46:28]: Yeah, exactly.Stan [00:46:29]: You can talk to Michel about that.Swyx [00:46:30]: You know, he wants to put the AI in there, but you know. Yeah, I will. I will.Stan [00:46:35]: Cool. What are we missing?Alessio [00:46:36]: You know, what are the things that are sneakily hard that you're tackling, that maybe people don't even realize are really hard?Stan [00:46:43]: The really hard part, as we kind of touched on throughout the conversation, is building the infra that works for those agents, because it's a tenuous walk. It's an evergreen piece of work because you always have an extra integration that will be useful to a non-negligible set of your users. What I'm super excited about is that there are so many interactions that shouldn't be conversational interactions and that could be very useful. Basically, know that we have the firehose of information of those companies, and there are not going to be that many companies that capture the firehose of information. When you have the firehose of information, you can do a ton of stuff with models that is not just accelerating people, but giving them superhuman capability, even with the current model capability, because you can just sift through much more information. An example is documentation repair. If I have the firehose of Slack messages and new Notion pages, and somebody says, I own that page, I want to be updated when there is a piece of information that should update that page, this is not possible today. You get an email saying, oh, look at that Slack message. It says the opposite of what you have in that paragraph. Maybe you want to update it or just ping that person.
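The tabular-versus-text distinction described above for Notion databases can be approximated with a simple heuristic. This is a rough sketch, not how Dust's connectors actually work; the threshold and the CSV-style rendering are arbitrary illustrations of structure-aware chunking.

```typescript
interface NotionDatabase {
  id: string;
  rows: Record<string, string>[]; // column name -> cell value, already fetched
}

type DatabaseKind = "tabular" | "textual";

// Crude heuristic: if a meaningful share of cells hold long prose, treat the
// database as text to be chunked; otherwise keep it together as a table.
function classifyDatabase(db: NotionDatabase): DatabaseKind {
  const values = db.rows.flatMap((row) => Object.values(row));
  if (values.length === 0) return "tabular";
  const longCells = values.filter((value) => value.length > 200).length;
  return longCells / values.length > 0.3 ? "textual" : "tabular";
}

function chunkDatabase(db: NotionDatabase): string[] {
  if (classifyDatabase(db) === "tabular") {
    // Quantitative data: render the whole table as one CSV-like chunk so the
    // model sees it as data, not prose.
    const header = Object.keys(db.rows[0] ?? {}).join(",");
    const body = db.rows.map((row) => Object.values(row).join(",")).join("\n");
    return [header + "\n" + body];
  }
  // Textual database: one chunk per row, preserving the page-level structure.
  return db.rows.map((row) =>
    Object.entries(row)
      .map(([column, value]) => `${column}: ${value}`)
      .join("\n")
  );
}
```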
I think there is a lot to be explored on the product layer in terms of what it means to interact productively with those models. And that's a problem that's extremely hard and extremely exciting.Swyx [00:48:00]: One thing you keep mentioning about infra work, obviously, Dust is building that infra and serving that in a very consumer-friendly way. You always talk about infra being additional sources, additional connectors. That is very important. But I'm also interested in the vertical infra. There is an orchestrator underlying all these things where you're doing asynchronous work. For example, the simplest one is a cron job. You just schedule things. But also, for if this and that, you have to wait for something to be executed and proceed to the next task. I used to work on an orchestrator as well, Temporal.Stan [00:48:31]: We used Temporal. Oh, you used Temporal? Yeah. Oh, how was the experience?Swyx [00:48:34]: I need the NPS.Stan [00:48:36]: We're doing a self-discovery call now.Swyx [00:48:39]: But you can also complain to me because I don't work there anymore.Stan [00:48:42]: No, we love Temporal. There's some edges that are a bit rough, surprisingly rough. And you would say, why is it so complicated?Swyx [00:48:49]: It's always versioning.Stan [00:48:50]: Yeah, stuff like that. But we really love it. And we use it for exactly what you said, like managing the entire set of stuff that needs to happen so that in semi-real time, we get all the updates from Slack or Notion or GitHub into the system. And whenever we see that piece of information goes through, maybe trigger workflows to run agents because they need to provide alerts to users and stuff like that. And Temporal is great. Love it.Swyx [00:49:17]: You haven't evaluated others. You don't want to build your own. You're happy with...Stan [00:49:21]: Oh, no, we're not in the business of replacing Temporal. And Temporal is so... I mean, it is or any other competitive product. They're very general. If it's there, there's an interesting theory about buy versus build. I think in that case, when you're a high-growth company, your buy-build trade-off is very much on the side of buy. Because if you have the capability, you're just going to be saving time, you can focus on your core competency, etc. And it's funny because we're seeing, we're starting to see the post-high-growth company, post-SKF company, going back on that trade-off, interestingly. So that's the cloud news about removing Zendesk and Salesforce. Do you believe that, by the way?Alessio [00:49:56]: Yeah, I did a podcast with them.Stan [00:49:58]: Oh, yeah?Alessio [00:49:58]: It's true.Swyx [00:49:59]: No, no, I know.Stan [00:50:00]: Of course they say it's true,Swyx [00:50:00]: but also how well is it going to go?Stan [00:50:02]: So I'm not talking about deflecting the customer traffic. I'm talking about building AI on top of Salesforce and Zendesk, basically, if I understand correctly. And all of a sudden, your product surface becomes much smaller because you're interacting with an AI system that will take some actions. And so all of a sudden, you don't need the product layer anymore. And you realize that, oh, those things are just databases that I pay a hundred times the price, right? Because you're a post-SKF company and you have tech capabilities, you are incentivized to reduce your costs and you have the capability to do so. And then it makes sense to just scratch the SaaS away. 
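The Temporal usage described above, keeping connectors fresh in semi-real time and triggering agents on new information, looks roughly like the workflow below. It uses the real Temporal TypeScript SDK, but the activity names (fetchSlackUpdates, upsertDocuments, runAlertAgents) and the polling cadence are hypothetical placeholders for Dust's actual connector code.

```typescript
import { proxyActivities, sleep, continueAsNew } from "@temporalio/workflow";
import type * as activities from "./activities"; // hypothetical activity implementations

const { fetchSlackUpdates, upsertDocuments, runAlertAgents } =
  proxyActivities<typeof activities>({ startToCloseTimeout: "5 minutes" });

// Long-running sync: poll a connector, index whatever changed, and let agents
// react (e.g. "this Slack message contradicts that Notion page").
export async function slackSyncWorkflow(channelId: string): Promise<void> {
  for (let iterations = 0; iterations < 1000; iterations++) {
    const updates = await fetchSlackUpdates(channelId);
    if (updates.length > 0) {
      await upsertDocuments(updates); // keep the retrieval index fresh
      await runAlertAgents(updates); // notify page owners, build alerts, etc.
    }
    await sleep("1 minute"); // durable timer; survives worker restarts
  }
  // Restart with a clean history so the workflow can run indefinitely.
  await continueAsNew<typeof slackSyncWorkflow>(channelId);
}
```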
So it's interesting that we might see kind of a bad time for SaaS in post-hyper-growth tech companies. So it's still a big market, but it's not that big because if you're not a tech company, you don't have the capabilities to reduce that cost. If you're a high-growth company, always going to be buying because you go faster with that. But that's an interesting new space, new category of companies that might remove some SaaS. Yeah, Alessio's firmSwyx [00:51:02]: has an interesting thesis on the future of SaaS in AI.Alessio [00:51:05]: Service as a software, we call it. It's basically like, well, the most extreme is like, why is there any software at all? You know, ideally, it's all a labor interface where you're asking somebody to do something for you, whether that's a person, an AI agent or whatnot.Stan [00:51:17]: Yeah, yeah, that's interesting. I have to ask.Swyx [00:51:19]: Are you paying for Temporal Cloud or are you self-hosting?Stan [00:51:22]: Oh, no, no, we're paying, we're paying. Oh, okay, interesting.Swyx [00:51:24]: We're paying way too much.Stan [00:51:26]: It's crazy expensive, but it makes us-Swyx [00:51:28]: That's why as a shareholder, I like to hear that. It makes us go faster,Stan [00:51:31]: so we're happy to pay.Swyx [00:51:33]: Other things in the infrastack, I just want a list for other founders to think about. Ops, API gateway, evals, you know, anything interesting there that you build or buy?Stan [00:51:41]: I mean, there's always an interesting question. We've been building a lot around the interface between models and because Dust, the original version, was an orchestration platform and we basically provide a unified interface to every model providers.Swyx [00:51:56]: That's what I call gateway.Stan [00:51:57]: That we add because Dust was that and so we continued building upon and we own it. But that's an interesting question was in you, you want to build that or buy it?Swyx [00:52:06]: Yeah, I always say light LLM is the current open source consensus.Stan [00:52:09]: Exactly, yeah. There's an interesting question there.Swyx [00:52:12]: Ops, Datadog, just tracking.Stan [00:52:14]: Oh yeah, so Datadog is an obvious... What are the mistakes that I regret? I started as pure JavaScript, not TypeScript, and I think you want to, if you're wondering, oh, I want to go fast, I'll do a little bit of JavaScript. No, don't, just start with TypeScript. I see, okay.Swyx [00:52:30]: So interesting, you are a research engineer that came out of OpenAI that bet on TypeScript.Stan [00:52:36]: Well, the reality is that if you're building a product, you're going to be doing a lot of JavaScript, right? And Next, we're using Next as an example. It's

PBE Podcast
141: Induced Seismicity in the Permian Basin

PBE Podcast

Play Episode Listen Later Oct 16, 2024


Norman is a versatile Geologic Consultant with 15+ years of success in exploration, development, and petroleum geology. History of success in on-site support, drilling strategy optimization, and team mentorship. Expertise in wireline log interpretation, H2S risk management, and thorough well-logging in diverse basins worldwide. Skilled in productivity assessments, geologic modeling, and data accuracy, driving impactful contributions to production and exploration projects. Adept at instructing, leading, and mentoring new team members, elevating their expertise and professional development. Leverage advanced knowledge in petroleum production estimation software, enabling precise petrophysical analysis, reservoir characterization, and forward-thinking production forecasting.

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
20VC: Bret Taylor: The AI Bubble and What Happens Now | How the Cost of Chips and Models Will Change in AI | Will Companies Build Their Own Software | Why Pre-Training is for Morons | Leaderships Lessons from Mark Zuckerberg

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch

Play Episode Listen Later Oct 2, 2024 70:39


Bret Taylor is CEO and Co-Founder of Sierra, a conversational AI platform for businesses. Previously, he served as Co-CEO of Salesforce. Prior to Salesforce, Bret founded Quip and was CTO of Facebook. He started his career at Google, where he co-created Google Maps. Bret serves on the board of OpenAI. In Today's Discussion with Bret Taylor: 1. The Biggest Misconceptions About AI Today: Does Bret believe we are in an AI bubble or not? Why does Bret believe it is BS that companies will all use AI to build their own software? What does no one realise about the cost of compute today in a world of AI? 2. Foundation Models: The Fastest Depreciating Asset in History? As a board member of OpenAI, does Bret agree that foundation models are the fastest depreciating asset in history? Will every application be subsumed by foundation models? What will be standalone? How does Bret think about the price dumping we are seeing in the foundation model landscape? Does Bret believe we will continue to see small foundation model companies (Character, Adept, Inflection) be acquired by larger incumbents? 3. The Biggest Opportunity in AI Today: The Death of the Phone + Website: What does Bret believe are the biggest opportunities in the application layer of AI today? Why does Bret put forward the case that we will continue to see the role of the phone reduce in consumer lives? How does AI make that happen? What does Bret mean when he says we are moving from a world of software rules to guardrails? What does AI mean for the future of websites? How does Bret expect consumers to interact with their favourite brands in 10 years? 4. Bret Taylor: Ask Me Anything: Zuck, Leadership, Fundraising: Bret has worked with Zuck, Tobi @ Shopify, Marc Benioff and more, what are his biggest lessons from each of them on great leadership? How did Bret come to choose Peter @ Benchmark to lead his first round? What advice does Bret have to other VCs on how to be a great VC? Bret is on the board of OpenAI, what have been his biggest lessons from OpenAI on what it takes to be a great board member?  

Flipping Houses for Rookies
Episode #426 Analyzing Creative Real Estate Deals Easily!

Flipping Houses for Rookies

Play Episode Listen Later Sep 30, 2024 73:44


Assessing a potential creative real estate deal involves taking the right steps, all of which lead to successful deals that don't put your personal finances at risk after the deal is made. That means being Adept at quickly analyzing properties: not only ruling out the flops, but also recognizing and securing the promising, profitable deals you need and want. Although this sounds simple enough, too many inexperienced creative real estate entrepreneurs walk away from deals because they don't look inside the numbers, or they weaken the deal structure by overlooking simple math. In fact, you'll be pleasantly surprised how easy it is to make offers if you can do the math in your head without a calculator. By understanding and following today's simple guidelines and techniques, you will soon be doing deals almost by accident, if you just try, whether you are a rookie or a pro. There is something here to rattle what you think you already know and do, so tune in now.

Learn English Through Listening
Adept English Explains How To Use News Stories For Language Learning Ep 774

Learn English Through Listening

Play Episode Listen Later Sep 2, 2024 14:00


What if everything you thought about learning British English was holding you back? This lesson https://adeptenglish.com/lessons/ will challenge your usual learning approach and show you a more effective way to speak naturally and confidently. A reminder that we have launched our podcast subscription service. For the price of just one nice coffee a month you can listen to 8 new subscriber only episodes. So if you enjoy the podcast and want more! Please consider joining my premium podcast subscription https://podcasters.spotify.com/pod/show/adeptenglish/subscribe. Your support will help me continue to create the fascinating English listening content you enjoy. We have a FAQ explaining the paid subscription and how to sign-up here: http://adeptenglish.com/faq/subscription-faq/ Ready to take your English to the next level? In today's lesson, we're diving into real-world news stories that will introduce you to fresh, practical vocabulary you'll actually use in conversations. By the end of this lesson, you'll not only understand new words, but you'll also know how to apply them naturally in your daily life. Stick with us until the end, because there's a powerful tip coming up that will help you retain and use these new words with confidence. Don't miss it—this could be the key to making your English sound fluent https://adeptenglish.com/english/fluency/ and natural! "Freedom means the freedom to disagree." Angela Merkel ✔️ Lesson transcript: https://adeptenglish.com/lessons/learn-english-language-news-for-language-learning/ Follow and subscribe to our FREE English language podcast, wherever you listen https://adeptenglish.com/english/listening/ or watch your podcasts.

Money Tales
Dvrgnt Adventures, with B. Pagels-Minor

Money Tales

Play Episode Listen Later Aug 29, 2024 36:46


In this episode of Money Tales, our guest is B. Pagels-Minor. B. graduated college in 2008 and quickly landed a lucrative consulting job. But when the economy crashed, B. ended up working as a barista and then as a store manager at Target. They entered the role at Target with a lot of confidence. B. quickly learned that leadership isn't about being the smartest; it's about earning your team's respect. If the team doesn't follow you, it's you who will go. After Target, B. faced multiple near-disasters while experimenting with different roles and companies, trying to find what best suited their skills. This mindset of trial and error proved vital in navigating their career journey. B. is a distinguished award-winning product strategist, acclaimed podcast host within the Top 25% on Spotify, seasoned startup advisor, investor, and performance coach. As a highly regarded thought leader in product management and agile technologies, B. has steered international product teams at Fortune 100 and FAANG companies. With over a decade of service as a non-profit director, B. has driven large-scale technological and business strategy transformations across historically complex industries such as healthcare, software, and finance. Throughout an illustrious career, B. has supported revenue generation exceeding $20 billion and delivered exponential investor value. Recognized for a proven track record, B. is an experienced and growth-focused professional known for spearheading the development and execution of business, product, and culture strategies and operating as a qualified financial expert across board roles. Adept at identifying product-market fit and driving efficient scaling, key competencies include business development, product management, safety & regulatory compliance, revenue & margins growth, and cross-functional collaboration. Currently, B. operates in the startup ecosystem as an investor and trusted advisor. B. has helped their startups successfully raise hundreds of millions of dollars and nurtured companies from pre-seed to seed and from seed and beyond. B. was recruited into and led the listening team at Sprout Social and then led the acquisition and integration of a chief competitor, Simply Measured, effectively positioning the company for its final round of funding that ultimately led to a multibillion-dollar exit. B. was then recruited to the team at Apple to help lead the agile transformation for the App Store Developer Tools team and owned the experience and quality of App Store Connect, the front door to the App Store. Under B.'s leadership, App Store Connect was overhauled entirely to dramatically increase the speed and efficacy of various tooling to increase the number of apps submitted substantially and approved each day while increasing the overall satisfaction score of the product by three basis points in 2 quarters. B's success with overhauling the systems led to an opportunity to lead quality of the App Store Connect product, ownership of the App Store Connect App, and ownership of regulatory and compliance programs. B. led the integration of GDPR into the App Store, the overhaul of the App review process to support the Korea and Brazil Age Verification requirements, and also owned the relationship between App Store legal and the App Store team, including being the translator between legal and tech teams due to their experience having attended law school. 
The overall changes to the App Store improved relationships across critical partners like Microsoft and EA and helped the App Store exceed $60BN in revenue. Later, B. was recruited to lead the product function for finance and membership data for Netflix. Chief responsibilities included SOX compliance and ownership; reporting to finance, audit, and various finance and tax regulators; and principally responsible for helping develop insights to help supercharge Netflix's ability to right size content value and cost. Earlier, B.

The AI Breakdown: Daily Artificial Intelligence News and Discussions
Another AI Sorta Acquisition as Character AI Founders Head (Back) to Google

The AI Breakdown: Daily Artificial Intelligence News and Discussions

Play Episode Listen Later Aug 5, 2024 15:07


Inflection, Adept and now Character. All have been sorta-but-not-exactly acquired. NLW explores the trend and what it means for AI. Plus -- what AI had to do with the stock market crash. Concerned about being spied on? Tired of censored responses? AI Daily Brief listeners receive a 20% discount on Venice Pro. Visit ⁠⁠⁠⁠⁠⁠⁠⁠https://venice.ai/nlw ⁠⁠⁠⁠⁠⁠⁠and enter the discount code NLWDAILYBRIEF. Learn how to use AI with the world's biggest library of fun and useful tutorials: https://besuper.ai/ Use code 'podcast' for 50% off your first month. The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614 Subscribe to the newsletter: https://aidailybrief.beehiiv.com/ Join our Discord: https://bit.ly/aibreakdown

The John Batchelor Show
CLIMATE COMMON SENSE: 1/4: Adapt and Be Adept: Market Responses to Climate Change by Terry Anderson (Editor)

The John Batchelor Show

Play Episode Listen Later Aug 4, 2024 9:45


CLIMATE COMMON SENSE: 1/4: Adapt and Be Adept: Market Responses to Climate Change by Terry Anderson (Editor) https://www.amazon.com/Adapt-Be-Adept-Responses-Climate/dp/0817924558/ref=sr_1_1?dchild=1&qid=1618603521&refinements=p_27%3ATerry+Anderson&s=books&sr=1-1 How can markets help us adapt to the challenges of climate change? The editor Terry L. Anderson brings together this collection of essays featuring the work of nine leading policy analysts, who argue that market forces are just as important as government regulation in shaping climate policy—and should be at the heart of our response to helping societies adapt to climate change. Anderson notes in his introduction that most current climate policies such as the Paris Agreement require hard-to-enforce collective action and focus on reducing or mitigating greenhouse gases rather than adapting to their negative effects. Adaptive actions can typically deliver much more, faster and more cheaply than any realistic climate policy. The authors tackle a range of issues: the hidden costs of renewable energy sources, the political obstacles surrounding climate change policy, insurance and financial instruments for pricing risk of exposure to the effects of climate change, and more Terry Anderson @HooverInst https://thehill.com/opinion/energy-environment/547525-a-better-approach-to-climate-change-for-states https://thehill.com/opinion/energy-environment/547764-the-urge-to-complicate-and-climatize-trade-policy?rl=1 https://thehill.com/opinion/energy-environment/548667-climate-change-to-adapt-is-to-be-human 1940 BUCHAREST

The John Batchelor Show
CLIMATE COMMON SENSE: 2/4: Adapt and Be Adept: Market Responses to Climate Change by Terry Anderson (Editor)

The John Batchelor Show

Play Episode Listen Later Aug 4, 2024 9:05


CLIMATE COMMON SENSE: 2/4: Adapt and Be Adept: Market Responses to Climate Change by Terry Anderson (Editor) https://www.amazon.com/Adapt-Be-Adept-Responses-Climate/dp/0817924558/ref=sr_1_1?dchild=1&qid=1618603521&refinements=p_27%3ATerry+Anderson&s=books&sr=1-1 How can markets help us adapt to the challenges of climate change? The editor Terry L. Anderson brings together this collection of essays featuring the work of nine leading policy analysts, who argue that market forces are just as important as government regulation in shaping climate policy—and should be at the heart of our response to helping societies adapt to climate change. Anderson notes in his introduction that most current climate policies such as the Paris Agreement require hard-to-enforce collective action and focus on reducing or mitigating greenhouse gases rather than adapting to their negative effects. Adaptive actions can typically deliver much more, faster and more cheaply than any realistic climate policy. The authors tackle a range of issues: the hidden costs of renewable energy sources, the political obstacles surrounding climate change policy, insurance and financial instruments for pricing risk of exposure to the effects of climate change, and more Terry Anderson @HooverInst https://thehill.com/opinion/energy-environment/547525-a-better-approach-to-climate-change-for-states https://thehill.com/opinion/energy-environment/547764-the-urge-to-complicate-and-climatize-trade-policy?rl=1 https://thehill.com/opinion/energy-environment/548667-climate-change-to-adapt-is-to-be-human 1920 FRANCE

The John Batchelor Show
CLIMATE COMMON SENSE: 3/4: Adapt and Be Adept: Market Responses to Climate Change by Terry Anderson (Editor)

The John Batchelor Show

Play Episode Listen Later Aug 4, 2024 12:50


CLIMATE COMMON SENSE: 3/4: Adapt and Be Adept: Market Responses to Climate Change by Terry Anderson (Editor) https://www.amazon.com/Adapt-Be-Adept-Responses-Climate/dp/0817924558/ref=sr_1_1?dchild=1&qid=1618603521&refinements=p_27%3ATerry+Anderson&s=books&sr=1-1 How can markets help us adapt to the challenges of climate change? The editor Terry L. Anderson brings together this collection of essays featuring the work of nine leading policy analysts, who argue that market forces are just as important as government regulation in shaping climate policy—and should be at the heart of our response to helping societies adapt to climate change. Anderson notes in his introduction that most current climate policies such as the Paris Agreement require hard-to-enforce collective action and focus on reducing or mitigating greenhouse gases rather than adapting to their negative effects. Adaptive actions can typically deliver much more, faster and more cheaply than any realistic climate policy. The authors tackle a range of issues: the hidden costs of renewable energy sources, the political obstacles surrounding climate change policy, insurance and financial instruments for pricing risk of exposure to the effects of climate change, and more Terry Anderson @HooverInst https://thehill.com/opinion/energy-environment/547525-a-better-approach-to-climate-change-for-states https://thehill.com/opinion/energy-environment/547764-the-urge-to-complicate-and-climatize-trade-policy?rl=1 https://thehill.com/opinion/energy-environment/548667-climate-change-to-adapt-is-to-be-human 1940 NACA

The John Batchelor Show
CLIMATE COMMON SENSE: 4/4: Adapt and Be Adept: Market Responses to Climate Change by Terry Anderson (Editor)

The John Batchelor Show

Play Episode Listen Later Aug 4, 2024 7:50


CLIMATE COMMON SENSE: 4/4: Adapt and Be Adept: Market Responses to Climate Change by Terry Anderson (Editor) https://www.amazon.com/Adapt-Be-Adept-Responses-Climate/dp/0817924558/ref=sr_1_1?dchild=1&qid=1618603521&refinements=p_27%3ATerry+Anderson&s=books&sr=1-1 How can markets help us adapt to the challenges of climate change? The editor Terry L. Anderson brings together this collection of essays featuring the work of nine leading policy analysts, who argue that market forces are just as important as government regulation in shaping climate policy—and should be at the heart of our response to helping societies adapt to climate change. Anderson notes in his introduction that most current climate policies such as the Paris Agreement require hard-to-enforce collective action and focus on reducing or mitigating greenhouse gases rather than adapting to their negative effects. Adaptive actions can typically deliver much more, faster and more cheaply than any realistic climate policy. The authors tackle a range of issues: the hidden costs of renewable energy sources, the political obstacles surrounding climate change policy, insurance and financial instruments for pricing risk of exposure to the effects of climate change, and more Terry Anderson @HooverInst https://thehill.com/opinion/energy-environment/547525-a-better-approach-to-climate-change-for-states https://thehill.com/opinion/energy-environment/547764-the-urge-to-complicate-and-climatize-trade-policy?rl=1 https://thehill.com/opinion/energy-environment/548667-climate-change-to-adapt-is-to-be-human 2011 PYRAMID EGYPT

The John Batchelor Show
PREVIEW: WILDFIRES: CLIMATE: Conversation with colleague Terry Anderson of Hoover Institution, author of "Adapt and Be Adept," regarding how to prepare for climate change events, including precautions for living in forests during wildfire season

The John Batchelor Show

Play Episode Listen Later Aug 3, 2024 2:37


PREVIEW: WILDFIRES: CLIMATE: Conversation with colleague Terry Anderson of Hoover Institution, author of "Adapt and Be Adept," regarding how to prepare for climate change events, including precautions for living in forests during wildfire season out West. More later. 1901 Clark County Nevada

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
20VC: Why We Are in a Bubble & Now is Frothier Than 2021 | Why $1M ARR is a BS Milestone for Series A | Why Seed Pricing is Rational & Large Seed Rounds Have Less Risk | Why Many AI Apps Have BS Revenue & Are Not Sustainable with Saam Motamedi

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch

Play Episode Listen Later Jul 15, 2024 66:25


Saam Motamedi is a General Partner at Greylock, where he has led investments in Abnormal Security (incubated at Greylock), Apiiro Security and Opal Security, as well as AI companies like Adept, Braintrust, Cresta, Predibase, Snorkel, and more. Before Greylock, Saam founded Guru Labs, a machine learning-driven fintech startup, and worked in product management at RelateIQ, one of the first applied AI software companies. In Today's Conversation We Discuss: 1. Seed Today is Frothier than 2021: How does Saam evaluate the seed market today? With seed pricing being so high, how does he reflect on his own price sensitivity? When does he say too much and does not do it? Despite seed pricing being higher than ever before, why does Saam believe it is rational? How has the competition at seed changed in the last few years? 2. Series B and Growth are not a Viable Asset Class Today: Why does Saam believe that you cannot make money at Series B today? Why has pricing gone through the roof? Who is the new competition? When does it make sense to "play the game on the field" vs say this is BS and do something else? What would need to happen in the public markets for Series B to be a viable asset class again? 3. Markets vs Founders: The Billion Dollar Mistake and Lessons: How does Saam prioritise between founder vs market? What have been Saam's biggest lessons when it comes to market sizing and timing? What is Saam's biggest miss? How did it change his approach and company evaluation? Which other VC would Saam most like to swap portfolios with? Why them? 4. Saam Motamedi: AMA: What does Saam know now that he wishes he had known when he got into VC? Saam has had a meteoric rise in Greylock, what advice does Saam have for those younger investors looking to really scale within a firm? Sourcing, selecting and servicing: Where is he best? Where is he worst? Why does Saam believe that most VCs do not add value?

This Week in Startups
The MEI vs DEI debate, tech press, Chime buys Salt Labs, and more! | E1974

This Week in Startups

Play Episode Listen Later Jul 2, 2024 80:45


This Week in Startups is brought to you by… Lemon.io - Hire pre-vetted remote developers, get 15% off your first 4 weeks of developer time at https://Lemon.io/twist Eight Sleep. Good sleep is the ultimate game changer. The newest generation of the pod, the Pod 4 ultra has arrived. Head to https://www.eightsleep.com/twist and use code TWIST to get $350 off the Pod 4 Ultra. Northwest Registered Agent. Northwest Registered Agent will form your business quickly and easily. In just 10 clicks and 10 minutes, set up your entire business identity—name, address, mail service, phone, email, website, and domain. For just $39 plus state fees, Northwest will handle your complete business identity. Visit ⁠http://northwestregisteredagent.com/twist⁠ today. * Todays show: Alex Wilhelm joins Jason to discuss the weekend debate regarding MEI/DEI (12:12), Amazon buys Adept's key talent (42:39), Chime buys Salt Labs (52:41), and more! * Timestamps: (0:00) Jason and Alex kick off the show (1:47) Weekend recap and political debate analysis (10:50) Media's role in political coverage (14:49 ) Lemon.io - Get 15% off your first 4 weeks of developer time at https://Lemon.io/twist (16:12) The weekend debate regarding MEI/DEI and that TechCrunch article (27:50) Eight Sleep - Head to https://www.eightsleep.com/twist and use code TWIST to get $350 off the Pod 4 Ultra. (29:20) Discussion on DEI vs MEI and meritocracy in Silicon Valley (41:10) Northwest Registered Agent - For just $39 plus state fees, Northwest will handle your complete business identity. Visit https://www.northwestregisteredagent.com/twist today. (42:39) Amazon buys Adept's key talent (52:20) Chime's acquisition of Salt Labs (1:16:12) Challenges and strategies for EdTech companies * Subscribe to the TWiST500 newsletter: https://ticker.thisweekinstartups.com/ Check out the TWIST500: twist500.com * Subscribe to This Week in Startups on Apple: https://rb.gy/v19fcp * Mentioned on the show: https://techcrunch.com/2024/06/28/dei-more-like-common-decency-and-silicon-valley-is-saying-no-thanks https://techcrunch.com/2010/09/21/so-a-blogger-walks-into-a-bar https://x.com/alexandr_wang/status/1801331034916851995 https://techcrunch.com/2024/06/28/amazon-hires-founders-away-from-ai-startup-adept https://techcrunch.com/2024/03/19/microsoft-hires-inflection-founders-to-run-new-consumer-ai-division https://techcrunch.com/2024/03/21/microsoft-inflection-ai-investors-reid-hoffman-bill-gates https://www.youtube.com/watch?v=kna9E_3kFF0 https://www.chime.com/blog/chime-acquires-enterprise-employee-rewards-company-salt-labs https://www.saltlabs.com * Follow Alex: X: https://x.com/alex LinkedIn: ⁠https://www.linkedin.com/in/alexwilhelm/ * Follow Jason: X: https://twitter.com/Jason LinkedIn: https://www.linkedin.com/in/jasoncalacanis * Thank you to our partners: (14:49) Lemon.io - Get 15% off your first 4 weeks of developer time at https://Lemon.io/twist (27:50) Eight Sleep - Head to https://www.eightsleep.com/twist and use code TWIST to get $350 off the Pod 4 Ultra. (41:10) Northwest Registered Agent - For just $39 plus state fees, Northwest will handle your complete business identity. Visit https://www.northwestregisteredagent.com/twist today. 
* Great TWIST interviews: Will Guidara, Eoghan McCabe, Steve Huffman, Brian Chesky, Bob Moesta, Aaron Levie, Sophia Amoruso, Reid Hoffman, Frank Slootman, Billy McFarland * Check out Jason's suite of newsletters: https://substack.com/@calacanis * Follow TWiST: Twitter: https://twitter.com/TWiStartups YouTube: https://www.youtube.com/thisweekin Instagram: https://www.instagram.com/thisweekinstartups TikTok: https://www.tiktok.com/@thisweekinstartups Substack: https://twistartups.substack.com * Subscribe to the Founder University Podcast: https://www.youtube.com/@founderuniversity1916