Podcasts about A100

  • 133 podcasts
  • 287 episodes
  • 37m avg duration
  • 5 weekly new episodes
  • Latest episode: Sep 6, 2025

POPULARITY (chart, 2017–2024)


Latest podcast episodes about A100

Easy German
598: Ein Meter Autobahn für 225.000 Euro

Sep 6, 2025 · 33:20


Cari drinks hipster coffee and is astonished that milk isn't always allowed to be called "Milch". Manuel has worked his way deep into a program ("sich reingefuchst"); we explain what this expression means. Then we recommend a documentary about cocaine and get annoyed about a very expensive motorway. To wrap up, we answer your questions about German directness, meditation apps, and the shopping habits of German politicians.

Transcript and vocabulary help: Become an Easy German member and you get our vocabulary help, an interactive transcript and bonus material for every episode: easygerman.org/membership

Sponsor: Lingoda: Join the ultimate challenge with Lingoda Sprint this summer and get 50% cash back while learning German intensively in live classes. Get an additional 20€ discount when you sign up today with our code EASY20: https://try.lingoda.com/EasyGerman_September

Expression of the week: sich in etwas reinfuchsen; see sich reinfuchsen (Redensarten-Index)

Recommendation of the week: Simplicissimus: Wir müssen über Kokain reden. (YouTube)

What's annoying us: Berlin builds motorways instead of bike paths. A100 wird in Berlin eröffnet: "Kopenhagen, Wien, Paris gehen alle in eine andere Richtung" (Spiegel); Berliner Stadtautobahn: Umstrittenes Teilstück der A100 in Berlin eröffnet (Frankfurter Allgemeine Zeitung)

Your questions: Sepehr from Iran asks: How does German directness work? Alper asks: Manuel, which app do you use for meditation? (Headspace; Waking Up (friend referral link); Vipassana meditation.) William from the UK asks: Do politicians in Germany still do their own shopping? (Kanzler im Gespräch: "Gehen Sie selbst noch einkaufen?" Scholz antwortet auf akute Bürger-Fragen, Berliner Zeitung.) Have a question for us? At easygerman.fm you can send us a voice message.

Important vocabulary in this episode: Gefallen an etwas finden: to start to like something or take an interest in it; sich in etwas reinfuchsen: to dig into a topic intensively and on one's own; autodidaktisch: through self-study, without formal instruction or training; die Stadtautobahn: an expressway within a city, usually without intersections or traffic lights; anecken: to meet with disapproval from others because of one's behaviour or opinions; der Personenschutz: measures to protect a person at risk, often provided by security personnel

Support Easy German and get interactive transcripts, live vocabulary and bonus content: easygerman.org/membership

rotz + wasser
Die rotz+wasser - Morningshow - Folge 142: Wie war das nochmal mit der Semmelrechnung ??

Sep 5, 2025 · 16:56


Olli told his joke at work, and Benjamin tested it on his kids too. What are "Gensterputzer", and why are there another 8 seconds of silence today?? That, and how the joke went over, is what you'll find out in today's Morningshow.

rotz + wasser
Die rotz+wasser - Morningshow - Folge 141: Habt ihr euch verdient !!

Sep 4, 2025 · 14:34


Benjamin asks what would happen if you were allowed to change something about your body. For Benjamin and Olli, the two most beautiful people in the universe, that's naturally a tough call, but even here there is something... Stay tuned!

Interview - Deutschlandfunk Kultur
Autobahn A100 - Verkehrsforscher: Berlin setzt auf ein Konzept der 50er Jahre

Sep 2, 2025 · 6:37


The newly built section of the Autobahn 100 in Berlin is an example of misguided transport policy. The planning ignored reality, because every relief road leads to 90 percent more traffic, says transport researcher Helmut Holzapfel. Holzapfel, Helmut; www.deutschlandfunkkultur.de; Interview

Hör doch mal zu
HDMZ249 - Ein selbstloser Akt zur Völkerverständigung

Sep 2, 2025 · 139:00


Hello dear listeners: a good portion of personal updates, then the social media quotes, and then we kick off the topics with Robert Habeck. The affordability of the welfare state gives rise to detours to layabouts and job centres. Right at the top of Berlin's outrage scale at the moment: the extension of the A100; we can't get around it before then finding the treatment of Afghan local staff shameful. The obituary for a railway-oriented TV programme offers ample opportunity for digression, and we use it extensively. A listening tip with a trigger warning and two viewing tips round things out before the WTF at the end of the show. Hör(t) doch mal zu, Frank, Paula und Sara. Recorded on September 1, 2025, published on September 2, 2025, running time: 2:19:00.

rotz + wasser
Die rotz+wasser - Morningshow - Folge 139: Ist das eigentlich noch Content ?

Sep 2, 2025 · 10:10


...and, in addition, important information for our listeners!!

rotz + wasser
Die rotz+wasser - Morningshow - Folge 138: Der neue Abschnitt der A 100

Sep 1, 2025 · 20:49


Benjamin reports on his Megamarsch. And after some time away, the A100 pipes up again. One of our two podcasters has already driven the new stretch, too.

rotz + wasser
Die rotz+wasser - Morningshow - Folge 133: Das Veto-Spiel (Teil 2)

Aug 27, 2025 · 16:28


Olli once again mixes up the days of the week, there's a bit of gymnastics, and the three once more put their lack of general knowledge on display... prime meridian or equator?? Oh, whatever, have fun...

Das Interview von MDR AKTUELL
Berlin: Deutschlands teuerster Autobahnabschnitt ist freigegeben

Aug 27, 2025 · 5:03


Construction went on in Berlin for twelve years, and the costs exploded: in the end, 721 million euros were spent on a 3.2-kilometre section of the A100. How could that happen?

rotz + wasser
Die rotz+wasser - Morningshow - Folge 121: Grabsteinsprüche

Aug 15, 2025 · 16:36


Another grab bag of illustrious topics unfolds in today's Morningshow! Alongside the A100 diary, an old, unpopular segment makes its return, and then we think about which epitaphs should be on our gravestones.

rotz + wasser
Die rotz+wasser - Morningshow - Folge 120: Paradoxa

Aug 14, 2025 · 13:16


Olli & Thomas are at it together again today! You'll hear the latest about the A100, and Olli wants to talk about paradoxes.

rotz + wasser
Die rotz+wasser - Morningshow - Folge 119: Wenn heut am Mittwoch Donnerstag wär

Aug 13, 2025 · 14:00


How does underwear actually go crazy? And what would the A100 sound like if it could speak? These and other topics, as always, only in the Morningshow from the house of rotz+wasser!

rotz + wasser
Die rotz+wasser - Morningshow - Folge 115: Käselauchsuppe

Aug 9, 2025 · 14:59


Service: cooking with Olli. Suspense: did Benjamin manage to free himself?? ...and a programme tip for the weekend! What more could you want??

rotz + wasser
Die rotz+wasser - Morningshow - Folge 114: Wenn ihr jeden Tag nur 5 Min.....

Aug 8, 2025 · 14:39


With great tips, Benjamin tries to improve Olli's quality of life, and even though it isn't easy, Olli actually lets himself be drawn out...

rotz + wasser
Die rotz+wasser - Morningshow - Folge 108: Herbergsvater

Aug 2, 2025 · 17:06


In the capital-city studio everything is as usual: no trace of Olli, the A100 is a fiasco, and Benjamin & Thomas have been funnier before!

rotz + wasser
Die rotz+wasser - Morningshow - Folge 101: Es macht immer Tuut! Tuut!

Jul 26, 2025 · 24:08


Our 100th episode is barely behind us and we're already back in the thick of it, going about our day's work. Benjamin and Thomas talk...

The top AI news from the past week, every ThursdAI

What a WEEK! Qwen-mass in July. Folks, AI doesn't seem to want to slow down, especially open source! This week we see yet another jump on SWE-bench Verified (third week in a row?), this time from our friends at Alibaba Qwen. It was my pleasure to host Junyang Lin from the team at Alibaba, who came to chat with us about their incredible release, with not one but three new models! Then we had a great chat with Joseph Nelson from Roboflow, who not only dropped additional SOTA models but was also in Washington at the announcement of the new AI Action Plan from the White House. Great conversations this week, as always; TL;DR at the end, tune in!

Open Source AI - Qwen-mass in July

This week, the open-source world belonged to our friends at Alibaba Qwen. They didn't just release one model; they went on an absolute tear, dropping bomb after bomb on the community and resetting the state of the art multiple times.

A "Small" Update with Massive Impact: Qwen3-235B-A22B-Instruct-2507

Alibaba called this a minor refresh of their 235B-parameter mixture-of-experts. Sure, if you consider +13 points on GPQA and a 256K context window minor. The 2507 drops hybrid thinking: instead, Qwen now ships separate instruct and chain-of-thought models, avoiding token bloat when you just want a quick answer. Benchmarks? 81% MMLU-Redux, 70% LiveCodeBench, new SOTA on BFCL function calling. All with 22B active params.

Our friend of the pod and head of development at Alibaba Qwen, Junyang Lin, joined the pod and talked to us about their decision to uncouple this model from the hybrid reasoner Qwen3. "After talking with the community and thinking it through," he said, "we decided to stop using hybrid thinking mode. Instead, we'll train instruct and thinking models separately so we can get the best quality possible."

The community felt the hybrid model sometimes had conflicts and didn't always perform at its best. So Qwen delivered a pure non-reasoning instruct model, and the results are staggering. Even without explicit reasoning, it's crushing benchmarks. Wolfram tested it on his MMLU-Pro benchmark and it got the top score of all open-weights models he's ever tested. Nisten saw the same thing on medical benchmarks, where it scored the highest on MedMCQA. This thing is a beast, getting a massive 77.5 on GPQA (up from 62.9) and 51.8 on LiveCodeBench (up from 32). This is a huge leap forward, and it proves that a powerful, well-trained instruct model can still push the boundaries of reasoning.

The New (Open) King of Code: Qwen3-Coder-480B (X, Try It, HF)

Just as we were catching our breath, they dropped the main event: Qwen3-Coder. This is a 480-billion-parameter coding-specific behemoth (35B active) trained on a staggering 7.5 trillion tokens with a 70% code ratio, and it sets a new SOTA on SWE-bench Verified with 69.6% (just a week after Kimi got SOTA with 65%, and two weeks after Devstral's SOTA of 53%...
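(Editor's note: "active params" above refers to mixture-of-experts routing: each token is processed by only a few of the model's experts, so only a fraction of the total weights does work per token, e.g. roughly 22B active out of 235B total. Below is a minimal, illustrative top-k gating sketch in NumPy; the names, shapes and toy sizes are invented for the example and are not Qwen's actual implementation.)

```python
import numpy as np

def moe_forward(x, experts, router_w, k=2):
    # Score every expert for this token, keep the top k, and mix their
    # outputs with softmax-normalized gate weights. Only the k selected
    # experts run, which is why a model can be huge in total parameters
    # yet cheap per token ("active" parameters).
    logits = x @ router_w                         # (n_experts,) router scores
    top = np.argsort(logits)[-k:]                 # indices of the k best experts
    gates = np.exp(logits[top] - logits[top].max())
    gates /= gates.sum()                          # softmax over selected experts
    return sum(g * experts[i](x) for g, i in zip(gates, top))

# Toy demo: 8 tiny linear "experts" over a 16-dimensional token embedding.
rng = np.random.default_rng(0)
d, n_experts = 16, 8
mats = [rng.normal(size=(d, d)) for _ in range(n_experts)]
experts = [lambda v, W=W: v @ W for W in mats]
router_w = rng.normal(size=(d, n_experts))
print(moe_forward(rng.normal(size=d), experts, router_w).shape)  # (16,)
```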

Explora Commodore Retrokiosko
Retrokiosko #61 - Presentación de Randoom: Ancient Stones de Dozznar

Jul 20, 2025 · 204:34


In this programme we go over some news from the current Commodore scene and the releases of recent weeks, and we look at the INPUT Commodore 21 magazine from the summer of 1987. Before that, Dozznar presents his new Randoom: Ancient Stones. We cover all this with the usual team of David Asenjo (https://twitter.com/darro99), Toni Bianchetti (https://twitter.com/seuck), Narciso Quintana "Narcisound" (https://twitter.com/narcisound), Jonatan Jiménez (https://twitter.com/jsabreman) and Paco Herrera (https://twitter.com/pacoblog64).

The news items discussed are:
- Exclusive presentation of Randoom - Ancient Stones.
- New official website for the Commodore brand: https://www.commodore.net/
- Compute!'s Gazette returns: https://www.computesgazette.com/ https://www.computesgazette.com/Digital/Volume1/GAZETTE%20JULY%202025%20Free%20Edition.pdf
- New e-journal from Vinny via FREEZE64: https://freeze64.com/
- Collaboration and news at Games That Weren't 64: https://www.gamesthatwerent.com/recent-updates/
- Presentation of Commodore Odyssey (1977-1985) at the 3rd DigraMX international congress in Mexico: https://www.youtube.com/live/_qLWClH7NKs
- New version of the Penultimate for the VIC-20: http://blog.tynemouthsoftware.co.uk/2025/07/vic20-turbo-wedge.html
- Remute 2, a new music album for the Amiga: https://remute.bandcamp.com/album/remute-commodore-amiga-music-album
- New Kickstart ROM replacement available: KickSmash32: https://github.com/cdhooper/kicksmash32
- Preorders open for the Immortal Joystick V2: https://www.immortaljoysticks.co.uk/product/immortal-joystick/
- New replacement for the VIC chip: PiVIC: https://sleepingelephant.com/ipw-web/bulletin/bb/viewtopic.php?t=11315
- Sir Sinclair's nephew creates a Chinese-style console: https://www.grantsinclair.com/gamercard https://gagadget.es/658248-gamercard-una-inusual-consola-retro-del-sobrino-del-creador-del-iconico-zx-spectrum-ha-sido-revelada/
- New video of the Chat64 cartridge: https://www.youtube.com/watch?v=k7gfhEC5BWg&themeRefresh=1
- New video with Larry Owens, Amiga distributor in Japan: https://www.youtube.com/watch?v=v_EHXUzDUkk
- Amiga 1200 External USB Keyboard Plug Mount 3D printable: https://dennisbusch-de.itch.io/amiga-1200-external-usb-keyboard-plug-mount-3d-printable
- Physical edition of Dr. Dangerous on sale, with extras!: https://www.amigashop.org/product_info.php?products_id=436&language=en
- New merchandising line on sale celebrating the Amiga's 40th anniversary: https://www.amigashop.org/index.php?cPath=52
- Third Spectrum Next campaign: https://www.kickstarter.com/projects/spectrumnext/zx-spectrum-next-issue-3-0

Updates discussed:
- SID Known V1.29: https://csdb.dk/release/?id=254223
- Pixcen V0.7.0.50: https://csdb.dk/release/?id=254135
- Ultimate64/II - Firmware version 3.12a: https://ultimate64.com/Firmware
- Amiberry lite v5.8.10: https://github.com/BlitterStudio/amiberry/releases
- Amiga Forever v11: https://www.amigaforever.com/news-events/af-11
- WINUAE 6.0.0: https://www.winuae.net/

The new games and programs discussed are:
- The catacombs of cherubim (Natthrafn, C64): https://natthrafn.itch.io/the-catacombs-of-cherubim
- Vixel (library in TurboRascal) (Hewco, VIC-20): https://hewco.itch.io/vixel
- Mad Moggy (VisualImpact, C64): https://visualimpact.itch.io/mad-moggy
- Space Patrol (jojo073/Narcisound, Amiga): http://jojo073.es/pages/space_patrol.html
- Jump Bird 2025 (natthrafn, C64): https://natthrafn.itch.io/jump-bird-2025
- Space Invaders 2 (jimbo, C64): https://jimbo.itch.io/space-invaders-2-c64
- Hundra (Amiga Factory, Amiga): https://amiga-factory.itch.io/hundra
- Krogharr (Tigerskunk, Amiga): https://tigerskunk.itch.io/krogharr
- Le Basi della Programmazione (programming guide for the Amiga) (retream, Amiga): https://retream.itch.io/le-basi-della-programmazione
- Xenomorph (juande3050, Amiga).
- SwitcherBoy (Tecniman, Amiga).
- A100 (cobour, Amiga): https://cobour.itch.io/a100
- Box 512 (Iceout, C64): https://csdb.dk/release/?id=254276
- Tribbles 2025 (Anders Persson, VIC20): https://boray.se/commodore/tribbles.html; https://www.youtube.com/watch?v=bh9WRIlpivY
- La Casa (TSM, C64 DTV): https://csdb.dk/release/?id=254050
- Gyruss (shoestring, MEGA-65): https://github.com/sho3string/GyrussMEGA65_R3_R6
- BASIC V3.5 (tool) (radius75, C64): https://csdb.dk/release/?id=253991
- Commando B65 (SirGeldi, MEGA-65): https://files.mega65.org?id=7f3d1007-6250-44e4-86cb-93444ae669b5
- Galaxian (shoestring, MEGA-65): https://github.com/sho3string/GalaxianMEGA65_r3_r6/

rotz + wasser
Die rotz+wasser - Morningshow - Folge 90: Anrufer ohne Ende!

Jul 15, 2025 · 11:29


Cambiare tutto con le azioni ETF investimenti risparmio finanza personale business soldi economia
The Deep Dive: Nvidia oltre i 4.000 miliardi ! Troppo cara o affare da non perdere ? Analisi e dibattito aperto tra esperti.

Jul 9, 2025 · 10:05


Welcome to The Deep Dive! We explore the Nvidia phenomenon, the company that has passed the historic threshold of 4 trillion dollars, an unprecedented milestone that propels it among the global superpowers, surpassing even Apple and Microsoft in value and business model.

This record ascent is driven by its role as the "brain" of artificial intelligence, with its H100 and A100 GPUs having become the industry standard for labs and large companies. The boom in generative AI and the relentless demand from the cloud-computing giants (the "hyperscalers" such as Amazon Web Services, Google Cloud, Microsoft Azure and Meta) have fuelled this exponential growth.

But is Nvidia an unmissable deal for the future, or are we looking at a bubble? We analyse the numbers: with a forward price-to-earnings (P/E) ratio of 33, the company "still looks cheap", and 90% of analysts have a "BUY" rating. We discuss how the investments planned by big tech (over 1 trillion dollars in AI infrastructure between 2025 and 2028) put Nvidia in a key position for further growth.

There is no shortage of warning signs, though. We discuss the risks to consider: the volatility of the technology sector, the potential dependence on a limited number of customers, and global geopolitical pressures. Join us to understand whether, as the sources suggest, "the race has only just begun", or whether investors should start to be cautious. Analysis and open debate among experts.

DIRECT LINK TO MY BOOK ON AMAZON: https://www.amazon.it/dp/B0D6LZK23M
Invest like me: https://www.patreon.com/cambiaretutto
Giuseppe Scioscia's website: https://tinyurl.com/ytm3ns74
The group: https://www.facebook.com/groups/cambiaretuttocambiaresubito
My profile: https://www.facebook.com/GiuseppeScioscia
NB: In no way is my audio and/or video content intended as a solicitation to buy or sell financial instruments.

rotz + wasser
Die rotz+wasser - Morningshow - Folge 55: Manipulation

Jun 10, 2025 · 15:24


Thomas is back. And he has brought feedback from you. There is also news about the A100. And this time we even have railway news!

rotz + wasser
Die rotz+wasser - Morningshow - Folge 53: Versuchungen und Sternschnuppen

Jun 8, 2025 · 16:19


Olli is visiting again after all. Benjamin fills him in on the fact that there was a lot of criticism.

rotz + wasser
Die rotz+wasser - Morningshow - Folge 51: Headset

Jun 6, 2025 · 14:55


After the anniversary, it's naturally back to the A100 and the weather. An old acquaintance makes an appearance! He spared no expense and bought a new headset for this recording.

CBS This Morning - News on the Go
How A "Reverse Bucket List" Can Boost Your Mental Health

May 30, 2025 · 33:13


Lisa Seigies, president and CEO of Variety Wholesalers, which purchased Big Lots after it filed for bankruptcy last year, speaks to "CBS Mornings" about reopening stores and the impact of President Trump's tariffs.

"CBS Sunday Morning" correspondent David Pogue says he was the only non-space journalist invited to interview Elon Musk on Tuesday ahead of SpaceX's ninth test flight of the Super Heavy-Starship rocket. Pogue says Musk initially said "we're going to stick to talking about spaceships" before he began discussing the Trump administration. Watch more of Pogue's interview with Musk this Sunday, only on "CBS Sunday Morning."

Defense lawyers for Karen Read will present their case after the prosecution rested in the retrial on Thursday. Read is accused of hitting John O'Keefe, her boyfriend and a Boston police officer, with her car in 2022, and leaving him to die.

Dr. Sue Varma joins "CBS Mornings" to share insights from her new book, "Practical Optimism," where she encourages people to reflect on what they've already accomplished in life. The "reverse bucket list" can help build gratitude and emotional resilience.

As part of AAPI Heritage Month, Mike Van, the first Vietnamese-American CEO of Billboard, joins "CBS Mornings" to reflect on his passion for music, culture, and representation. He is one of this year's honorees on Gold House's influential A100 list.

To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy
Learn more about your ad choices. Visit https://podcastchoices.com/adchoices

Master Your Energy
#119 - Stop met jezelf afbeulen. Kies voor een ochtendroutine die wel werkt - voor jou.

May 14, 2025 · 23:36


Things are getting out of hand with these trendy morning routines! Do you also think that getting up super early automatically makes you super successful? Well... A 4AM club certainly sounds ambitious, but the big risk is sleep deprivation. Without enough sleep you slowly drain yourself, and you probably only notice when it's too late. Sleep is not a luxury. It is essential for your health and your energy, and therefore for your success.

You hear it everywhere: "Follow this morning ritual and your life will be perfect." But I don't believe in THE perfect morning ritual. What is a powerful start to the day for one person works as a mega energy leak for another.

In this episode of the Master Your Energy podcast I explain why THE perfect morning ritual doesn't exist, and how you can discover what does work for your energy type. We dive into the importance of sleep and real recovery time, and I take you through the five different energy types, because each type needs something different to start the day well.

So don't let strict morning rituals that may cost you more than they deliver drive you crazy. Choose energy. Choose flow. Choose what really suits you.

In this episode I cover:
✅ Why sleep is not a luxury but the foundation of your energy and health.
✅ What really happens to your brain, your hormones, your immune system and your energy when you sleep too little.
✅ The biggest misconception about popular morning routines.
✅ Why one identical morning ritual for everyone is a bad idea.
✅ What your energy type needs to start the day well.
✅ How to build a morning ritual that suits you and strengthens your energy.

Give yourself that ideal start to the day. Experiment, play, and find out what suits you and your energy. Because success doesn't start with getting up early, but with the choice: energy first!

Show notes: www.jeaninehofs.nl/podcast/a119
Links for this episode:
What is your energy type? Take the free test HERE.
Working with Jeanine.
Read my book 'Blijvend meer energie volgens de vijf elementen'.
A48 - On feminine and masculine energy, feminine and masculine, yin and yang.
A99 - Feminine and masculine energy; are you in balance?
A100 - 10 signs that you should step more into your feminine energy - YinPower.
A118 - What nobody tells you about your energy (not even your GP), but you do need to know.
Email me at: info@jeaninehofs.nl
Connect with me on Instagram
Connect with me on LinkedIn

The MM+M Podcast
2025 MM+M Pinnacle preview: UCB's immunology lead Camille Lee

May 7, 2025 · 45:16


Over the course of 30 years, in a variety of marketing, medical and leadership roles, Camille Lee has demonstrated an ability to drive transformative growth across diverse therapeutic areas.

For this week's episode, Lee – who serves as SVP, U.S. head of immunology at UCB – previews the 2025 MM+M Pinnacle Awards and reflects on what it means to receive the career achievement honor.

After that, managing editor Jack O'Brien and reporter Heerea Rikhraj recap the HealthFront hosted by Publicis Health Media last week. Then, editor-in-chief Jameson Fleming joins the show to play one of the board games submitted for the 2025 MM+M Agency 100, which is just over a month away from going live.

Check us out at: mmm-online.com
Follow us:
YouTube: @MMM-online
TikTok: @MMMnews
Instagram: @MMMnewsonline
Twitter/X: @MMMnews
LinkedIn: MM+M

To read more of the most timely, balanced and original reporting in medical marketing, subscribe here.

Radio Spaetkauf Berlin
Hardcore Tempelhof | THF100, Architects for Tempelhofer Feld, Sebastian Thauer

Apr 9, 2025 · 84:58


Recorded April 3, 2025 live at Another Country Bookstore with hosts Izzy and Dan, plus special guest host Antonia Bär. Headlines include the troubling escalations of neo-Nazi marches in Berlin, cracks in the A100, a local CDU "scandal", plus a Berliner in space and more. Interviews with Ester of 100% Tempelhofer Feld and Jolene from Architects for Tempelhofer Feld, and Sebastian Thauer, local show promoter who recently co-founded the Cake Walk music festival. Thanks to Vanta for support!

GUEST LINKS
Architects for Tempelhofer Feld: https://architects4thf.com/ https://www.instagram.com/architects4thf/
Open Letter: https://forms.gle/CsKnCtd2vr6KqtwKA
Mailing List: https://forms.gle/kbDDrbk7sTmKAwNc9
THF 100: https://www.thf100.de/ https://www.instagram.com/thf100
Cakewalk Festival: https://www.instagram.com/tangiblematerial https://www.instagram.com/cakewalkfest https://malzfabrik.de
Antonia Bär shows:
It's That Time of the Month: https://www.comedycafeberlin.com/event/its-that-time-of-the-month-35/
Improvised Stand-Up: https://www.comedycafeberlin.com/event/the-improvised-stand-up-show-13/

★ Thanks to Vanta for their support, learn more at: Vanta.com/RadioSpaetkauf
➡ Vinyl Kickstarter, NOW LIVE!: https://www.kickstarter.com/profile/radiospaetkauf
Technical support: podfestberlin.com
Editing: Kaleb Wentzel-Fisher https://www.recordedvoices.com
Thank you to our listeners. If you would like to make a donation or support us through a steady membership: www.radiospaetkauf.com/donate
More Radio Spaetkauf: www.radiospaetkauf.com

Radio Spaetkauf Berlin
Spring in Berlin | Guests Joanna Kusiak & Fabian Flues

Mar 10, 2025 · 80:35


Recorded live March 6, 2025 at Another Country Bookstore. Hosts Izzy and Dan with special guest co-hosts Pip Roper of the "History Flakes" podcast and comedian Toby Arsalan. They chat spring weather in Berlin and its impact on relationships, Germany's new electronic health records, Berlin labor strikes, local football and various tree removals. Plus the announcement of a Kickstarter campaign for a vinyl release of our mini-series "How to F#€K Up an Airport," Berlin's housing crisis, and more.

Interviews with Fabian Flues, a member of the Bürger*innenInitiative A100, and Joanna Kusiak, a sociologist at Cambridge University and author of "Radically Legal: Berlin Constitutes the Future."

Joanna Kusiak: Read the book open source: PDF LINK or listen on Audible: https://www.audible.de/pd/Radically-Legal-Hoerbuch/B0D8WSZ8QF
Pip Roper: Live History Flakes podcast recording, 29th March at Comedy Cafe Berlin: https://www.comedycafeberlin.com/event/history-flakes-live-2/
Toby Arsalan: tobyarsalan.com or on Insta at @tobyarsalan
Fabian Flues: Bürger*innenInitiative A100: https://bi-a100.de/

Thanks to Vanta for their support. Learn more at: www.Vanta.com/RadioSpaetkauf
Radio Spaetkauf: www.radiospaetkauf.com
Vinyl Kickstarter, LAUNCHING SOON!: https://www.kickstarter.com/projects/radiospaetkauf
Additional thanks to podfestberlin.com for technical support. And to our home for the recording, Another Country Bookstore: http://www.anothercountry.de/
And of course to you, the listeners. If you would like to make a donation or support us through a steady membership: www.radiospaetkauf.com/donate

Marketing Leadership Podcast: Strategies From Wise D2C & B2B Marketers
Advanced B2B Performance Marketing Strategies That Deliver Real Growth

Feb 13, 2025 · 47:34


Clark Johannson, President and CEO of ClickSpace, board member at Young Presidents' Organization (YPO), and Director at A100, shares his expertise on advanced B2B performance marketing strategies, emphasizing the importance of unit economics, marketing intelligence and customer insights. Through decades of entrepreneurial experience, Clark provides actionable insights into revenue predictability, marketing efficiency and the mindset shift required for true growth marketing success.

Key Takeaways:
(02:49) The critical role of customer intimacy in startups to make informed decisions due to limited resources.
(05:19) The struggle with predictable revenue in marketing due to lack of marketing visibility and misallocation of resources.
(06:30) Why marketing tracking is essential: without it, businesses waste time and money on ineffective marketing channels.
(16:04) The importance of marketing unit economics for sustainable growth: balancing customer acquisition cost (CAC) and lifetime value (LTV) for financial viability.
(24:48) Differentiating inbound and outbound marketing strategies.
(28:32) Applying the J-Curve to marketing investment: marketing investments often see an initial dip before reaching profitability.
(41:07) Marketers should learn from failed campaigns instead of focusing on vanity metrics.

Resources Mentioned:
Young Presidents' Organization (YPO) website - https://www.ypo.org/
A100 website - https://thea100.org/
The Innovator's Dilemma by Clayton Christensen - https://www.amazon.com/Innovators-Dilemma-Technologies-Management-Innovation/dp/1633691780
Harvard Business Review – "Jobs to Be Done" Theory - https://hbr.org/2016/09/know-your-customers-jobs-to-be-done
Google Ads - https://ads.google.com/
Looker Studio (Google Data Studio) - https://lookerstudio.google.com/
HubSpot - https://www.hubspot.com/
Freshdesk - https://freshdesk.com/
Productboard - https://www.productboard.com/
PipeDrive - https://www.pipedrive.com/

Insightful Links:
Know Your Customers' "Jobs to Be Done" - https://hbr.org/2016/09/know-your-customers-jobs-to-be-done
What Is Market Intelligence? - https://www.businessnewsdaily.com/4697-market-intelligence.html
How to Calculate Unit Economics for Your Business - https://www.masterclass.com/articles/how-to-calculate-unit-economics-for-your-business

Thanks for listening to the "Marketing Leadership" podcast, brought to you by Listen Network. If you enjoyed this episode, leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#PodcastMarketing #PerformanceMarketing #BrandMarketing #MarketingStrategy #MarketingIntelligence #GTM #B2BMarketing #D2CMarketing #PodcastAds

• El siglo 21 es hoy •
DeepSeek en profundidad

Feb 10, 2025 · 97:07


The rise of DeepSeek, an open-source artificial intelligence that has shaken up the global technology market. We discuss its differences from other AIs such as ChatGPT, Claude, Gemini, Grok and Mistral, its impact on companies such as NVIDIA, and its relationship to the export restrictions on H100, A100 and H800 chips. We also explore the Jevons paradox and its application to emerging technologies. Find out why DeepSeek is the focus of geopolitical tensions between the United States and China. Subscribe, share this episode, and listen at 1.5x for an enhanced experience.

Chapters:
00:00:00 Episode 1551
00:02:39 The day DeepSeek became world news
00:17:19 Everyone is talking about DeepSeek
00:25:55 The stock market
00:31:13 The impact of the app
00:35:43 What DeepSeek lives on
00:39:11 The Jevons paradox
00:55:09 DeepSeek and security
00:59:06 Where it comes from
01:06:51 DeepSeek locally
01:10:07 The chips
01:12:54 The restrictions
01:14:46 DeepSeek moves forward
01:20:27 Qwen Alibaba
01:26:24 Rumors that they used ChatGPT
01:29:22 What other AI models are open source?
01:32:43 AI cold war

Keywords: DeepSeek, artificial intelligence, open source, ChatGPT, Claude, Gemini, Grok, Mistral, NVIDIA, H100, A100, H800, Jevons, geopolitics, United States, China, emerging technology, technology market.

Become a supporter of this podcast: https://www.spreaker.com/podcast/el-siglo-21-es-hoy--880846/support.

The Association 100 Podcast
Raising the Bar: Celebrating the 2024 A100 CommImpact Award Winners

Dec 25, 2024 · 19:45


In this special year-end episode of The A100 Podcast, we're re-airing the live broadcast of the 2024 A100 CommImpact Awards ceremony, originally streamed on YouTube Live. Hosted by Colleen Gallagher, CEO of OnWrd & UpWrd, and Meghan Henning, Senior PR Strategist and Founding Partner, this event celebrated the exceptional achievements of associations in communications and engagement.

Key Highlights:
Best Media Relations Campaign: American Association for Marriage and Family Therapy's impactful mental health campaign garnered over 100 high-profile media placements.
Outstanding Internal Communications: Tennessee Concrete Association's creative board engagement initiatives, including Be Pro Be Proud Tennessee and Skate4Concrete.
Effective Member Engagement Initiative: BSA | The Software Alliance's Why AI? campaign showcased how modest budgets can drive global advocacy.
Innovative Content Strategy: American Society of Civil Engineers' use of peer-reviewed research to align with UN Sustainable Development Goals.
Thought Leadership and Research: National Association for Law Placement's Jobs & JDs report offered deep insights into legal employment trends and racial disparities, driving equity-focused conversations in the sector.
Advocacy Excellence: Muscular Dystrophy Association's #AccessibleAirTravel campaign led to legislative wins in air travel accessibility.
Event Promotion Excellence: Sea Tow Foundation's virtual Life Jacket Loaner Conference expanded its safety program nationwide.
Leadership in Public Awareness Campaign: American Medical Association's mifepristone access campaign drove national discourse and secured a significant Supreme Court victory.

Join us in celebrating these associations for their creativity, leadership, and impact.

Stay Connected: Subscribe to The Association 100 podcast on Spotify, Apple Podcasts or YouTube Podcasts to ensure you never miss an episode. Follow us on LinkedIn at The Association 100 and OnWrd & UpWrd for the latest in association trends and strategies. Tune in for more episodes that celebrate the innovation and achievements of associations shaping the future.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

We are recording our next big recap episode and taking questions! Submit questions and messages on Speakpipe here for a chance to appear on the show! Also subscribe to our calendar for our Singapore, NeurIPS, and all upcoming meetups!

In our first ever episode with Logan Kilpatrick we called out the two hottest LLM frameworks at the time: LangChain and Dust. We've had Harrison from LangChain on twice (as a guest and as a co-host), and we've now finally come full circle as Stanislas from Dust joined us in the studio.

After stints at Oracle and Stripe, Stan had joined OpenAI to work on mathematical reasoning capabilities. He describes his time at OpenAI as "the PhD I always wanted to do" while acknowledging the challenges of research work: "You're digging into a field all day long for weeks and weeks, and you find something, you get super excited for 12 seconds. And at the 13 seconds, you're like, 'oh, yeah, that was obvious.' And you go back to digging." This experience, combined with early access to GPT-4's capabilities, shaped his decision to start Dust: "If we believe in AGI and if we believe the timelines might not be too long, it's actually the last train leaving the station to start a company. After that, it's going to be computers all the way down."

The History of Dust

Dust's journey can be broken down into three phases:
* Developer Framework (2022): Initially positioned as a competitor to LangChain, Dust started as a developer tooling platform. While both were open source, their approaches differed: LangChain focused on broad community adoption and integration as a pure developer experience, while Dust emphasized UI-driven development and better observability that wasn't just `print` statements.
* Browser Extension (Early 2023): The company pivoted to building XP1, a browser extension that could interact with web content. This experiment helped validate user interaction patterns with AI, even while using less capable models than GPT-4.
* Enterprise Platform (Current): Today, Dust has evolved into an infrastructure platform for deploying AI agents within companies, with impressive metrics like 88% daily active users in some deployments.

The Case for Being Horizontal

The big discussion for early stage companies today is whether or not to be horizontal or vertical. Since models are so good at general tasks, a lot of companies are building vertical products that take care of a workflow end-to-end in order to offer more value, becoming more of "Services as Software". Dust, on the other hand, is a platform for the users to build their own experiences, which has had a few advantages:
* Maximum Penetration: Dust reports 60-70% weekly active users across entire companies, demonstrating the potential reach of horizontal solutions rather than selling into a single team.
* Emergent Use Cases: By allowing non-technical users to create agents, Dust enables use cases to emerge organically from actual business needs rather than prescribed solutions.
* Infrastructure Value: The platform approach creates lasting value through maintained integrations and connections, similar to how Stripe's value lies in maintaining payment infrastructure. Rather than relying on third-party integration providers, Dust maintains its own connections to ensure proper handling of different data types and structures.

The Vertical Challenge

However, this approach comes with trade-offs:
* Harder Go-to-Market: As Stan talked about: "We spike at penetration... but it makes our go-to-market much harder.
Vertical solutions have a go-to-market that is much easier because they're like, 'oh, I'm going to solve the lawyer stuff.'"
* Complex Infrastructure: Building a horizontal platform requires maintaining numerous integrations and handling diverse data types appropriately, from structured Salesforce data to unstructured Notion pages. As you scale integrations, the cost of maintaining them also scales.
* Product Surface Complexity: Creating an interface that's both powerful and accessible to non-technical users requires careful design decisions, down to avoiding technical terms like "system prompt" in favor of "instructions."

The Future of AI Platforms

Stan initially predicted we'd see the first billion-dollar single-person company in 2023 (a prediction later echoed by Sam Altman), but he's now more focused on a different milestone: billion-dollar companies with engineering teams of just 20 people, enabled by AI assistance.

This vision aligns with Dust's horizontal platform approach: building the infrastructure that allows small teams to achieve outsized impact through AI augmentation. Rather than replacing entire job functions (the vertical approach), they're betting on augmenting existing workflows across organizations.

Full YouTube Episode

Chapters
* 00:00:00 Introductions
* 00:04:33 Joining OpenAI from Paris
* 00:09:54 Research evolution and compute allocation at OpenAI
* 00:13:12 Working with Ilya Sutskever and OpenAI's vision
* 00:15:51 Leaving OpenAI to start Dust
* 00:18:15 Early focus on browser extension and WebGPT-like functionality
* 00:20:20 Dust as the infrastructure for agents
* 00:24:03 Challenges of building with early AI models
* 00:28:17 LLMs and Workflow Automation
* 00:35:28 Building dependency graphs of agents
* 00:37:34 Simulating API endpoints
* 00:40:41 State of AI models
* 00:43:19 Running evals
* 00:46:36 Challenges in building AI agents infra
* 00:49:21 Buy vs. build decisions for infrastructure components
* 00:51:02 Future of SaaS and AI's Impact on Software
* 00:53:07 The single employee $1B company race
* 00:56:32 Horizontal vs. vertical approaches to AI agents

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.

Swyx [00:00:11]: Hey, and today we're in a studio with Stanislas, welcome.

Stan [00:00:14]: Thank you very much for having me.

Swyx [00:00:16]: Visiting from Paris.

Stan [00:00:17]: Paris.

Swyx [00:00:18]: And you have had a very distinguished career. It's very hard to summarize, but you went to college in both École Polytechnique and Stanford, and then you worked in a number of places, Oracle, Totems, Stripe, and then OpenAI pre-ChatGPT. We'll talk, we'll spend a little bit of time about that. About two years ago, you left OpenAI to start Dust. I think you were one of the first OpenAI alum founders.

Stan [00:00:40]: Yeah, I think it was about at the same time as the Adept guys, so that first wave.

Swyx [00:00:46]: Yeah, and people really loved our David episode. We love a few sort of OpenAI stories, you know, for back in the day, like we're talking about pre-recording. Probably the statute of limitations on some of those stories has expired, so you can talk a little bit more freely without them coming after you. But maybe we'll just talk about, like, what was your journey into AI? You know, you were at Stripe for almost five years, there are a lot of Stripe alums going into OpenAI.
I think the Stripe culture has come into OpenAI quite a bit.

Stan [00:01:11]: Yeah, so I think the buses of Stripe people really started flowing in, I guess, after ChatGPT. But, yeah, my journey into AI is a... I mean, Greg Brockman. Yeah, yeah. From Greg, of course. And Daniela, actually, back in the days, Daniela Amodei.

Swyx [00:01:27]: Yes, she was COO, I mean, she is COO, yeah. She had a pretty high job at OpenAI at the time, yeah, for sure.

Stan [00:01:34]: My journey started as anybody else, you're fascinated with computer science and you want to make them think, it's awesome, but it doesn't work. I mean, it was a long time ago, I was like maybe 16, so it was 25 years ago. Then the first big exposure to AI would be at Stanford, and I'm going to, like, disclose how old I am, because at the time it was a class taught by Andrew Ng, and there was no deep learning. It was half features for vision and the A* algorithm. So it was fun. But it was the early days of deep learning. At the time, I think a few years after, it was the first project at Google. But you know, that cat face or the human face trained from many images. I went to, hesitated doing a PhD, more in systems, eventually decided to go into getting a job. Went at Oracle, started a company, did a gazillion mistakes, got acquired by Stripe, worked with Greg Brockman there. And at the end of Stripe, I started interesting myself in AI again, felt like it was the time, you had the Atari games, you had the self-driving craziness at the time. And I started exploring projects, it felt like the Atari games were incredible, but there were still games. And I was looking into exploring projects that would have an impact on the world. And so I decided to explore three things, self-driving cars, cybersecurity and AI, and math and AI. It's like I sing it by a decreasing order of impact on the world, I guess.

Swyx [00:03:01]: Discovering new math would be very foundational.

Stan [00:03:03]: It is extremely foundational, but it's not as direct as driving people around.

Swyx [00:03:07]: Sorry, you're doing this at Stripe, you're like thinking about your next move.

Stan [00:03:09]: No, it was at Stripe, kind of a bit of time where I started exploring. I did a bunch of work with friends on trying to get RC cars to drive autonomously. Almost started a company in France or Europe about self-driving trucks. We decided to not go for it because it was probably very operational. And I think the idea of the company, of the team wasn't there. And also I realized that if I wake up a day and because of a bug I wrote, I killed a family, it would be a bad experience. And so I just decided like, no, that's just too crazy. And then I explored cybersecurity with a friend. We're trying to apply transformers to cut fuzzing. So cut fuzzing, you have kind of an algorithm that goes really fast and tries to mutate the inputs of a library to find bugs. And we tried to apply a transformer to that and do reinforcement learning with the signal of how much you propagate within the binary. Didn't work at all because the transformers are so slow compared to evolutionary algorithms that it kind of didn't work. Then I started interested in math and AI and started working on SAT solving with AI. And at the same time, OpenAI was kind of starting the reasoning team that were tackling that project as well. I was in touch with Greg and eventually got in touch with Ilya and finally found my way to OpenAI. I don't know how much you want to dig into that.
The way to find your way to OpenAI when you're in Paris was kind of an interesting adventure as well.

Swyx [00:04:33]: Please. And I want to note, this was a two-month journey. You did all this in two months.

Stan [00:04:38]: The search.

Swyx [00:04:40]: Your search for your next thing, because you left in July 2019 and then you joined OpenAI in September.

Stan [00:04:45]: I'm going to be ashamed to say that.

Swyx [00:04:47]: You were searching before. I was searching before.

Stan [00:04:49]: I mean, it's normal. No, the truth is that I moved back to Paris through Stripe and I just felt the hardship of being remote from your team nine hours away. And so it kind of freed a bit of time for me to start the exploration before. Sorry, Patrick. Sorry, John.

Swyx [00:05:05]: Hopefully they're listening. So you joined OpenAI from Paris and from like, obviously you had worked with Greg, but not

Stan [00:05:13]: anyone else. No. Yeah. So I had worked with Greg, but not Ilya, but I had started chatting with Ilya and Ilya was kind of excited because he knew that I was a good engineer through Greg, I presume, but I was not a trained researcher, didn't do a PhD, never did research. And I started chatting and he was excited all the way to the point where he was like, hey, come pass interviews, it's going to be fun. I think he didn't care where I was, he just wanted to try working together. So I go to SF, go through the interview process, get an offer. And so I get Bob McGrew on the phone for the first time, he's like, hey, Stan, it's awesome. You've got an offer. When are you coming to SF? I'm like, hey, it's awesome. I'm not coming to the SF. I'm based in Paris and we just moved. He was like, hey, it's awesome. Well, you don't have an offer anymore. Oh, my God. No, it wasn't as hard as that. But that's basically the idea. And it took me like maybe a couple more time to keep chatting and they eventually decided to try a contractor set up. And that's how I kind of started working at OpenAI, officially as a contractor, but in practice really felt like being an employee.

Swyx [00:06:14]: What did you work on?

Stan [00:06:15]: So it was solely focused on math and AI. And in particular in the application, so the study of the large language models' mathematical reasoning capabilities, and in particular in the context of formal mathematics. The motivation was simple, transformers are very creative, but yet they do mistakes. Formal math systems have the ability to verify a proof, and the tactics they can use to solve problems are very mechanical, so you miss the creativity. And so the idea was to try to explore both together. You would get the creativity of the LLMs and the kind of verification capabilities of the formal system. A formal system, just to give a little bit of context, is a system in which a proof is a program and the formal system is a type system, a type system that is so evolved that you can verify the program. If the type checks, it means that the program is correct.

Swyx [00:07:06]: Is the verification much faster than actually executing the program?

Stan [00:07:12]: Verification is instantaneous, basically. So the truth is that what you code in involves tactics that may involve computation to search for solutions. So it's not instantaneous. You do have to do the computation to expand the tactics into the actual proof. The verification of the proof at the very low level is instantaneous.
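(Editor's note: a minimal Lean 4 sketch of the "a proof is a program" idea Stan describes above: the proposition is a type, the proof term is a program inhabiting it, and type-checking is the verification. This is an illustrative example, not code from the episode; `Nat.add_comm` is a standard-library lemma.)

```lean
-- The proposition `a + b = b + a` is a type.
-- The term after `:=` is a program inhabiting that type.
-- If this file type-checks, the proof is verified.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```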
Swyx [00:07:32]: How quickly do you run into like, you know, halting problem PNP type things, like impossibilities where you're just like that?

Stan [00:07:39]: I mean, you don't run into it at the time. It was really trying to solve very easy problems. So I think the... Can you give an example of easy? Yeah, so that's the MATH benchmark that everybody knows today. The Dan Hendrycks one. The Dan Hendrycks one, yeah. And I think it was the low end part of the MATH benchmark at the time, because that MATH benchmark includes AMC problems, AMC 8, AMC 10, 12. So these are the easy ones. Then AIME problems, somewhat harder, and some IMO problems, like crazy hard.

Swyx [00:08:07]: For our listeners, we covered this in our Benchmarks 101 episode. AMC is literally the grade of like high school, grade 8, grade 10, grade 12. So you can solve this. Just briefly to mention this, because I don't think we'll touch on this again. There's a bit of work with like Lean, and then with, you know, more recently with DeepMind doing like scoring like silver on the IMO. Any commentary on like how math has evolved from your early work to today?

Stan [00:08:34]: I mean, that result is mind blowing. I mean, from my perspective, spent three years on that. At the same time, Guillaume Lample in Paris, we were both in Paris, actually. He was at FAIR, was working on some problems. We were pushing the boundaries, and the goal was the IMO. And we cracked a few problems here and there. But the idea of getting a medal at an IMO was like just remote. So this is an impressive result. And we can, I think the DeepMind team just did a good job of scaling. I think there's nothing too magical in their approach, even if it hasn't been published. There's a David Silver talk from seven days ago where it goes a little bit into more details. It feels like there's nothing magical there. It's really applying reinforcement learning and scaling up the amount of data that can generate through autoformalization. So we can dig into what autoformalization means if you want.

Alessio [00:09:26]: Let's talk about the tail end, maybe, of the OpenAI. So you joined, and you're like, I'm going to work on math and do all of these things. I saw on one of your blog posts, you mentioned you fine-tuned over 10,000 models at OpenAI using 10 million A100 hours. How did the research evolve from GPT-2, and then getting closer to davinci-003? And then you left just before ChatGPT was released, but tell people a bit more about the research path that took you there.

Stan [00:09:54]: I can give you my perspective of it. I think at OpenAI, there's always been a large chunk of the compute that was reserved to train the GPTs, which makes sense. So it was pre-Anthropic split. Most of the compute was going to a product called Nest, which was basically GPT-3. And then you had a bunch of, let's say, remote, not core research teams that were trying to explore maybe more specific problems or maybe the algorithm part of it. The interesting part, I don't know if it was where your question was going, is that in those labs, you're managing researchers. So by definition, you shouldn't be managing them. But in that space, there's a managing tool that is great, which is compute allocation. Basically by managing the compute allocation, you can message the team of where you think the priority should go. And so it was really a question of, you were free as a researcher to work on whatever you wanted.
But if it was not aligned with OpenAI mission, and that's fair, you wouldn't get the compute allocation. As it happens, solving math was very much aligned with the direction of OpenAI. And so I was lucky to generally get the compute I needed to make good progress.

Swyx [00:11:06]: What do you need to show as incremental results to get funded for further results?

Stan [00:11:12]: It's an imperfect process because there's a bit of a... If you're working on math and AI, obviously there's kind of a prior that it's going to be aligned with the company. So it's much easier than to go into something much more risky, much riskier, I guess. You have to show incremental progress, I guess. It's like you ask for a certain amount of compute and you deliver a few weeks after and you demonstrate that you have a progress. Progress might be a positive result. Progress might be a strong negative result. And a strong negative result is actually often much harder to get or much more interesting than a positive result. And then it generally goes into, as any organization, you would have people finding your project or any other project cool and fancy. And so you would have that kind of phase of growing up compute allocation for it all the way to a point. And then maybe you reach an apex and then maybe you go back mostly to zero and restart the process because you're going in a different direction or something else. That's how I felt. Explore, exploit. Yeah, exactly. Exactly. Exactly. It's a reinforcement learning approach.

Swyx [00:12:14]: Classic PhD student search process.

Alessio [00:12:17]: And you were reporting to Ilya, like the results you were kind of bringing back to him or like what's the structure? It's almost like when you're doing such cutting edge research, you need to report to somebody who is actually really smart to understand that the direction is right.

Stan [00:12:29]: So we had a reasoning team, which was working on reasoning, obviously, and so math in general. And that team had a manager, but Ilya was extremely involved in the team as an advisor, I guess. Since he brought me in OpenAI, I was lucky to mostly during the first years to have kind of a direct access to him. He would really coach me as a trainee researcher, I guess, with good engineering skills. And Ilya, I think at OpenAI, he was the one showing the North Star, right? He was his job and I think he really enjoyed it and he did it super well, was going through the teams and saying, this is where we should be going and trying to, you know, flock the different teams together towards an objective.

Swyx [00:13:12]: I would say like the public perception of him is that he was the strongest believer in scaling. Oh, yeah. Obviously, he has always pursued the compression thesis. You have worked with him personally, what does the public not know about how he works?

Stan [00:13:26]: I think he's really focused on building the vision and communicating the vision within the company, which was extremely useful. I was personally surprised that he spent so much time, you know, working on communicating that vision and getting the teams to work together versus...

Swyx [00:13:40]: To be specific, vision is AGI? Oh, yeah.

Stan [00:13:42]: Vision is like, yeah, it's the belief in compression and scaling compute. I remember when I started working on the Reasoning team, the excitement was really about scaling the compute around Reasoning and that was really the belief we wanted to ingrain in the team.
And that's what has been useful to the team, and the DeepMind results, and the success of GPT-4 and so on, show that it was the right approach.

Swyx [00:14:06]: Was it according to the neural scaling laws, the Kaplan paper that was published?

Stan [00:14:12]: I think it was before that, because those came with GPT-3, basically at the time of GPT-3 being released, or being ready internally. But before that, there really was a strong belief in scale. I think it was just the belief that the transformer was a generic enough architecture that you could learn anything, and that it was just a question of scaling.

Alessio [00:14:33]: Any other fun stories you want to tell? Sam Altman, Greg, you know, anything.

Stan [00:14:37]: Weirdly, I didn't work that much with Greg when I was at OpenAI. He had always been mostly focused on training the GPTs, and rightfully so. One thing about Sam Altman: he really impressed me, because when I joined, he had joined not that long ago, and it felt like he was kind of a very high-level CEO. And I was mind-blown by how deep he was able to go into the subjects within a year or something, all the way to a situation where, by year two at OpenAI, when I was having lunch with him, he would just know quite deeply what I was doing. With no ML background. Yeah, with no ML background, but I didn't have any either, so I guess that explains why. But I think it's a question of: you don't necessarily need to understand the very technicalities of how things are done, but you need to understand what the goal is, what's being done, what the recent results are, and internalize all of that. And we could have a very productive discussion. And that really impressed me, given the size of OpenAI at the time, which was not negligible.

Swyx [00:15:44]: Yeah. I mean, you were a founder before, you're a founder now, and you've seen Sam as a founder. How has he affected you as a founder?

Stan [00:15:51]: I think having that capability of changing the scale of your attention in the company, because most of the time you operate at a very high level, but being able to go deep down and be in the know of what's happening on the ground, is something that I feel is really enlightening. That's not a place I ever was in as a founder, because at my first company we went all the way to 10 people, and at the current company there are 25 of us. So the high level, the sky, and the ground are pretty much at the same place. No, you're being too humble.

Swyx [00:16:21]: I mean, Stripe was also like a huge rocket ship.

Stan [00:16:23]: At Stripe, I wasn't a founder. So there, like at OpenAI, I was really happy being on the ground, pushing the machine, making it work. Yeah.

Swyx [00:16:31]: Last OpenAI question. The Anthropic split you mentioned, you were around for that. Very dramatic. David also left around that time; you left. This year, we've also had a similar management shakeup, let's just call it. Can you compare what it was like going through that split during that time? And does that have any similarities now? Like, are we going to see a new Anthropic emerge from these folks that just left?

Stan [00:16:54]: That I really, really don't know. At the time, the split was pretty surprising, because they had been training GPT-3, and it was a success. And to be completely transparent, I wasn't in the weeds of the split. What I understood of it is that there was a disagreement about the commercialization of that technology.
I think the focal point of that disagreement was the fact that we started working on the API and wanted to make those models available through an API. Is that really the core disagreement? I don't know.

Swyx [00:17:25]: Was it safety?

Stan [00:17:26]: Was it commercialization?

Swyx [00:17:27]: Or did they just want to start a company?

Stan [00:17:28]: Exactly. Exactly. That I don't know. But I think what I was surprised by is how quickly OpenAI recovered at the time. And I think it's just because we were mostly a research org, and the mission was so clear that, some divergence in some teams, some people leave, the mission is still there. We have the compute. We have a site. So it just keeps going.

Swyx [00:17:50]: Very deep bench. Like, just a lot of talent. Yeah.

Alessio [00:17:53]: So that was the OpenAI part of the history. Exactly. So then you leave OpenAI in September 2022. And I would say in Silicon Valley, the two hottest companies at the time were you and LangChain. What was that start like, and why did you decide to start with a more developer-focused, kind of AI-engineer tool, rather than going back into research or something else?

Stan [00:18:15]: Yeah. First, I'm not a trained researcher. So going through OpenAI was really kind of the PhD I always wanted to do. But research is hard. You're digging into a field all day long for weeks and weeks and weeks, and you find something, you get super excited for 12 seconds. And at the 13th second, you're like, oh, yeah, that was obvious. And you go back to digging. I'm not a formally trained researcher, and it wasn't necessarily an ambition of mine to have a research career. And I felt the hardness of it. I enjoyed it a ton. But at the time, I decided that I wanted to go back to something more productive. And the other fun motivation was, I mean, if we believe in AGI and if we believe the timelines might not be too long, it's actually the last train leaving the station to start a company. After that, it's going to be computers all the way down. And so that was kind of the true motivation for trying to go there; that's the personal motivation at the beginning. And the motivation for starting a company was pretty simple: I had seen GPT-4 internally. At the time, it was September 2022, so it was pre-ChatGPT, but GPT-4 had been ready internally for a few months. I was like, okay, it's obvious the capabilities are there to create an insane amount of value for the world, and yet the deployment is not there yet. The revenue of OpenAI at the time was ridiculously small compared to what it is today. So the thesis was: there's probably a lot to be done at the product level to unlock the usage.

Alessio [00:19:49]: Yeah. Let's talk a bit more about the form factor, maybe. I think one of the first successes you had was the WebGPT-like thing, using the models to traverse the web and summarize things, and the browser was really the interface. Why did you start with the browser? Why was it important? And then you built XP1, which was the browser extension.

Stan [00:20:09]: So the starting point at the time was: if you wanted to talk about LLMs, it was still a rather small community, a community of mostly researchers and, to some extent, very early adopters, very early engineers.
It was almost inconceivable to just build a product and go sell it to the enterprise, though at the time there were a few companies doing that. The one in marketing, I don't remember its name... Jasper. So the natural first intention, the first, first, first intention, was to go to the developers and try to create tooling for them to create products on top of those models. And that's what Dust was originally. It was quite different from LangChain, and LangChain just beat the s**t out of us, which is great. It's a choice.

Swyx [00:20:53]: You were cloud, and closed source. They were open source.

Stan [00:20:56]: Yeah. So technically we were open source, and we still are open source, but I think that doesn't really matter. I had the strong belief from my research time that you cannot create an LLM-based workflow from just one example. Basically, if you just have one example, you overfit. So as you develop your interaction, your orchestration around the LLM, you need a dozen examples. Obviously, if you're running a dozen examples on a multi-step workflow, you start parallelizing stuff. And if you do that in the console, you just have a messy stream of tokens going out, and it's very hard to observe what's going on there. And so the idea was to go with a UI, so that you could easily introspect the output of each interaction with the model and dig in there through a UI, which is-

Swyx [00:21:42]: Was that open source? I actually didn't come across it.

Stan [00:21:44]: Oh yeah, it was. I mean, Dust is entirely open source even today. We're not going for an open source-

Swyx [00:21:48]: If it matters, I didn't know that.

Stan [00:21:49]: No, no, no, no, no. The reason why is that we're not open source because of an open source strategy. It's not an open source go-to-market at all. We're open source because we can, and it's fun.

Swyx [00:21:59]: Open source is marketing. You have all the downsides of open source, which is that people can clone you.

Stan [00:22:03]: But I think that downside is a big fallacy. Okay, yes, anybody can clone Dust today, but the value of Dust is not its current state. The value of Dust is the number of eyeballs, and the hands of developers, that are contributing to it in the future. And so yes, anybody can clone it today, but that wouldn't change anything. There is some value in being open source. In a discussion with a security team, you can be extremely transparent and just show the code. When you have a discussion with users and there's a bug or a feature missing, you can just point to the issue, show the pull request; exactly, oh, PR welcome. That doesn't happen that much, but you can show the progress, and if the person you're chatting with is a little bit technical, they really enjoy seeing the pull request advancing, all the way to deploy. And then the downsides are mostly around security. You never want to do security by obfuscation, but the truth is that your vector of attack is facilitated by being open source. At the same time, it's a good thing, because if you're doing anything like bug bounties or stuff like that, you just give much more tools to the bug bounty hunters, so their output is much better. So there are many, many, many trade-offs. I don't believe in the value of the code base per se. I think it's really the people that are on the code base that have the value, and the go-to-market and the product and all of those things that are around the code base. Obviously, that's not true for every code base.
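As an aside on the "dozen examples" point above: a minimal sketch of that iteration loop might look like the following, with callModel as a hypothetical stand-in for any LLM client (an illustration only, not Dust's actual code).

```typescript
// A minimal sketch of the "dozen examples" loop: run the same multi-step
// workflow over every example in parallel, then inspect all outputs side
// by side instead of overfitting the orchestration to a single case.

type Example = { input: string; expected?: string };

// Hypothetical stand-in for any LLM client call.
async function callModel(prompt: string): Promise<string> {
  return `stubbed completion for: ${prompt.slice(0, 40)}...`;
}

// A two-step workflow: extract facts, then answer from those facts.
async function runWorkflow(example: Example): Promise<string> {
  const facts = await callModel(`Extract key facts:\n${example.input}`);
  return callModel(`Answer using only these facts:\n${facts}`);
}

// Running a dozen examples in parallel; in a raw console this becomes a
// messy stream of tokens, which is why a UI for introspection helps.
async function evaluate(examples: Example[]): Promise<void> {
  const outputs = await Promise.all(examples.map(runWorkflow));
  outputs.forEach((output, i) => {
    console.log(`#${i}: ${examples[i].input.slice(0, 30)} -> ${output}`);
  });
}
```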
If you're working on a very secret kernel to accelerate the inference of LLMs, I would buy that you don't want to be open source. But for product stuff, I really think there's very little risk. Yeah.

Alessio [00:23:39]: I signed up for XP1; I was looking, January 2023. I think at the time you were on DaVinci-003. Given that you had seen GPT-4, how did you feel having to push a product out that was using this model that was so inferior? And you're like, please, just use it today, I promise it's going to get better. Just overall, as a founder, how do you build something that maybe doesn't quite work with the model today, but you're just expecting the new model to be better?

Stan [00:24:03]: Yeah, so actually, XP1 was on an even smaller one: the post-GPT-3 release, small version, so it was... Ada, Babbage... No, no, no, not that far away. But it was the small version of GPT-3, basically. I don't remember its name. Yes, you have a frustration there. But at the same time, I think XP1 was an experiment, but it was designed as a way to be useful at the current capability of the model. If you just want to extract data from a LinkedIn page, that model was just fine. If you want to summarize an article from a newspaper, that model was just fine. And so it was really a question of trying to find a product that works with the current capability, knowing that you will always have tailwinds as models get better and faster and cheaper. So that was kind of a... There's a bit of frustration, because you know what's out there and you know that you don't have access to it yet. But it's also interesting to try to find a product that works with the current capability.

Alessio [00:24:55]: And we highlighted XP1 in our anatomy-of-autonomy post in April of last year, which was, you know, where are all the agents, right? So now we've spent 30 minutes getting to what you're building now. You basically had a developer framework, then you had a browser extension, then you had all these things, and then you got to where Dust is today. So maybe just give people an overview of what Dust is today and the core theses behind it. Yeah, of course.

Stan [00:25:20]: So with Dust, we really want to build the infrastructure so that companies can deploy agents within their teams. We are horizontal by nature, because we strongly believe in the emergence of use cases from the people who have access to creating an agent, and they don't need to be developers. They have to be thinkers. They have to be curious. But anybody can create an agent that will solve an operational thing that they're doing in their day-to-day job. And to make those agents useful, there's a dual focus, which is interesting. The first one is an infrastructure focus. You have to build the pipes so that the agent has access to the data. You have to build the pipes such that the agents can take action, can access the web, et cetera. So that's really an infrastructure play. Maintaining connections to Notion, Slack, GitHub, all of them, is a lot of work. It is boring infrastructure work, but that's something that we know is extremely valuable, in the same way that Stripe is extremely valuable because it maintains the pipes. And we have that dual focus because we're also building the product for people to use it. And there, it's fascinating, because everything started from the conversational interface, obviously, which is a great starting point. But we're only scratching the surface, right? I think we are at the Pong level of LLM productization.
And we haven't invented the C3. We haven't invented Counter-Strike. We haven't invented Cyberpunk 2077. So this is really our mission: to create the product that lets people equip themselves to just take away all the work that can be automated or assisted by LLMs.

Alessio [00:26:57]: And can you just comment on different takes that people had? So maybe the most open is auto-GPT: it's just trying to do anything, it's all magic, there's no way for you to do anything. Then you had Adept, you know, we had David on the podcast; they're very, like, super hands-on with each individual customer to build something super tailored. How do you decide where to draw the line between "this is magic" and "this is exposed to you," especially in a market where most people don't know how to build with AI at all? So if you expect them to do the thing, they're probably not going to do it. Yeah, exactly.

Stan [00:27:29]: So the auto-GPT approach obviously is extremely exciting, but we know that the agentic capabilities of models are not quite there yet. It just gets lost. So we're starting where it works. Same with XP1. And where it works is pretty simple. It's simple workflows that involve a couple of tools, where you don't even need to have the model decide which tools it uses, in the sense that you just want people to put it in the instructions. It's like: take that page, do that search, pick up that document, do the work that I want in the format I want, and give me the results. There's no smartness there, right? In terms of orchestrating the tools, it's mostly using English for people to program a workflow, where you don't have the constraint of having compatible APIs between the two.

Swyx [00:28:17]: That kind of personal automation, would you say it's kind of like an LLM Zapier type of thing? Like if-this-then-that, and then, you know, do this, then this. You're programming with English?

Stan [00:28:28]: So you're programming with English. You're just saying, oh, do this and then that. You can even create some form of APIs. You say, when I give you the command X, do this. When I give you the command Y, do this. And you describe the workflow. But you don't have to create boxes and define the workflow explicitly. You just need to describe what the tasks are supposed to be and make the tools available to the agent. The tool can be a semantic search. The tool can be querying into a structured database. The tool can be searching on the web. And obviously, the interesting tools that we're only starting to scratch are actually external actions: reimbursing something on Stripe, sending an email, clicking on a button in the admin, or something like that.

Swyx [00:29:11]: Do you maintain all these integrations?

Stan [00:29:13]: Today, we maintain most of the integrations. We do always have an escape hatch for people to custom-integrate. But the reality of the market today is that people just want it to work, right? And so it's mostly us maintaining the integrations. As an example, a very good source of information that is tricky to productize is Salesforce, because Salesforce is basically a database and a UI, and companies do whatever the f**k they want with it. And so every company has different models and stuff like that. So right now, we don't support it natively. And the type of support, or real native support, will be slightly more complex than just OAuth-ing into it, as is the case with Slack, for example.
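To make the "programming with English" idea above concrete, here is a rough sketch of what such an agent definition could look like. The shape and field names are hypothetical, not Dust's actual API; the point is that the workflow lives in plain-language instructions plus a list of available tools.

```typescript
// Illustrative shape for an agent "programmed in English": plain-language
// instructions plus the tools made available to it, with no explicit
// workflow boxes. All names here are hypothetical, not Dust's real API.

type Tool =
  | { kind: "semantic_search"; source: "slack" | "notion" | "github" }
  | { kind: "structured_query"; source: string }
  | { kind: "web_search" };

interface AgentDefinition {
  name: string;
  instructions: string; // the "program", written in English
  tools: Tool[];
}

const weeklyIncidentsAgent: AgentDefinition = {
  name: "weekly-incidents-table",
  instructions:
    "Take the last week of messages from the #incidents Slack channel, " +
    "group them by severity, and produce a table with the columns " +
    "Date, Severity, Summary, and Owner.",
  tools: [{ kind: "semantic_search", source: "slack" }],
};

console.log(`${weeklyIncidentsAgent.name} uses ${weeklyIncidentsAgent.tools.length} tool(s)`);
```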
Because it's probably going to be: oh, you want to connect your Salesforce to us? Give us the SOQL, that's the Salesforce query language. Give us the queries you want us to run on it and inject into the context of Dust. So it's interesting how not only are integrations cool, but some of them require a bit of work from the user. And for some of them that are really valuable to our users but that we don't support yet, they can just build them internally and push the data to us.

Swyx [00:30:18]: I think I understand the Salesforce thing. But let me just clarify: are you using browser automation because there's no API for something?

Stan [00:30:24]: No, no, no, no. We do have browser automation for the use cases that apply to the public web. But for most of the integrations with the internal systems of the company, it really runs through APIs.

Swyx [00:30:35]: Haven't you felt the pull to RPA, browser automation, that kind of stuff?

Stan [00:30:39]: I mean, what I've been saying for a long time, maybe I'm wrong, is that if the future is that you're going to stand in front of a computer looking at an agent clicking on stuff, then I'll hit my computer. And my computer is a big Lenovo. It's black. Doesn't sound good at all compared to a Mac. If the APIs are there, we should use them. There is going to be a long tail of stuff that doesn't have APIs, but as the world moves forward, that's disappearing. The core RPA value in the past has really been: oh, this old '90s product doesn't have an API, so I need to use the UI to automate. I think for most of the ICP companies, the companies that are ICP for us, the scale-ups between 500 and 5,000 people, tech companies, most of the SaaS they use has APIs. Now, there's an interesting question for the open web, because there is stuff that you want to do that involves websites that don't necessarily have APIs. And the current state of web integration, from us, and OpenAI, and Anthropic (I don't even know if they have web navigation, but I don't think so), the current state of affairs is really, really broken. Because you have what? You have basically search and headless browsing. But headless browsing, I think everybody's doing basically body.innerText and filling that into the model, right?

Swyx [00:31:56]: There are parsers into Markdown and stuff.

Stan [00:31:58]: I'm super excited by the companies that are exploring the capability of rendering a web page in a way that is compatible with a model: being able to maintain the selectors (so, basically, the places where to click in the page) through that process, exposing the actions to the model, having the model select an action in a way that is compatible with the model, which is not a big page of full DOM that is very noisy, and then being able to decompress that back to the original page and take the action. That's something that is really exciting, and that will change the level of things that agents can do on the web. That, I find exciting, but I also feel that the bulk of the useful stuff that you can do within the company can be done through APIs. The data can be retrieved by API. The actions can be taken through API.

Swyx [00:32:44]: For listeners, I'll note that you're basically completely disagreeing with David Luan.

Stan: Exactly, exactly. I've seen it since this summer. Adept is where it is, and Dust is where it is. So Dust is still standing.

Alessio [00:32:55]: Can we just quickly comment on function calling?
You mentioned you don't need the models to be that smart to actually pick the tools. Have you seen the models not be good enough? Or is it just that you don't want to put the complexity in there? Is there any room for improvement left in function calling? Or do you feel you usually, consistently get the right response, the right parameters, and all of that?

Stan [00:33:15]: So that's a tricky product question. Because if the instructions are good and precise, then you don't have any issue, because it's scripted for you. The model will just look at the script and follow it and say, oh, he's probably talking about that action, and I'm going to use it. And the parameters are kind of deduced from the state of the conversation: I'll just go with it. If you provide a very high-level, kind of auto-GPT-esque level in the instructions and provide 16 different tools to your model, yes, we're seeing the models in that state making mistakes. And there is obviously some progress that can be made on the capabilities. But the interesting part is that there is already so much work that can be assisted, augmented, accelerated by just going with pretty simply scripted agents. What I'm excited about in pushing our users to create rather simple agents is that, once you have those working really well, you can create meta-agents that use the agents as actions. And all of a sudden, you can have a hierarchy of responsibility that will probably get you almost to the point of the auto-GPT value. It requires the construction of intermediary artifacts, but you're probably going to be able to achieve something great. I'll give you an example. Our incidents are shared in Slack in a specific channel, and what we ship is shared in Slack too. We have a weekly meeting where we have a table about incidents and shipped stuff. We're not writing that weekly meeting table anymore. We have an assistant that just goes and finds the right data on Slack and creates the table for us. And that assistant works perfectly. It's trivially simple, right? Take one week of data from that channel and just create the table. And then we have, in that weekly meeting, obviously some graphs and reporting about our financials and our progress and our ARR, and we've created assistants to generate those graphs directly. And those assistants work great. By creating those assistants that cover those small parts of that weekly meeting, slowly we're getting to a world where we'll have a weekly meeting assistant. We'll just call it; you don't need to prompt it, you don't need to say anything. It's going to run those different assistants and get that Notion page just ready. And by doing that, if you get there, and that's an objective for us, us using Dust, to get there, you're saving an hour of company time every time you run it. Yeah.

Alessio [00:35:28]: That's my pet topic of NPM for agents. How do you build dependency graphs of agents? And how do you share them? Because why do I have to rebuild some of the smaller levels of what you built already?

Swyx [00:35:40]: I have a quick follow-up question on agents managing other agents. It's a topic of a lot of research, both from Microsoft and even in startups. What have you discovered as best practice for, let's say, a manager agent controlling a bunch of small agents? Is it two-way communication?
I don't know if there should be a protocol format.

Stan [00:35:59]: To be completely honest, the state we're at right now is creating the simple agents. So we haven't even really explored the meta-agents yet. We know it's there. We know it's going to be valuable. We know it's going to be awesome. But we're starting there because it's the simplest place to start, and it's also what the market understands. If you go to a company, a random B2B SaaS company, not necessarily specialized in AI, and you take an operational team and you tell them, build some tooling for yourself, they'll understand the small agents. If you tell them, build auto-GPT, they'll be like, auto-what?

Swyx [00:36:31]: And I noticed that in your language, you're very much focused on non-technical users. You don't really mention API here. You mention instructions instead of system prompts, right? That's very conscious.

Stan [00:36:41]: Yeah, it's very conscious. It's the mark of our designer, Ed, who pushed us to create a friendly product. I was knee-deep into AI when I started, obviously. And my co-founder, Gabriel, was at Stripe as well; we started a company together that got acquired by Stripe 15 years ago. He was then at Alan, a healthcare company in Paris. After that, he was a little bit less knee-deep in AI, but really focused on product. And I didn't realize how important it is to make that technology not scary to end users. It didn't feel scary to me, but it was really seen by Ed, our designer, that it was feeling scary to the users. And so we were very proactive and very deliberate about creating a brand that feels not too scary, and creating a wording and a language, as you say, that really tries to communicate the fact that it's going to be fine. It's going to be easy. You're going to make it.

Alessio [00:37:34]: And another big point that David had about Adept is that you need to build an environment for the agents to act in. And then, if you have the environment, you can simulate what they do. How is that different when you're interacting with APIs and touching systems that you cannot really simulate? If you call the Salesforce API, you're just calling it.

Stan [00:37:52]: So I think that goes back to the DNA of the companies, which is very different. Adept, I think, was a product company with a very strong research DNA, and they were still doing research. One of their goals was building a model, and that's why they raised a large amount of money, et cetera. We are 100% deliberately a product company. We don't do research. We don't train models. We don't even run GPUs. We're using the models that exist, and we try to push the product boundary as far as possible with the existing models. So that creates an issue, indeed. To answer your question: when you're interacting with the real world, well, you cannot simulate, so you cannot improve the models. Even improving your instructions is complicated for a builder. The hope is that you can use models to evaluate the conversations, so that you can at least get feedback, and you could get contradictory information about the performance of the assistants. But if you take actual traces of interactions of humans with those agents, it is, even for us humans, extremely hard to decide whether it was a productive interaction or a really bad interaction. You don't know why the person left. You don't know if they left happy or not. So, being extremely, extremely, extremely pragmatic here, it becomes a product issue.
We have to build a product that incentivizes the end users to provide feedback, so that, as a first step, the person who is building the agent can iterate on it. As a second step, maybe later, when we start training models and doing post-training, et cetera, we can optimize around that for each of those companies. Yeah.

Alessio [00:39:17]: Do you see, in the future, products offering kind of a simulation environment, the same way all SaaS now offers APIs to build programmatically? Like in cybersecurity, there are a lot of companies working on building simulated environments so that you can then use agents to red-team, but I haven't really seen that.

Stan [00:39:34]: Yeah, no, me neither. That's a super interesting question. I think it's really going to depend on how much, because you need to simulate to generate data, and you need data to train models. And the question at the end is: are we going to be training models, or are we just going to be using frontier models as they are? On that question, I don't have a strong opinion. It might be the case that we'll be training models, because in all of those AI-first products, the model is so close to the product surface that, as you get big and you want to really own your product, you're going to have to own the model as well. Owning the model doesn't mean doing the pre-training, that would be crazy, but at least having an internal post-training, realignment loop makes a lot of sense. And so if we see many companies going towards that over time, then there might be incentives for the SaaSes of the world to provide assistance in getting there. But at the same time, there's a tension, because those SaaSes don't want to be interacted with by agents; they want the human to click on the button. Yeah, they've got to sell seats. Exactly.

Swyx [00:40:41]: Just a quick question on models. I'm sure you've used many, probably not just OpenAI. Would you characterize some models as better than others? Do you use any open source models? What have been the trends in models over the last two years?

Stan [00:40:53]: We've seen, over the past two years, a bit of a race between models. At times, it's the OpenAI model that is the best; at times, it's the Anthropic model. Our take on that is that we are agnostic, and we let our users pick their model. Oh, they choose? Yeah, so when you create an assistant or an agent, you can just say, oh, I'm going to run it on GPT-4, GPT-4 Turbo, or...

Swyx [00:41:16]: Don't you think, for the non-technical user, that is actually an abstraction that you should take away from them?

Stan [00:41:20]: We have a sane default. So we move the default to the latest model that is cool. And we have a sane default, and it's actually not very visible: in our flow to create an agent, you would have to go into the advanced settings and pick your model. So this is something that the technical person will care about, but it's obviously a bit too complicated for the...

Swyx [00:41:40]: And do you care most about function calling, or instruction following, or something else?

Stan [00:41:44]: I think we care most about function calling, because there's nothing worse than a function call including incorrect parameters, or being a bit off, because it just drives the whole interaction off track.

Swyx [00:41:56]: Yeah, so there's the Berkeley function calling leaderboard.

Stan [00:42:00]: These days, it's funny how the comparison between GPT-4o and GPT-4 Turbo is still up in the air on function calling.
I personally don't have proof, but I know many people, and I'm probably part of them, who think that GPT-4 Turbo is still better than GPT-4o on function calling. Wow. We'll see what comes out of the o1 class, if it ever gets function calling. And Claude 3.5 Sonnet is great as well. They innovated in an interesting way, which was never quite publicized: they have that kind of chain-of-thought step whenever you use a Claude model, a Sonnet model, with function calling. That chain-of-thought step doesn't exist when you just interact with it for answering questions. But when you use function calling, you get that step, and it really helps getting better function calls.

Swyx [00:42:43]: Yeah, we actually just recorded a podcast with the Berkeley team that runs that leaderboard this week. So they just released V3.

Stan [00:42:49]: Yeah.

Swyx [00:42:49]: It was V1 like two months ago, and then V2, V3. Turbo is on top.

Stan [00:42:53]: Turbo is on top. Turbo is over 4o.

Swyx [00:42:54]: And then in third place is xLAM from Salesforce, which is a large action model they've been trying to popularize.

Stan [00:43:01]: Yep.

Swyx [00:43:01]: o1-mini is actually on here, I think. o1-mini is number 11.

Stan [00:43:05]: But arguably, o1-mini wasn't really built for that. Yeah.

Alessio [00:43:09]: Do you use leaderboards? Do you have your own evals? I mean, this is kind of intuitive, right? Like, using the newer model is better. I think most people just upgrade. Yeah. What's the eval process like?

Stan [00:43:19]: It's funny, because I've been doing research for three years, and we have bigger stuff to cook. When you're deploying in a company, one thing where we really spike is that, when we manage to activate the company, we have crazy penetration. The highest penetration we have is 88% daily active users within the entire employee base of the company. The average penetration and activation we have in our current enterprise customers is more like 60% to 70% weekly active. So we basically have the entire company interacting with us. And when you're there, there is so much stuff that matters more than getting evals or getting the best model. There are so many places where you can create products or do stuff that will give you the 80% with the work you do, whereas deciding if it's GPT-4 or GPT-4 Turbo, et cetera, you know, will just give you a 5% improvement. The reality is that you want to focus on the places where you can really change the direction, or change the interaction, more drastically. But that's something that we'll have to do eventually, because we still want to be serious people.

Swyx [00:44:24]: It's funny, because in some ways, the model labs are competing for you, right? You don't have to make any effort. You just switch models and then it improves. What are you really limited by? Is it additional sources?

Stan [00:44:36]: It's not models, right?

Swyx [00:44:37]: You're not really limited by quality of model.

Stan [00:44:40]: Right now, we are limited by the infrastructure part, which is the ability for users to easily connect to all the data they need to do the job they want to do.

Swyx [00:44:51]: Because you maintain all your own stuff. You know, there are companies out there that are starting to provide integrations as a service, right? I used to work in an integrations company.
Stan [00:44:59]: Yeah, I know. It's just that there are some intricacies about how you chunk stuff and how you process information from one platform to the other. If you look at one end of the spectrum, you could say: oh, I'm going to support Airbyte, and Airbyte has-

Swyx [00:45:12]: I used to work at Airbyte.

Stan [00:45:13]: Oh, really? That makes sense.

Swyx [00:45:14]: They're French founders as well.

Stan [00:45:15]: I know Jean very well. I'm seeing him today. And the reality is that, if you look at Notion, Airbyte does the job of taking Notion and putting it in a structured way. But that way, it is not really usable to actually make it available to models in a useful way, because you get all the blocks, details, et cetera, which is useful for many use cases.

Swyx [00:45:35]: It's also for data scientists and not for AI.

Stan [00:45:38]: The reality of Notion is that sometimes you have a... So when you have a page, there's a lot of structure in it, and you want to capture the structure and chunk the information in a way that respects that structure. In Notion, you have databases. Sometimes those databases are real tabular data. Sometimes those databases are full of text. You want to get the distinction and understand that this database should be considered like text information, whereas this other one is actually quantitative information. And to really get a very high-quality interaction with that piece of information, I haven't found a solution that will work without us owning the connection end-to-end.

Swyx [00:46:15]: That's why I don't invest in... There's Composio, there's All Hands from Graham Neubig, there are all these other companies that are like, we will do the integrations for you, we have the open source community, we'll do it off the shelf. But then you are so specific in your needs that you want to own it.

Swyx [00:46:28]: Yeah, exactly.

Stan [00:46:29]: You can talk to Michel about that.

Swyx [00:46:30]: You know, he wants to put the AI in there, but you know. Yeah, I will. I will.

Stan [00:46:35]: Cool. What are we missing?

Alessio [00:46:36]: You know, what are the things that are sneakily hard that you're tackling, that maybe people don't even realize are really hard?

Stan [00:46:43]: The real part, as we've touched on throughout the conversation, is really building the infra that works for those agents, because it's a tenuous walk. It's an evergreen piece of work, because you always have an extra integration that will be useful to a non-negligible set of your users. What I'm super excited about is that there are so many interactions that shouldn't be conversational interactions, and that could be very useful. Basically, we have the firehose of information of those companies, and there are not going to be that many companies that capture the firehose of information. When you have the firehose of information, you can do a ton of stuff with models that is not just accelerating people, but giving them superhuman capabilities, even with current model capabilities, because you can just sift through much more information. An example is documentation repair. If I have the firehose of Slack messages and new Notion pages, and somebody says, I own that page, I want to be updated when there is a piece of information that should update that page, this becomes possible. You get an email saying: oh, look at that Slack message, it says the opposite of what you have in that paragraph. Maybe you want to update it, or just ping that person.
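A hedged sketch of that documentation-repair idea: when a message comes off the firehose, ask a model whether it contradicts a watched page, and ping the owner if so. The helpers and the prompt are illustrative stand-ins, not a real Dust feature's code.

```typescript
// Sketch of documentation repair: for each new Slack message, ask a model
// whether it contradicts a page someone has claimed ownership of, and if
// so, notify the owner. All helpers are hypothetical stand-ins.

interface WatchedPage {
  url: string;
  owner: string;
  content: string;
}

// Hypothetical LLM call; swap in a real client.
async function callModel(prompt: string): Promise<string> {
  return "NO"; // stub
}

async function checkMessage(message: string, page: WatchedPage): Promise<void> {
  const verdict = await callModel(
    "Does the following message contradict the page below? " +
      "Answer YES or NO, then one sentence of explanation.\n\n" +
      `Message: ${message}\n\nPage:\n${page.content}`
  );
  if (verdict.trim().toUpperCase().startsWith("YES")) {
    // In a real system this would send an email or a Slack DM.
    console.log(`@${page.owner}: a new message may invalidate ${page.url} (${verdict})`);
  }
}
```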
I think there is a lot to be explored on the product layer in terms of what it means to interact productively with those models. And that's a problem that's extremely hard and extremely exciting.

Swyx [00:48:00]: One thing you keep mentioning about infra work: obviously, Dust is building that infra and serving it in a very consumer-friendly way. You always talk about infra being additional sources, additional connectors. That is very important. But I'm also interested in the vertical infra. There is an orchestrator underlying all these things, where you're doing asynchronous work. For example, the simplest one is a cron job: you just schedule things. But also, for if-this-then-that, you have to wait for something to be executed and then proceed to the next task. I used to work on an orchestrator as well, Temporal.

Stan [00:48:31]: We used Temporal. Oh, you used Temporal? Yeah. Oh, how was the experience?

Swyx [00:48:34]: I need the NPS.

Stan [00:48:36]: We're doing a customer discovery call now.

Swyx [00:48:39]: But you can also complain to me, because I don't work there anymore.

Stan [00:48:42]: No, we love Temporal. There are some edges that are a bit rough, surprisingly rough, where you would say, why is it so complicated?

Swyx [00:48:49]: It's always versioning.

Stan [00:48:50]: Yeah, stuff like that. But we really love it. And we use it for exactly what you said: managing the entire set of stuff that needs to happen so that, in semi-real time, we get all the updates from Slack or Notion or GitHub into the system. And whenever we see a piece of information go through, we maybe trigger workflows to run agents, because they need to provide alerts to users and stuff like that. And Temporal is great. Love it.

Swyx [00:49:17]: You haven't evaluated others. You don't want to build your own. You're happy with...

Stan [00:49:21]: Oh, no, we're not in the business of replacing Temporal. And Temporal is so... I mean, it, or any other competitive product, they're very general. If it's there, there's an interesting theory about buy versus build. I think in that case, when you're a high-growth company, your buy-build trade-off is very much on the side of buy: if you have the capability, you're just going to be saving time, and you can focus on your core competency, et cetera. And it's funny, because we're starting to see the post-high-growth companies, the post-hyper-growth companies, going back on that trade-off, interestingly. So that's the Klarna news about removing Zendesk and Salesforce. Do you believe that, by the way?

Alessio [00:49:56]: Yeah, I did a podcast with them.

Stan [00:49:58]: Oh, yeah?

Alessio [00:49:58]: It's true.

Swyx [00:49:59]: No, no, I know. Of course they say it's true, but also, how well is it going to go?

Stan [00:50:02]: So I'm not talking about deflecting customer traffic. I'm talking about building AI on top of Salesforce and Zendesk, basically, if I understand correctly. And all of a sudden, your product surface becomes much smaller, because you're interacting with an AI system that will take some actions. And so, all of a sudden, you don't need the product layer anymore, and you realize that, oh, those things are just databases that I pay a hundred times the price for, right? Because you're a post-hyper-growth company and you have tech capabilities, you are incentivized to reduce your costs, and you have the capability to do so. And then it makes sense to just scratch the SaaS away.
So it's interesting that we might see kind of a bad time for SaaS in post-hyper-growth tech companies. It's still a big market, but it's not that big, because if you're not a tech company, you don't have the capabilities to reduce that cost; and if you're a high-growth company, you're always going to be buying, because you go faster that way. But that's an interesting new space, a new category of companies that might remove some SaaS.

Swyx [00:51:02]: Yeah, Alessio's firm has an interesting thesis on the future of SaaS in AI.

Alessio [00:51:05]: Service as a software, we call it. It's basically like, well, the most extreme version is: why is there any software at all? You know, ideally, it's all a labor interface where you're asking somebody to do something for you, whether that's a person, an AI agent, or whatnot.

Stan [00:51:17]: Yeah, yeah, that's interesting.

Swyx [00:51:19]: I have to ask: are you paying for Temporal Cloud, or are you self-hosting?

Stan [00:51:22]: Oh, no, no, we're paying, we're paying.

Swyx [00:51:24]: Oh, okay, interesting.

Stan [00:51:26]: We're paying way too much. It's crazy expensive, but it makes us-

Swyx [00:51:28]: That's why, as a shareholder, I like to hear that.

Stan [00:51:31]: It makes us go faster, so we're happy to pay.

Swyx [00:51:33]: Other things in the infra stack: I just want a list for other founders to think about. Ops, API gateway, evals, you know. Anything interesting there that you build or buy?

Stan [00:51:41]: I mean, there's always an interesting question there. We've been building a lot around the interface between models and us, because Dust, the original version, was an orchestration platform, and we basically provide a unified interface to every model provider.

Swyx [00:51:56]: That's what I call a gateway.

Stan [00:51:57]: We have that because Dust was that, and so we continued building upon it, and we own it. But that's an interesting question: do you want to build that or buy it?

Swyx [00:52:06]: Yeah, I always say LiteLLM is the current open source consensus.

Stan [00:52:09]: Exactly, yeah. There's an interesting question there.

Swyx [00:52:12]: Ops: Datadog, just tracking.

Stan [00:52:14]: Oh yeah, so Datadog is an obvious one. What are the mistakes that I regret? I started with pure JavaScript, not TypeScript. And I think, if you're wondering, oh, I want to go fast, I'll do a little bit of JavaScript: no, don't. Just start with TypeScript. I see, okay.

Swyx [00:52:30]: So interesting: you are a research engineer that came out of OpenAI that bet on TypeScript.

Stan [00:52:36]: Well, the reality is that if you're building a product, you're going to be doing a lot of JavaScript, right? And Next, we're using Next as an example. It's
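Earlier in the conversation, Stan describes using Temporal to pull Slack, Notion, and GitHub updates into the system in semi-real time and trigger agent runs. A minimal sketch of that polling pattern with the Temporal TypeScript SDK might look like this; fetchConnectorUpdates and runAgentOnUpdates are hypothetical activities, not Dust's actual code.

```typescript
import { proxyActivities, sleep, continueAsNew } from "@temporalio/workflow";
// Hypothetical activities module defining the two calls proxied below.
import type * as activities from "./activities";

const { fetchConnectorUpdates, runAgentOnUpdates } = proxyActivities<
  typeof activities
>({
  startToCloseTimeout: "1 minute",
});

// Poll one connector (Slack, Notion, GitHub, ...) in semi-real time and
// trigger agent runs whenever new items come through.
export async function connectorSyncWorkflow(connector: string): Promise<void> {
  for (let iteration = 0; iteration < 1_000; iteration++) {
    const updates = await fetchConnectorUpdates(connector);
    if (updates.length > 0) {
      await runAgentOnUpdates(connector, updates);
    }
    await sleep("30 seconds");
  }
  // Long-running loops should continue-as-new to keep history bounded.
  await continueAsNew<typeof connectorSyncWorkflow>(connector);
}
```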

每天五分钟,基金定投聊通透
[Fund Knowledge] A50, A100, A500: The New Broad-Based Indices Explained

每天五分钟,基金定投聊通透

Play Episode Listen Later Nov 7, 2024 9:20


02:07 The A50, A100 and A500 indices: coverage, sector distribution and investment value
04:08 The A-series indices: coverage of segment industry leaders within a mid/large-cap style, with sharply reduced risk of blow-ups among constituents
06:13 SSE 50 vs. A50: the go-to index investments for stable dividend returns
08:15 My own investment considerations: once the A500 opens for subscription, stop the regular investment plan in the CSI 300 and switch to the A500 index

The Association 100 Podcast
Leading with Agility: Thriving Amidst Rapid Industry Shifts

The Association 100 Podcast

Play Episode Listen Later Oct 9, 2024 36:09


In this insightful A100 video interview, Bennie F. Johnson, CEO of the American Marketing Association, shares his dynamic vision for the future of marketing. Bennie discusses how AMA serves as a global hub for marketing professionals, supporting everyone from students to executives. He explores key topics like AI, data-driven decision making, and the critical role of community in shaping the marketing profession.

Key Highlights:

Embracing Disruptive Change: Bennie emphasizes the importance of welcoming innovation and leveraging marketing's evolving tools to drive productive change, especially in an AI-driven landscape.

Fostering Community: He explains how AMA creates spaces for marketers across all sectors to collaborate, learn, and grow, emphasizing that communities are more essential than ever in a digital-first world.

Ethical Responsibility in Marketing: Bennie also touches on the growing need for transparency and ethical practices in marketing, as consumers demand more accountability from brands.

Join us as Bennie Johnson shares his forward-thinking approach to marketing leadership, offering essential insights for association professionals navigating change and building stronger communities.

Stay Connected: Subscribe to The Association 100 podcast on Spotify, Apple Podcasts or YouTube Podcasts to ensure you never miss an episode. Follow us on LinkedIn at The Association 100 and OnWrd & UpWrd for the latest in association trends and strategies.

Tune in for more episodes packed with expert insights and innovative strategies to help your association embrace change and lead with impact!

The MM+M Podcast
The A100 Playbook Podcast | Syneos Health Communications: Creating a growth-first culture, a podcast sponsored by Syneos Health Communications

The MM+M Podcast

Play Episode Listen Later Sep 18, 2024 22:31


Jeanine O'Kane is president of Syneos Health Communications, a portfolio of agencies spanning advertising, public relations, patient advocacy, medical communications, managed markets, and naming and branding. Formerly president of the US public relations group at Syneos Health Communications, Jeanine has been with the organization for more than a decade and has more than 20 years of industry experience. During her tenure at Syneos Health Communications, she has been instrumental in developing award-winning communications programs and has helped integrate communications and commercial expertise into clinical development, unlocking innovative solutions to deliver life-saving therapies to patients worldwide. Jeanine was named President in April 2023. Since assuming this role, she has been steadfast in her commitment to creating a culture of growth that is rooted in innovation. Read the company's profile here.

Check us out at: mmm-online.com

Follow us:
YouTube: @MMM-online
TikTok: @MMMnews
Instagram: @MMMnewsonline
Twitter/X: @MMMnews
LinkedIn: MM+M

To read more of the most timely, balanced and original reporting in medical marketing, subscribe here.

The Association 100 Podcast
Listening and Leading: Effective Comms, Advocacy, and Membership Strategies

The Association 100 Podcast

Play Episode Listen Later Aug 28, 2024 35:47


Welcome back to The A100 podcast! In this episode, host Colleen Gallagher sits down with Sean Luechtefeld, Ph.D., CAE, Vice President of Membership & Communications at ANCOR (American Network of Community Options and Resources). Sean returns to the podcast to share valuable insights into the unique challenges and strategies in the world of association management, particularly in the areas of membership and communications.

Key Highlights:

Balancing Dual Roles: Sean discusses the complexities of balancing his dual roles in membership and communications, offering practical advice on how to prioritize tasks and manage a broad scope of responsibilities in a small-staff environment. He emphasizes the importance of surrounding yourself with a talented team and being realistic about what can be achieved.

Advocacy and Active Listening: Sean highlights the significance of active listening in advocacy communications. He shares how ANCOR ensures that the voices of their members, organizations serving people with intellectual and developmental disabilities, are heard at the federal level. By understanding and acting on member feedback, ANCOR effectively supports its members in their advocacy efforts.

Adapting Communications in a Changing Landscape: In a rapidly evolving media environment, Sean explains how ANCOR tailors its communications strategy to address ongoing challenges such as workforce recruitment and retention. He discusses the importance of seizing opportunities that arise from challenges and aligning messaging with current events and broader societal issues.

Navigating Social Media Channels: Sean dives into the ongoing discussions within ANCOR about the best ways to engage with different social media platforms. He explores the challenges of adapting to changing user behaviors and the importance of focusing on platforms that offer the most value in reaching target audiences, such as members, lawmakers and journalists.

Looking Ahead: Sean shares his thoughts on the future of membership engagement and communications within associations. He emphasizes the need to shift from selling membership benefits to promoting the overall experience of membership, creating tailored experiences that resonate with diverse member needs.

Join us as Sean Luechtefeld offers actionable strategies and deep insights into effectively managing communications and membership in associations, making this episode a must-listen for association professionals.

Stay Connected: Subscribe to The Association 100 podcast on Spotify, Apple Podcasts or YouTube Podcasts to ensure you never miss an episode. Follow us on LinkedIn at The Association 100 and OnWrd & UpWrd for the latest in association trends and strategies.

Tune in for more episodes packed with actionable insights to help your association thrive!

The Nonlinear Library
LW - Unit economics of LLM APIs by dschwarz

The Nonlinear Library

Play Episode Listen Later Aug 28, 2024 3:22


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Unit economics of LLM APIs, published by dschwarz on August 28, 2024 on LessWrong.

Disclaimer 1: Our calculations are rough in places; information is sparse, guesstimates abound.

Disclaimer 2: This post draws from public info on FutureSearch as well as a paywalled report. If you want the paywalled numbers, email dan@futuresearch.ai with your LW account name and we'll send you the report for free.

Here's our view of the unit economics of OpenAI's API. Note: this considers GPT-4-class models only, not audio or image APIs, and only direct API traffic, not usage in ChatGPT products.

As of June 2024, OpenAI's API was very likely profitable, with surprisingly high margins. Our median estimate for gross margin (not including model training costs or employee salaries) was 75%.

Once all traffic switches over to the new August GPT-4o model and pricing, OpenAI plausibly still will have a healthy profit margin. Our median estimate for the profit margin is 55%.

The Information implied that OpenAI rents ~60k A100-equivalents from Microsoft for non-ChatGPT inference. If this is true, OpenAI is massively overprovisioned for the API, even when we account for the need to rent many extra GPUs to account for traffic spikes and future growth (arguably creating something of a mystery).

We provide an explicit, simplified first-principles calculation of inference costs for the original GPT-4, and find significantly lower throughput and higher costs than Benjamin Todd's result (which drew from Semianalysis).

Summary chart:

What does this imply? With any numbers, we see two major scenarios:

Scenario one: competition intensifies. With llama, Gemini, and Claude all comparable and cheap, OpenAI will be forced to again drop their prices in half. (With the margins FutureSearch calculates, they can do this without running at a loss.) LLM APIs become like cloud computing: huge revenue, but not very profitable.

Scenario two: one LLM pulls away in quality. GPT-5 and Claude-3.5-opus might come out soon at huge quality improvements. If only one LLM is good enough for important workflows (like agents), it may be able to sustain a high price and huge margins. Profits will flow to this one winner.

Our numbers update us, in either scenario, towards:

An increased likelihood of more significant price drops for GPT-4-class models.

A (weak) update that frontier labs are facing less pressure today to race to more capable models. If you thought that GPT-4o (and Claude, Gemini, and hosted versions of llama-405b) were already running at cost in the API, or even at a loss, you would predict that the providers are strongly motivated to release new models to find profit. If our numbers are approximately correct, these businesses may instead feel there is plenty of margin left, and profit to be had, even if GPT-5 and Claude-3.5-opus etc. do not come out for many months.

More info at https://futuresearch.ai/openai-api-profit. Feedback welcome and appreciated; we'll update our estimates accordingly.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
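As a back-of-the-envelope illustration of the kind of first-principles calculation the post describes (every number below is a made-up placeholder, not FutureSearch's estimate):

```typescript
// Toy inference economics: cost per million tokens from GPU rental price
// and throughput, then gross margin against an assumed API price.
// All numbers are placeholder assumptions, not the report's figures.

const gpuHourCost = 2.0;       // assumed $/hour to rent one A100
const gpusPerReplica = 8;      // assumed GPUs serving one model replica
const tokensPerSecond = 1500;  // assumed aggregate throughput per replica

const tokensPerHour = tokensPerSecond * 3600; // 5.4M tokens/hour
const costPerMillionTokens =
  (gpuHourCost * gpusPerReplica) / (tokensPerHour / 1_000_000); // ~ $2.96

const pricePerMillionTokens = 10; // assumed blended API price, $ per 1M tokens
const grossMargin = 1 - costPerMillionTokens / pricePerMillionTokens; // ~ 70%

console.log(`cost: $${costPerMillionTokens.toFixed(2)} per 1M tokens`);
console.log(`gross margin: ${(grossMargin * 100).toFixed(0)}%`);
```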

The Association 100 Podcast
Elevating Member Experience with Personalization and Technology

The Association 100 Podcast

Play Episode Listen Later Aug 21, 2024 14:36


In this engaging episode of The A100 podcast, recorded live at ASAE's Annual Conference in Cleveland, host Meghan Henning sits down with Stephanie Denvir, MS, CAE, Chief Member Experience Officer at the American Society for Quality (ASQ). Stephanie shares her strategies for enhancing the member experience through personalization, community engagement and innovative technology.

Key Highlights:

Segmentation and Personalization: Stephanie discusses how ASQ is using segmentation to tailor messaging and provide personalized member experiences. By focusing on the unique needs of their diverse, global membership, ASQ is delivering targeted value to each member.

Building Strong Communities: ASQ's robust network includes over 230 geographic communities and 26 technical communities. Stephanie highlights the launch of a new online community platform, which has significantly boosted member engagement by providing a space for members to connect, share insights and find volunteer opportunities 24/7.

Leveraging Technology: Stephanie emphasizes ASQ's commitment to embracing technology. She shares how the implementation of a new AMS, personalized email campaigns, and an accessible conference tool that provides real-time translation in multiple languages are enhancing the member experience. ASQ is also exploring the potential of AI to further personalize content and meet member needs.

Challenges and Opportunities Ahead: Looking forward, Stephanie discusses the challenges of integrating AI while ensuring data protection, and the importance of engaging the next generation of members. She stresses the need for association leaders to be open to change and to involve their boards and member leaders in the process.

Join us as we dive into how ASQ is setting new standards for member engagement and leveraging technology to create a personalized, inclusive experience for all its members.

Stay Connected: Subscribe to The Association 100 podcast on Spotify, Apple Podcasts or YouTube Podcasts to ensure you never miss an episode. Follow us on LinkedIn at The Association 100 and OnWrd & UpWrd for the latest in association trends and strategies.

Tune in for more episodes packed with actionable insights to help your association thrive!

The Association 100 Podcast
Harnessing the Power of Community

The Association 100 Podcast

Play Episode Listen Later Aug 14, 2024 17:41


Welcome back to The A100 podcast! In this episode, recorded live at ASAE's Annual Conference in Cleveland, O&U's Meghan Henning sits down with Greg Melia, CAE, CEO of the Customer Experience Professionals Association (CXPA). Greg shares how CXPA has leveraged the power of its global community to drive significant initiatives and maintain a strong, inclusive culture despite being a small-staff association.

Key Highlights:

- Building the CX Book of Knowledge: Greg shares the inspiring story behind the creation of the CXPA's CX Book of Knowledge, a 322-page resource developed by 77 volunteers. This member-driven initiative has become a cornerstone for both new and seasoned customer experience professionals, providing invaluable insights, definitions and guidance. The success of this project highlights the power of community collaboration and the impact it can have on an association's reputation and value proposition.

- Maximizing Impact with a Small Staff: Despite having a small staff, CXPA has achieved remarkable results by uniting and motivating its members around shared goals. Greg discusses the importance of leveraging the strengths of the community, as demonstrated by the creation of 14 books following the CX Book of Knowledge. This approach not only enriches the member experience but also establishes CXPA as a trusted source of knowledge and leadership in the customer experience field.

- Inclusive Strategic Planning: Greg emphasizes the value of involving the broader CXPA community in strategic planning. Through a series of Zoom meetings, surveys and a steering committee, CXPA engaged over 1,200 members and non-members in shaping the organization's future. This inclusive approach has not only informed the association's direction but also cultivated a network of advocates committed to achieving CXPA's ambitious goals.

- Expanding Global Reach and Inclusivity: Under Greg's leadership, CXPA has grown from a small startup to a global organization with members in 70 countries. He discusses how the association has maintained a culture of inclusivity and belonging while expanding internationally, particularly through the use of virtual events and digital content. This shift, accelerated by the COVID-19 pandemic, has allowed CXPA to amplify diverse voices and ensure that all members feel represented and engaged.

- Navigating Emerging Trends in Customer Experience: Greg touches on the evolving nature of customer experience and the importance of staying ahead of trends. He explains how CXPA is preparing its members to meet these challenges by fostering collaboration and providing tailored resources. From digital communication channels to personalized customer interactions, Greg highlights the need for associations to be agile and responsive to the changing expectations of their members.

Join us as Greg Melia offers valuable insights into how CXPA is harnessing the power of its global community to drive innovation, inclusivity and member value. Whether you're leading a small-staff association or looking to engage your members more effectively, this episode is packed with practical strategies and inspiration.

Stay Connected: Subscribe to The Association 100 podcast on Spotify, Apple Podcasts or YouTube Podcasts to ensure you never miss an episode. Follow us on LinkedIn at The Association 100 and OnWrd & UpWrd for the latest in association trends and strategies.

Tune in for more episodes filled with insights to help your association thrive!

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Because of the nature of SAM, this is more video heavy than usual. See our YouTube!

Because vision is first among equals in multimodality, and yet SOTA vision language models are closed, we've always had an interest in learning what's next in vision. Our first viral episode was Segment Anything 1, and we have since covered LLaVA, IDEFICS, Adept, and Reka. But just like with Llama 3, FAIR holds a special place in our hearts as the New Kings of Open Source AI.

The list of sequels better than the originals is usually very short, but SAM 2 delighted us by not only being a better image segmentation model than SAM 1; it also conclusively and inexpensively solved video segmentation in just as elegant a way as SAM 1 did for images, and released everything to the community as Apache 2.0/CC-BY 4.0.

“In video segmentation, we observe better accuracy, using 3x fewer interactions than prior approaches. In image segmentation, our model is more accurate and 6x faster than the Segment Anything Model (SAM).”

Surprisingly Efficient

The paper reports that SAM 2 was trained on 256 A100 GPUs for 108 hours (59% more than SAM 1). Taking the upper-end $2/hour A100 cost off gpulist.ai, that works out to roughly $55k (256 GPUs × 108 hours × $2/GPU-hour), meaning SAM 2 cost on the order of ~$50k to train if it had an external market-rate cost - surprisingly cheap for adding video understanding!

The newly released SA-V dataset is also the largest video segmentation dataset to date, with careful attention given to scene/object/geographical diversity, including that of annotators. In some ways, we are surprised that SOTA video segmentation can be done on only ~50,000 videos (and 640k masklet annotations).

Model-in-the-loop Data Engine for Annotations and Demo-first Development

Similar to SAM 1, a 3-phase data engine helped greatly in bootstrapping this dataset. As Nikhila says in the episode, the demo you see wasn't just for show; they actually used this same tool to do annotations for the model that is now demoed in the tool:

“With the original SAM, we put a lot of effort in building a high-quality demo. And the other piece here is that the demo is actually the annotation tool. So we actually use the demo as a way to improve our annotation tool. And so then it becomes very natural to invest in building a good demo because it speeds up your annotation and improves the data quality, and that will improve the model quality. With this approach, we found it to be really successful.”

An incredible 90% speedup in annotation happened due to this virtuous cycle, which helped SA-V reach this incredible scale.

Building the demo also helped the team live the context that their own downstream users, like Roboflow, would experience, and forced them to make choices accordingly.

As Nikhila says:

“It's a really encouraging trend for not thinking about only the new model capability, but what sort of applications folks want to build with models as a result of that downstream. I think it also really forces you to think about many things that you might postpone. For example, efficiency. For a good demo experience, making it real time is super important. No one wants to wait. And so it really forces you to think about these things much sooner and actually makes us think about what kind of image encoder we want to use or other things, hardware efficiency improvements.
So those kinds of things, I think, become a first-class citizen when you put the demo first.”

Indeed, the team swapped out standard ViT-H Vision Transformers for Hiera (Hierarchical) Vision Transformers as a result of efficiency considerations.

Memory Attention

Speaking of architecture, the model design is probably the sleeper hit of a project filled with hits. The team adapted SAM 1 to video by adding streaming memory for real-time video processing: specifically, memory attention, a memory encoder, and a memory bank, which surprisingly ablated better than more intuitive but complex architectures like Gated Recurrent Units.

One has to wonder if streaming memory can be added to pure language models with a similar approach… (pls comment if there's an obvious one we haven't come across yet!)

Video Podcast

Tune in to Latent Space TV for the video demos mentioned in this video podcast!

Timestamps

* [00:00:00] The Rise of SAM by Udio (David Ding Edit)
* [00:03:07] Introducing Nikhila
* [00:06:38] The Impact of SAM 1 in 2023
* [00:12:15] Do People Finetune SAM?
* [00:16:05] Video Demo of SAM
* [00:20:01] Why the Demo is so Important
* [00:23:23] SAM 1 vs SAM 2 Architecture
* [00:26:46] Video Demo of SAM on Roboflow
* [00:32:44] Extending SAM 2 with other models
* [00:35:00] Limitations of SAM: Screenshots
* [00:38:56] SAM 2 Paper
* [00:39:15] SA-V Dataset and SAM Data Engine
* [00:43:15] Memory Attention to solve Video
* [00:47:24] "Context Length" in Memory Attention
* [00:48:17] Object Tracking
* [00:50:52] The Future of FAIR
* [00:52:23] CVPR, Trends in Vision
* [01:02:04] Calls to Action

Transcript

[00:00:00] [music intro][00:02:11] AI Charlie: Happy Yoga! This is your AI co-host Charlie. Thank you for all the love for our special 1 million downloads Wins of AI Winter episode last week, especially Sam, Archie, Trellis, Morgan, Shrey, Han, and more. For this episode, we have to go all the way back to the first viral episode of the podcast, Segment Anything Model and the Hard Problems of Computer Vision, which we discussed with Joseph Nelson of Roboflow.[00:02:39] AI Charlie: Since Meta released SAM 2 last week, we are delighted to welcome Joseph back as our fourth guest co-host to chat with Nikhila Ravi, Research Engineering Manager at Facebook AI Research and lead author of SAM 2. Just like our SAM 1 podcast, this is a multimodal pod because of the vision element, so we definitely encourage you to hop over to our YouTube at least for the demos, if not our faces.[00:03:04] AI Charlie: Watch out and take care.[00:03:10] Introducing Nikhila[00:03:10] swyx: Welcome to the latest podcast. I'm delighted to do Segment Anything 2 - one of our very first viral podcasts was Segment Anything 1 with Joseph. Welcome back. Thanks so much. And this time we are joined by the lead author of Segment Anything 2, Nikki Ravi, welcome.[00:03:25] Nikhila Ravi: Thank you. Thanks for having me.[00:03:26] swyx: There's a whole story that we can refer people back to - the episode of the podcast way back when - for the story of Segment Anything, but I think we're interested in just introducing you as a researcher, on the human side: what was your path into AI research? Why, you know, why did you choose computer vision coming out of your specialization at Cambridge?[00:03:46] Nikhila Ravi: So I did my undergraduate degree in engineering at Cambridge University. The engineering program is very general.
So first couple of years, you sort of study everything from mechanical engineering to fluid mechanics, structural mechanics, material science, and also computer science.[00:04:04] Nikhila Ravi: Towards the end of my degree, I started taking more classes in machine learning and computational neuroscience, and I really enjoyed it. And actually after graduating from undergrad, I had a place at Oxford to study medicine. And so I was initially planning on becoming a doctor, had everything planned, and then decided to take a gap year after finishing undergrad.[00:04:28] Nikhila Ravi: And actually that was around the time that sort of deep learning was emerging. And in my machine learning class in undergrad, I remember one day our professor came in, and that was when Google acquired DeepMind. And so that became like a huge thing. We talked about it for the whole class. It kind of really stuck.[00:04:48] Nikhila Ravi: And that kicked off my thinking about, okay, maybe I want to try something different other than medicine. Maybe this is a different path I want to take. And then in the gap year, I did a bunch of coding, worked on a number of projects, did some sort of freelance contracting work. And then I got a scholarship to come and study in America.[00:05:06] Nikhila Ravi: So I went to Harvard for a year, took a bunch of computer science classes at Harvard and MIT, worked on a number of AI projects, especially in computer vision. I really, really enjoyed working in computer vision. I applied to Facebook and got this job at Facebook - at the time Facebook, now Meta - and I've been here for seven years. So, a very circuitous path, probably a very unconventional one: I didn't do a PhD, I'm not like a typical research scientist, I definitely came from more of an engineering background. But since being at Meta, I have had amazing opportunities to work across so many different interesting problems in computer vision, from 3D computer vision -[00:05:50] Nikhila Ravi: how can you go from images of objects to 3D structures? - and then going back to 2D computer vision and actually understanding the objects and the pixels and the images themselves. So it's been a very interesting journey over the past seven years.[00:06:05] swyx: It's weird because like, I guess with Segment Anything 2, it's like 4D, because you solve time, you know - you started with 3D and now you're solving the 4D.[00:06:14] Nikhila Ravi: Yeah, it's just going from 3D to images to video. It's really covering the full spectrum. And actually, one of the nice things has been - so I think I mentioned I wanted to become a doctor, but actually SAM is having so much impact in medicine, probably more than I could have ever had as a doctor myself. So I think, you know, hopefully SAM 2 can also have a similar sort of impact in medicine and other fields.[00:06:39] The Impact of SAM 1 in 2023[00:06:39] swyx: Yeah. I want to give Joseph a chance to comment. Does that also mirror your - we know your story about going into vision - but like, in the past year, since we did our podcast on SAM, what's been the impact that you've seen?[00:06:51] Joseph Nelson: Segment Anything
set a new standard in computer vision. You know, recapping from the first release to present: SAM introduced the ability for models to, near zero-shot - meaning without any training - identify kind of perfect polygons and outlines of items and objects inside images, and that capability previously required lots of manual labeling, lots of manual preparation, clicking very meticulously to create outlines of individuals and people.[00:07:25] Joseph Nelson: And there were some models that attempted to do zero-shot segmentation of items inside images, though none were as high quality as Segment Anything. And with the introduction of Segment Anything, you can pass an image with SAM 1 - with SAM 2, videos as well - and get pixel-perfect outlines of most everything inside the images.[00:07:52] Joseph Nelson: Now there are some edge cases across domains, and similar to the human eye, sometimes you need to say, like, which item maybe you most care about for the downstream task and problem you're working on. Though, SAM has accelerated the rate at which developers are able to use computer vision in production applications.[00:08:13] Joseph Nelson: So, at Roboflow, we were very quick to enable the community of computer vision developers and engineers to use SAM and apply it to their problems. The principal ways of using SAM: you could kind of use SAM as is, to like pass an image and receive back masks. Another use case for SAM is in preparation of data for other types of problems.[00:08:37] Joseph Nelson: So, for example, in the medical domain, let's say that you're working on a problem where you have a bunch of images from a wet lab experiment. And from each of those images, you need to count the presence of a particular protein that reacts to some experiment. To count all the individual protein reactions, you can go in - and lab assistants to this day will still kind of individually count - and say what the presence of all those proteins is.[00:09:07] Joseph Nelson: With Segment Anything, it's able to identify all of those individual items correctly. But often you may need to also add like a class name to what the protein is. Or you may need to say, hey, like, I care about the protein portion of this, I don't care about the rest of the portion of this in the image.[00:09:26] Joseph Nelson: And what it encourages and asks the user to do is to provide some visual prompting to say, hey, which part - like, SAM says, hey, I can find segments of anything, but which segments do you care about? And so you can do visual prompting, which is kind of a new primitive that SAM introduced. And so at Roboflow, we have one portion of our tool stack that enables users to very quickly label data.[00:09:48] Joseph Nelson: With Segment Anything, SAM can already provide, hey, here's where I see the outlines of objects. Or a user can click to prompt to say, hey, here's where the outlines of objects matter. And I recently pulled statistics from the usage of SAM in Roboflow over the course of the last year. And users have labeled about 49 million images using Segment Anything on the hosted side of the Roboflow platform.[00:10:12] Joseph Nelson: And that's like 5 million in the last 30 days alone. And of those images, we did kind of like a rough back-of-the-napkin calculation of how much time that has saved.
Because, again, the alternative is you're clicking individual points to create a polygon, and with SAM you just click once and it guesses where the polygon is.[00:10:32] Joseph Nelson: And I'm sure in a bit we can maybe screen share and show some examples of what this experience is like. And in that time estimation, it's like, on average it saves, you know, maybe a dozen or so seconds. And we estimate that this has probably saved on the order of magnitude of 35 years of time for users.[00:10:53] Nikhila Ravi: That's incredible.[00:10:54] Joseph Nelson: So, I mean, basically like in the first year of a model being available, not only can you say, hey, I'm just going to go use this model - those numbers, that like 49 million images, is an estimate directly related to just the hosted side. So imagine all of the users that are self-hosting or using SAM for robotics applications or out in the field or offline, where it's not even like the time or the image counts are tabulated.[00:11:20] Joseph Nelson: And we're probably talking about, you know, just a fraction of the amount of value that's actually being produced for a number of downstream tasks. So to say that the impact has been - you know, people use terms like game changing and these sorts of things - it has changed the industry. It's set a new standard.[00:11:36] Joseph Nelson: And with the release of SAM 2, I think we're about to see an acceleration of those capabilities for a lot of reasons.[00:11:42] Nikhila Ravi: That's really great to hear. I think one of the things with SAM 1 was: how many fields actually rely on manual segmentation? I think we're not really exposed to that. Maybe you are at Roboflow, because you get to see all the users of these tools.[00:11:57] Nikhila Ravi: But for me, it was, you know, people working on understanding coral reef bleaching or farmers counting their cows and so many different applications that as a researcher you never get exposed to, but you can have impact towards. So I think that was really awesome to hear.[00:12:15] Do People Finetune SAM?[00:12:15] swyx: So as sort of audience surrogate, who knows less than the two of you, I'm going to ask a really dumb question maybe, but is everyone using stock Segment Anything?[00:12:23] swyx: Are they fine-tuning for the medical domain? Like how on earth could it work for the medical field without fine-tuning, right? Like, is that a thing?[00:12:32] Nikhila Ravi: So I mean, I can give a quick perspective from the research side. So one of the design decisions we made in SAM was to not have class labels. And so all the data is annotated in a class-agnostic way.[00:12:48] Nikhila Ravi: So anything that has a boundary, we consider to be an object. So for example, in any image, there's lots of small objects. We might not know what the names of them are, but if you can draw a boundary around it - so you can imagine that we have 11 million images in the SA-1B dataset, we annotated all the objects, there's many, many small objects.[00:13:12] Nikhila Ravi: And so if you think about cells, they're also kind of small objects; there's probably things in the training data that looked like it, but we didn't have to label it.
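Joseph's back-of-the-napkin estimate above is easy to reproduce; in this quick check, the per-image seconds saved is the only assumed input:

```python
# Reproducing the time-saved estimate (seconds-per-image is the assumption).
images_labeled = 49_000_000        # images labeled with SAM on the hosted Roboflow platform
seconds_saved_per_image = 12       # "maybe a dozen or so seconds" per image

seconds_saved = images_labeled * seconds_saved_per_image
years_saved = seconds_saved / (60 * 60 * 24 * 365)
print(f"{years_saved:.1f} years")  # ≈ 18.6 years at 12 s/image; ~22 s/image yields the ~35-year figure
```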
And so that means that even when you use SAM for applications that it wasn't really trained for, because we didn't restrict it to a certain set of categories, you can actually use it out of the box without custom adaptation.[00:13:35] Nikhila Ravi: But having said that, there's probably certain domains where you need some expertise in order to be able to segment something properly. And for those use cases, Having some extra fine tuning data would probably help, and we've sort of seen that there's some papers that have come out that do this, and, you know, we'd love to hear, Joseph, how people are collecting data with SAM and fine tuning for their use cases.[00:13:59] Joseph Nelson: Once SAM came out, there were adaptations that said, could we use SAM to be, you know, like, efficient SAM? Like, basically take SAM and maybe accelerate it. And then there were domain adapted SAMs, like CellSAM, for example, out of the UC system. Now, what's interesting is, there's, like, adapting SAM to a domain, there's kind of two ways by which that's done.[00:14:21] Joseph Nelson: One is, as you mentioned, like, potentially SAM doesn't have a good concept of The objects of interest. And so you need to do domain adaptation and increase the accuracy for zero shot prediction. The second way though, is it's not fine tuning. It's actually just prompting. It's just guiding the model existing knowledge.[00:14:42] Joseph Nelson: to say which segments you care about. And both those are actually kind of equally important on the application side. You need to, like, a priori ensure that the objects of interest can be correctly segmented and maybe collect data to do that. But even if you had, like, a perfect SAM, like an omniscient SAM that could see every segment in every domain with all pixels perfectly outlined, in production, you would still need some way to Almost like signal to the model what you care about like to paint this picture if you are like a retailer and you are providing Photos of models wearing your clothing on your retail site You may care about you know only the shirt and Sam by default might segment the full person And so there's you know visual prompting that you can do to ensure that you only outline Maybe the shirt for the purposes of swapping in and out different shirts for displaying a given model on a retail page You And so I think what's interesting is that's where, like I wouldn't call it domain adaptation, but that's where, like, when you apply to industry, like, one thing that's particularly important with tooling and enabling SAM to reach its full potential.[00:15:51] swyx: That's really encouraging to hear. I should also think, like, you know, the last time we talked about this, we wanted to, the very natural addition on the class labeling side is the grounding Dino work, right? So I think people, built a grounding SAM and all the other extensions.[00:16:05] Video Demo of SAM[00:16:05] swyx: I think it's, it's probably a good time to cut to a quick demo of SAM2 for people who are, who are tuning in for SAM2 and who better to demo SAM2 than Nikki.[00:16:15] Nikhila Ravi: Sure. So I'll try to narrate what I'm what I'm doing. So audio listeners can also understand. So we have a web demo where anyone can try SAM2 on a video. Here we have a video of someone kicking a football, and I'm going to click on the football to select the object in the first frame. 
But you can actually select the object in any frame of the video, and this will work.[00:16:40] Nikhila Ravi: The next step is to hit track. So the model's now tracking this in real time. We don't save any of this, it's all running in real time. And now you can see the ball has been tracked throughout the entire video. There's even like a little bit of a challenging case here where the shoe covers the football.[00:16:59] Nikhila Ravi: And actually, you know, the model makes a little bit of a mistake, but that's okay. Because we can actually, here, the model makes a little bit of a mistake here. But you know, we can actually add a refinement click. You can add negative clicks until we get the mask that we want on this frame. And then you can hit track again, and the model will track the object, taking into account the additional information I've provided at that frame.[00:17:25] Nikhila Ravi: We've also added a couple of other fun things you can do on top of the track, like add effects. We can add you know, foreground effects, background effects. And these are just ways of showing how we can use the output from SAM2 as part of other tools like video editing tools. Other systems, so this is just a preview of what you can do with SAM2, but the really cool use cases are places where we might not have even imagined SAM2 being useful.[00:17:54] Nikhila Ravi: So we have a number of examples of things you might want to use it for. There's like underwater videos that it works actually really well for even though we, models never really seen an octopus before and octopus have a lot of moving parts that SAM2 can actually quite effectively. Keep track of all the different tentacles and we can probably see it more clearly if I desaturate the background.[00:18:18] Nikhila Ravi: We can see that actually the tracking of all the different tentacles is Quite accurate. Another challenge with video is that objects can actually become occluded. They can disappear from view and reappear. And a really fun example here is the shuffling cup game, which many of you might have seen. And so here I can click on the ball in the first frame.[00:18:41] Nikhila Ravi: I can also, You know, click on a different cup. And so here, the additional challenge is that there's three cups that look exactly the same. And then there's the ball that will get occluded by the cup. So the ball's no longer visible, the cups are all moving around, they all look the same. But the model actually keeps track of the cup that we selected.[00:19:02] Nikhila Ravi: And, as you can see at the end, here I'll jump to the end so you can see. It actually finds the cup again. I wanted to point out a couple of fun demo UX features that we added that actually really helped with this. So if you can see at the bottom, there's these swim lanes and then the swim lanes, actually the thickness of the swim lane tells you if the object's visible or not.[00:19:22] Nikhila Ravi: So at the beginning, the object's visible,[00:19:25] swyx: the object[00:19:26] Nikhila Ravi: disappears, and then the object comes back. So you can actually visually tell. When the object's being occluded and when it's not, and so it's a nice way of like, knowing if you need to go in and fix the model prediction or not. And so these are some of the UX innovations that we came up with, as well as the model innovations.[00:19:46] Joseph Nelson: One thing that I think is really notable here, there's two things. 
One is that like, I'd love to have a little bit of a discussion about how the models keeping track of the embedded scene to keep track of the ball and the cup in different places. Put a pause on that for a second.[00:19:59] Why the Demo is so Important[00:19:59] Joseph Nelson: One thing that Meta has put an emphasis on here in a much greater degree than other model releases is the demo experience of recognizing that in addition to having a model that can do zero shot segmentation, you've created a web experience that allows folks to kind of experience both the video effects but the types of UX innovations that encourage usage and adoption.[00:20:23] Joseph Nelson: It's actually kind of reminiscent of The underlying technology of ChatGPT was available prior to the web experience of ChatGPT. Can you talk a bit about why that was a consideration to your team and how you thought about the creation of The demo experience in tandem with training and releasing a new model.[00:20:41] Nikhila Ravi: Yeah, absolutely. I think that's a really great example of how, you know, Chad, GPT was really more of a UX innovation. Obviously it was like a number of research innovations that helped to get to this point. But as you said, like the underlying technology was around for a while. And, you know, putting this UX around as a chat interface helped tremendously with the.[00:21:03] Nikhila Ravi: Adoption and people understanding how it could be useful for real world use cases. And in computer vision, especially, it's so visual. The best way to show how these models work. Is by trying it on your own image or your own video with the original SAM, we put a lot of effort in building like a high quality demo.[00:21:23] Nikhila Ravi: And the other piece here is that the demo is actually the annotation tool. So we actually. Use the demo as a way to improve our annotation tool. And so then it becomes very natural to invest in building a good demo because it speeds up your annotation and improves the data quality and that will improve the model quality.[00:21:43] Nikhila Ravi: With this approach, we found it to be really successful. And obviously externally, people really liked being able to try it. I think, you know, people in fields outside of machine learning would never have tried SAM if we didn't have that demo. And I think that definitely led to a lot of the adoption in, like, diverse fields.[00:22:05] Nikhila Ravi: And so because we saw that with SAM 2, like, the demo was a priority first class citizen from day one. And so we really invested in making that. And I think with SAM2 as well, we wanted to have like a step change in the demo experience. Interactive video segmentation, I think that experience is something that maybe has not had much thought given to it.[00:22:27] Nikhila Ravi: And we really wanted to be like, okay, if we are to design a step changing video segmentation experience, what would that look like? And that really did influence our model. And annotation design as well.[00:22:40] Joseph Nelson: It's a really encouraging trend for not thinking about only the new model capability, but what sort of applications folks want to build with models as a result of that downstream.[00:22:49] Nikhila Ravi: I think it also really forces you to think about many things that you might postpone, for example, efficiency.[00:22:55] Joseph Nelson: Yes.[00:22:55] Nikhila Ravi: For a good demo experience. Making it real time is super important. No one wants to wait. 
And so it really forces you to think about these things much sooner and actually makes us think about what kind of image encoder we want to use, or like other hardware efficiency improvements.[00:23:13] Nikhila Ravi: So those kinds of things, I think, become a first-class citizen when you put the demo first.[00:23:19] SAM 1 vs SAM 2 Architecture[00:23:19] Joseph Nelson: That's one thing I was going to ask about, and this is related to the architecture change. So with SAM 1 and the SAM 1 demo experience, you have the encoder that's creating the embeddings of all the potential spaces.[00:23:31] Joseph Nelson: That needs to be run on a GPU. That's a relatively intensive operation. But then the query of those embeddings can be run independently and on a cheaper process. So in the SAM 1 demo, the way that it was structured - and also this is the way that we have our SAM tool structured in Roboflow as well - is images go to a GPU to get all the SAM-based embeddings.[00:23:53] Joseph Nelson: But then for querying those embeddings, we do that client side, in the browser, so that the user can very quickly - you know, you can move your mouse over and you get the proposed candidate masks that SAM found for that region of the image. In SAM 2 you dropped that in the web demo. And I think that's because you made some notable improvements to the rate at which encoding happens.[00:24:16] Joseph Nelson: Can you talk a bit about what led to those speed increases and, again, how that interplays with providing a fast user experience for interacting with the model?[00:24:29] Nikhila Ravi: Yeah. So the SAM 2 web demo is primarily focused on video. We decided to just keep it simple and focus on video, and on GitHub we have a Colab notebook that shows how to run SAM 2 on images.[00:24:41] Nikhila Ravi: So if you're interested in replacing SAM with SAM 2 for images, check out GitHub. But on the SAM 2 demo, it's not as straightforward to adopt the same architecture as SAM for video, because we can't send the per-frame image embeddings for an entire video back to the front end. In SAM, each frame embedding was like four megabytes, but if you have a long video and that's like per frame, it would become impossible to send that back to the front end.[00:25:11] Nikhila Ravi: So, SAM 2 actually - in terms of the architecture details, I was actually just looking at this earlier - the SAM 1 model was around 630 million parameters. It's a fraction of the size of these large language models, so very small. Actually, SAM 2, the largest model, is around 224 million parameters, so it's actually one third the size of the original SAM model.[00:25:38] Nikhila Ravi: So we changed the image encoder from a ViT-H in SAM to a Hiera model, which was also developed by Meta. So that definitely was something that helped. And in terms of the efficiency compared to SAM: if we were to run SAM per frame on a video versus run SAM 2, it's around six times faster to run SAM 2 than SAM per frame.[00:26:03] Nikhila Ravi: A number of things improved the efficiency of SAM 2 such that we were actually able to run this entirely on the server and not have any component in the front end.
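For context on the "four megabytes per frame" figure Nikhila mentions: SAM's image encoder emits a 256×64×64 feature map per image, so the per-frame size - and why per-frame video embeddings can't be shipped to the browser - falls out directly. A quick check; the float32 assumption and the one-minute, 24 fps example are ours:

```python
# Size of one SAM image embedding (256 channels at 64x64 spatial resolution).
channels, h, w = 256, 64, 64
bytes_per_value = 4                # float32
embedding_mib = channels * h * w * bytes_per_value / 2**20
print(f"{embedding_mib:.0f} MiB per frame")           # 4 MiB, matching the ~4 MB figure

fps, seconds = 24, 60              # a hypothetical one-minute clip
video_gib = embedding_mib * fps * seconds / 1024
print(f"{video_gib:.1f} GiB of embeddings per minute")  # ≈ 5.6 GiB - far too much to send client side
```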
But I am very curious to see who puts this on device - like, I'm pretty sure soon we'll see like an on-device SAM 2, or, you know, maybe even running in the browser or something, so.[00:26:25] Nikhila Ravi: I think that could definitely unlock some of these edge use cases that we were able to make a compelling web demo without having to do that.[00:26:34] swyx: Hugging Face is probably already working on a Transformers.js version of it, but totally makes sense. I want to talk more about things from the paper, but I think we're still in this sort of demo section.[00:26:42] Video Demo of SAM on Roboflow[00:26:42] swyx: And so I want to hand it to Joseph for his demo to see what the Roboflow site looks like.[00:26:47] Joseph Nelson: So I can give some context into one key area that, Nikhila, you mentioned earlier, which is: SAM has made the decision, both SAM 1 and SAM 2, to be class-agnostic in terms of its predictions, and that way you then have the ability to have a generalizable model for zero-shot capability. However, in a lot of domain applications, you do want the class-wise name. And so a lot of the challenge can be adding that class-wise name, at least for the annotation, to an experience that we've created. That's one of the key considerations. So I will similarly share my screen and show an example.[00:27:27] Joseph Nelson: Here, I have a bunch of images, and there's a number of ways that I could annotate things: like, I could prompt a large multimodal model with like grounding capabilities, you know, you could outsource it, or I can do manual labeling. And with the manual labeling, this is where we make use of models like Segment Anything[00:27:45] Joseph Nelson: to propose candidate masks and make it faster. So we have, you know, this annotation pane and what we call the smart poly tool, which is powered by Segment Anything. This is currently Segment Anything 1. We're accelerating and seeing improvements similar to what the paper shows - Segment Anything 2 performing better on[00:28:06] Joseph Nelson: images as well as video. But with Segment Anything, I'm able to basically prompt regions of interest in my image. So for example, let's say I want to, like, add the drum set. You'll see here that, like, the original candidate proposal is just the bass drum, but let's say I wanted the whole drum set.[00:28:26] Joseph Nelson: So the UX primitive of being able to add and subtract candidate regions of interest is really intuitive here. And now, great, I have this outline, but in fact what I want is, I want to name that as a class. Because maybe for the model that I'm building, I want to build like a task-specific model, you know, like an object detection model or an instance segmentation model.[00:28:50] Joseph Nelson: Or, you know, maybe I'm even using like a multimodal model and I want that multimodal model to refer to regions of interest in the images as a specific thing. And so I think what's, you know, really powerful is, of course, like, I get this really rich zero-shot prediction. And here we have our friend Rick.[00:29:10] Joseph Nelson: So I get this really rich candidate set of predictions. But then by adding the class-wise label, I can, you know, very quickly make sure that any downstream tasks are aware not just of the segment, but also of what is inside that segment.
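The add-and-subtract click interaction Joseph walks through maps onto SAM's point prompts. A minimal sketch against the public segment-anything package - the checkpoint path, placeholder image, and click coordinates are stand-ins, not values from the episode:

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Checkpoint path is a placeholder; weights are downloaded separately.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for a real RGB image
predictor.set_image(image)                        # the one GPU-heavy encoding per image

# One positive click (label 1) to add a region, one negative click (label 0) to subtract.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[320, 240], [400, 260]]),
    point_labels=np.array([1, 0]),
    multimask_output=True,                        # several candidate masks, ranked by score
)
best_mask = masks[int(scores.argmax())]
```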
Which actually takes me to a separate point, of something that I predict is probably going to happen - and Nikhila, I'm actually kind of interested why maybe your team made a conscious decision to not do this initially with SAM 2.[00:29:40] Joseph Nelson: There's been an emergent set of models that are also adding open-text prompting capabilities to grounding models. So for example, like, you've seen models like Grounding DINO or OWL-ViT, where, you know, you can do even image-to-image or text-to-image based prompting to find regions of interest. And maybe I can actually give an example of that even in the context of this same data.[00:30:05] Joseph Nelson: So if I wanted to try out, you know, Grounding DINO on this same set of images, I could try out, you know, prompting Grounding DINO for a set of different classes. And what's notable is, let's do, I don't know, let's prompt for person, and we'll prompt for person and prompt for, I don't know, microphone.[00:30:26] Joseph Nelson: NLASC or microphone. Here I can text prompt the image, and then the understanding - in this case Grounding DINO's understanding - of where people are in this image allows me to create, in this case, bounding boxes. But, you know, soon you can do segmentations, or in tandem with SAM do segmentations. And, you know, we've already seen applications of using SAM 2 in tandem with models like Grounding DINO or Florence-2.[00:30:54] Joseph Nelson: So that people can basically text prompt and then get the benefits of the zero-shot segmentation at the same time as getting the open-form querying. And in doing so, you know, we maintain a framework called Autodistill, so folks can very quickly, you know, bring some images and then use Autodistill to find some ontology and then prompt and say what you want from that ontology.[00:31:19] Nikhila Ravi: So you already do this for video as well?[00:31:21] Joseph Nelson: You can apply it to videos or groups of images, yes. So this is using a project called Autodistill. And the concept of Autodistill is: use a base model - like a big base model, which could be like SAM or Grounding DINO - and then you pass a directory of images, which also could be video broken into individual frames, and you pass an ontology as well.[00:31:43] Joseph Nelson: So an example I was just showing was like the hello world we have, which is like a shipping container. And then the combination of the grounding capabilities of, in the example I was showing, Florence-2, plus SAM, looks for the concept of container, and then SAM does the rich segmentation of turning that concept of container into the candidate proposal of the region, so that a user could just say, hey, I want all the shipping containers, run this across a bunch of images or video frames, and then get back the class-wise labels plus the regions of interest.[00:32:17] Joseph Nelson: And this feels like a natural extension. And in fact, like, the open-form grounding capabilities between SAM 1 and SAM 2 became something the field was broadly doing. So I'm curious, like, from your perspective, one of the things I thought maybe SAM 2 would do is actually add this capability natively. So I'm curious to hear, like, the conscious decision to say, hey, we want to continue to be class agnostic.[00:32:39] Extending SAM 2 with other models[00:32:39] Joseph Nelson: We don't want to add, yet maybe, open-form text prompting as a part of finding the segments and parts of images. And I'd love to hear about, like, the decision to think about it that way.
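For readers who want the detector-boxes-into-SAM pattern Joseph just described in code form, here is a hedged sketch: the `detect_boxes` helper is hypothetical and stands in for any grounding model (Grounding DINO, OWL-ViT, Florence-2), while the SAM calls use the public segment-anything API:

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")  # placeholder path
predictor = SamPredictor(sam)

def text_prompted_masks(image: np.ndarray, boxes: np.ndarray) -> list:
    """Turn detector boxes (N x 4, xyxy pixel coords) into SAM masks for one RGB image."""
    predictor.set_image(image)
    masks = []
    for box in boxes:
        m, _, _ = predictor.predict(box=box, multimask_output=False)  # box prompt
        masks.append(m[0])
    return masks

# boxes = detect_boxes(image, prompt="shipping container")  # hypothetical grounding step
# masks = text_prompted_masks(image, boxes)
```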
And if you are encouraged, or if you want, kind of, like, what's happening here - where people are naturally combining these capabilities - as something that you would expect and encourage to happen despite not having it[00:33:00] Joseph Nelson: in the base model itself.[00:33:02] Nikhila Ravi: Yeah, it's a great question. So I think it's really cool that the community is taking SAM and taking SAM 2 and building on top of it and coming up with cool applications. We love to see that. That's exactly why we open source our work. And then in terms of why we didn't put it into SAM 2: as you've probably seen with SAM and SAM 2, it's a fairly narrow problem.[00:33:25] Nikhila Ravi: But we really tried to make it a step change in the capability. And so with each version, we are trying to limit the focus to one thing that we know we can do really well. And in this case, like the first SAM, it was class-agnostic segmentation - but can we do it so well that it's effectively solved?[00:33:47] Nikhila Ravi: And similarly, can we do that same thing, but with video segmentation? So one step at a time, we are working on each of these problems one at a time so that we can actually deliver something that's really world class and step changing.[00:34:03] Joseph Nelson: So does that mean SAM 3 will have the text prompting problem as, like, the next challenge?[00:34:09] Nikhila Ravi: Who knows, who knows? Maybe the community will - or we'll build that too. So[00:34:15] Joseph Nelson: it makes sense to, like, very narrowly do something very well. And that's, I think, proven to be well accomplished.[00:34:21] Nikhila Ravi: It's like taking both the data, the model and the demo, and how can we push all three towards solving one thing really well?[00:34:30] Nikhila Ravi: So we found that that's like a good recipe, and that's what we've limited the focus of each of these models to.[00:34:38] swyx: This development reminds me of how, you know, when you break out the interpretability of ConvNets, you can see, like, oh, this is the edge detection one. I feel like SAM is the edge detection version equivalent.[00:34:51] swyx: And then you build up to whatever the next feature is on top of that.[00:34:54] Limitations of SAM: Screenshots[00:34:54] Joseph Nelson: Can I bring up one limitation of SAM? So, like - even SAM 1, SAM 2 - the model was released at 4 PM Pacific on Monday. We're recording this at 11 AM Pacific on, on, on Thursday. So it's very fresh, for a lot of the capabilities.[00:35:09] Joseph Nelson: It is so clear that it is a stepwise change in the capability that, Nikhila, you mentioned your team wants to do, which is extend SAM's zero-shot class-agnostic capability to video - like, A plus, kind of mission accomplished. One thing that's interesting is finding, like, domain problems where there might be still domain applicability and domain adaptation that is available.[00:35:32] Joseph Nelson: One benchmark that we introduced at CVPR is this thing called RF100, which is like seven different domain-type problems that the industry commonly is working on in vision, like underwater, document processing, aerial examples, medicine examples.
And one place where interestingly segment anything maybe less performant than other models is handling screenshots.[00:35:57] Joseph Nelson: For example, like a lot of folks that are building agents to interact with the web are particularly interested in that challenge of given a screenshot of a computer, what are all the buttons. And how could I autonomously navigate and prompt and tell it to click? And I can show an example of like maybe what, how like Sam kind of performs on this challenge just to outline some of the context of this problem.[00:36:23] Joseph Nelson: But I'm curious like how you think about limitations like this and what you would expect to want to be the case. So here I just have a notebook where I run Sam on the source image on the left. Or the source image on the left and then Sam output is on the right. And this is just a screenshot of, of a website where we just grab like the top 100 websites by traffic and grab screenshots from them.[00:36:42] Joseph Nelson: One example of a place where I could see the community improving on Sam, and I'm curious how you think about this challenge and maybe why Sam is less well adapted for this type of problem. Is processing screenshots. So I'll share my screen to give an example for, for viewers that are participating here, you see like an example, a screenshot of a website on the left, and then right is SAM two running on that image.[00:37:06] Joseph Nelson: And in the context of agents, folks usually want to have like, Hey, tell me all of the buttons that a, an agent could press. Tell me like maybe the headlines of the articles tell me the individual images and Sam two behaves perhaps predictably, where it outlines like people in the images and like some of like the, the screen text.[00:37:22] Joseph Nelson: I'm curious, like, how you think about a challenge like this for a model that sees everything in the world, what about handling digital contexts? And Why maybe it could perform better here and how you would expect to see improvement for domains that might have been out of distribution from the training data?[00:37:40] Nikhila Ravi: Yeah, this is a good question. So fair, we don't really build with a specific use case in mind. We try to build like these foundational models that can be applied to lots of different use cases out of the box. So I think in this kind of example, potentially people might want to annotate some data.[00:37:59] Nikhila Ravi: Fine tune on top of what we release. I think we probably won't build things that are very custom for different use cases. I think that's not a direction we'll go in, but as you said, like the model is an annotation tool to improve the model. And so I think that's definitely the approach we want to take is we provide the tools for you to improve the model as well as the model itself.[00:38:27] Joseph Nelson: That makes sense. Focus on like as many. Multi or zero shot problems and then allow the community to pick up the torch for domain adaptation.[00:38:34] Nikhila Ravi: Yeah, absolutely. Like, we can't solve all the problems ourselves. Like, we can't solve all the different domains. But if we can provide a sort of base hammer tool, and then people can apply it to all their different problems.[00:38:48] SAM 2 Paper[00:38:48] swyx: If you don't mind, I guess we want to transition to a little bit on like asking more questions about the paper.[00:38:53] Udio AI: Sure.[00:38:54] swyx: There's a lot in here. 
I love the transparency from Meta recently with like LLAMA 3 last week and then, and was it last week? Maybe, maybe a little bit less than last week. But just like just really, really well written and a lot of disclosures, including the data set as well.[00:39:08] SA-V Dataset and SAM Data Engine[00:39:08] swyx: I think the top question that people had on the data set, you know, you release a diverse videos and there was, there's a lot of discussion about the data engine as well, which I really love. And I think it's innovative if you wanted. I think the top question is like, how do you decide the size of data set?[00:39:22] swyx: You know, what were you constrained by? People are asking about scaling laws. You had some ablations, but as a research manager for this whole thing, like how do you decide what you need?[00:39:32] Nikhila Ravi: Yeah. I mean, it's a great question. I think it's, as with all papers, you write them at the end of the project, so we can put these nice plots at the end, but going into it, I think, you know, the data engine design really follows.[00:39:47] Nikhila Ravi: So, this is sort of the model design, how we thought about the task, how we thought of the model capabilities. You can really see it's reflected in the different phases of the data engine. We started with just SAM, we apply SAM per frame. That's like the most basic way of extending SAM to video. Then the most obvious thing to do is to take the output masks from SAM and then provide it as input into a video object segmentation model that takes the mask as the first frame input.[00:40:19] Nikhila Ravi: And that's exactly what we did. We had SAM plus a version of SAM2 that only had mask as input. And then in the last phase, we got rid of SAM entirely and just had this one unified model that can do both image. And video segmentation. And I can do everything in just one model. And we found that, you know, going from each phase, it both improved the efficiency and it improved the data quality.[00:40:46] Nikhila Ravi: And in particular, when you get rid of this two part model, one of the advantages is that when you make refinement clicks, so, You prompt the model in one frame to select an object, then you propagate those predictions to all the other frames of the video to track the object. But if the model makes a mistake and you want to correct it, when you have this unified model, you only need to provide refinement clicks.[00:41:14] Nikhila Ravi: So you can provide maybe a negative click to remove a region or a positive click to add a region. But if you had this decoupled model, you would have to Delete that frame prediction and re annotate from scratch. And so you can imagine for more complex objects, this is actually adding like a lot of extra time to redefine that object every time you want to make a correction.[00:41:39] Nikhila Ravi: So both the data and the data engine phases really follow, like how we thought about the model design and the evolution of the capabilities, because it really helped us to do that. improve the data quality and the annotation efficiency as well.[00:41:54] swyx: Yeah, you had a really nice table with like time taken to annotate and it was just going down and down.[00:41:58] swyx: I think it was like down by like 90 percent by the time you hit stage[00:42:02] Joseph Nelson: three, which is kind of cool. We joke that when SAM 1 came out at RoboFlow, we're like, was this purpose built for our software? 
Like you have like the embedding, you have the embedding take like a big model and the querying of the embeddings A smaller model that happens in browser, which felt remarkably aligned.[00:42:18] Joseph Nelson: Now hearing you talk about how you think about building models with a demo in mind, it makes sense. Like, you're thinking about the ways that folks downstream are going to be consuming and creating value. So, what felt like maybe a coincidence was perhaps a deliberate choice by Meta to take into account how industry is going to take Seminal advances and apply them.[00:42:36] Nikhila Ravi: Yeah. And it's not just humans. Like it could also be a model that outputs boxes that then get fed into this model. So really thinking about this as a component that could be used by a human or as a component, as part of a, of a larger AI system. And that has, you know, a number of design requirements. It needs to be promptable.[00:42:56] Nikhila Ravi: It needs to be, have the zero shot generalization capability. We, you know, need it to be real time and. Those requirements really are very core to how we think about these models.[00:43:08] Memory Attention to solve Video[00:43:08] swyx: I cannot end this podcast without talking about the architecture, because this is your, effectively the sort of research level, architecture level innovation that enabled what I've been calling object permanence for SAM.[00:43:22] swyx: And it's memory retention. What was the inspiration going into it? And you know, what did you find?[00:43:27] Nikhila Ravi: Yeah, so at a high level, the way we think about extending SAM to video is that an image is just a special case of a video that just has one frame. With that idea in mind, we can extend the SAM architecture to be able to support segmentation across videos.[00:43:45] Nikhila Ravi: So this is a quick video that shows how this works. So SAM architecture, we have the image encoder, we have a prompt encoder, we have a mask decoder. You can click on an image. And that basically is a prompt, we use that prompt along with the image embedding to make a mask prediction for that image. Going to SAM2, we can also apply SAM2 to images because we can, you know, as I said, treat an image as a video with a single frame.[00:44:15] Nikhila Ravi: And so when we, in the SAM2 architecture, we introduce this new memory mechanism that consists of three main components. There's memory attention, there's a memory encoder, and then there's a memory bank. And when we apply SAM2 to images, these are effectively not used. And the architecture just collapses down to the original SAM architecture.[00:44:35] Nikhila Ravi: But when we do apply this to video, the memory components become really useful because they provide the context of the target object from Other frames. And so this could be from past frames. It can be from, there's two types of memory. So there's like the condition, conditional frames or the prompted frames, which are basically the frames at which a user or a model provides input like clicks.[00:45:01] Nikhila Ravi: And then there's like the surrounding frames. And say we use six frames around the current frame as memory of the object. So there's, there's those, those, both those types of memory that we use to make the prediction. Going into a little bit more detail about that, there's like two kinds of memory that we use.[00:45:18] Nikhila Ravi: So one is like spatial memory. So it's like this high resolution memory that captures the spatial details. 
And then we also have this like longer term object pointer memory that captures some of the sort of higher level concepts. And I think Swyx, you had a comment about how does this relate to sort of context window and LLMs.[00:45:37] Nikhila Ravi: And both of these types of memories have some relation to context window, so they both provide different types of information on the spatial side or in terms of the concept of the objects that we want to track. And so we found that having like six frame length for the spatial memory, Coupled with this longer period of the object pointer memory provides strong video segmentation accuracy at high speed.[00:46:01] Nikhila Ravi: So, as I mentioned, the real time aspect is really important. We have to find this speed accuracy trade off. And one way in which we sort of circumvent this is by allowing additional prompts on subsequent frames. So even if the model makes a mistake, maybe it loses the object. After an occlusion, you can provide another prompt, which actually goes into the memory.[00:46:24] Nikhila Ravi: And so the prompted frames are always in the memory. And so if you provide a prompt on a frame, we will, or the model will always remember what you provided. And so that's a way in which we can sort of avoid some of the model failure cases that actually is a big limitation of current models, current video object segmentation models.[00:46:45] Nikhila Ravi: Don't allow any way to recover if the model makes a mistake. And so, Joseph, going back to your point about the demo, that's something that we found just by playing with these models. There's no way to make a correction, and in many real world use cases, like, it's not going to be a one time prediction, but you actually want to be able to intervene, like, if an LLM makes a mistake, you can actually be like, no, actually do it this way, and provide feedback, and so, We really want to bring some of that thinking into how we build these computer vision models as well.[00:47:16] "Context Length" in Memory Attention[00:47:16] swyx: Amazing. My main reaction to finding out about the context length of eight input frames and six pass frames as their default is why not 60? Why not 600? In text language models, we're very used to severely extending context windows. And what does that do to the memory of your model?[00:47:35] Nikhila Ravi: So I think maybe one, one thing that's different is that the object in video, it is challenging.[00:47:41] Nikhila Ravi: Objects can, you know, change in appearance. There's different lighting conditions. They can deform, but I think a difference to language models is probably the amount of context that you need is significantly less than maintaining a long multi time conversation. And so, you know, coupling this. Short term spatial memory with this, like, longer term object pointers we found was enough.[00:48:03] Nikhila Ravi: So, I think that's probably one difference between vision models and LLMs.[00:48:09] Object Tracking[00:48:09] Joseph Nelson: I think so. If one wanted to be really precise with how literature refers to object re identification, object re identification is not only what SAM does for identifying that an object is similar across frames, It's also assigning a unique ID.[00:48:25] Joseph Nelson: How do you think about models keeping track of occurrences of objects in addition to seeing that the same looking thing is present in multiple places?[00:48:37] Nikhila Ravi: Yeah, it's a good question. 
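As a reader's aside on the architecture discussion above, here is a minimal, illustrative sketch of the streaming-memory loop: cross-attending the current frame's features to a small bank of recent frames plus always-retained prompted frames. This is not FAIR's implementation - every module name, shape, and the six-frame window are assumptions taken only from the conversation:

```python
import collections
import torch
import torch.nn as nn

class StreamingMemory(nn.Module):
    """Toy analogue of SAM 2-style memory: a rolling bank of recent-frame
    features plus always-kept prompted-frame features, fused by cross-attention."""

    def __init__(self, dim=256, num_heads=8, window=6):
        super().__init__()
        self.memory_attention = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.memory_encoder = nn.Linear(dim, dim)       # stand-in for the real memory encoder
        self.recent = collections.deque(maxlen=window)  # short-term spatial memory (six frames)
        self.prompted = []                              # prompted frames are never evicted

    def forward(self, frame_feats, was_prompted=False):
        # frame_feats: (1, tokens, dim) image-encoder features for the current frame.
        bank = self.prompted + list(self.recent)
        if bank:
            memory = torch.cat(bank, dim=1)             # concatenate memories along tokens
            frame_feats, _ = self.memory_attention(frame_feats, memory, memory)
        encoded = self.memory_encoder(frame_feats).detach()
        (self.prompted if was_prompted else self.recent).append(encoded)
        return frame_feats                              # conditioned features go to the mask decoder

mem = StreamingMemory()
for t in range(10):
    feats = torch.randn(1, 64 * 64, 256)
    out = mem(feats, was_prompted=(t == 0))  # the clicked first frame stays in memory
```

Note how the design mirrors the two points made in the discussion: prompted frames are always retained (so corrections are never forgotten), while ordinary frames roll through a short fixed window rather than an LLM-style long context.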
I think, you know, SAM2 definitely isn't perfect, and there's many limitations that we'd love to see people in the community help us address. But one definitely challenging case is where there are multiple similar-looking objects. Especially if it's a crowded scene with multiple similar-looking objects, keeping track of the target object is a challenge.[00:49:03] Nikhila Ravi: That's still something that I don't know if we've solved perfectly, but again, the ability to provide refinement clicks is one way to sort of circumvent that problem. In most cases, when there's lots of similar-looking objects, if you add enough refinement clicks, you can get the perfect track throughout the video.[00:49:22] Nikhila Ravi: So definitely that's one way to solve that problem. You know, we could have better motion estimation, we could do other things in the model, to be able to disambiguate similar-looking objects more effectively.[00:49:35] swyx: I'm just interested in leaving breadcrumbs for other researchers, anyone interested in this kind of architecture.[00:49:41] swyx: Like, are there papers that you would refer people to that are influential in your thinking, or, you know, have other interesting alternative approaches?[00:49:49] Nikhila Ravi: I think there's other ways in which you can do tracking in video. You might not even need the full mask. There's, I think, some other works that just track, like, points on objects.[00:49:59] Nikhila Ravi: It really, really depends on what your application is. Like, if you don't care about the entire mask, you could just track a bounding box, you could just track a point on an object. And so having the high-fidelity mask might not actually be necessary for certain use cases. From that perspective, you might not need the full capabilities[00:50:19] Nikhila Ravi: of SAM or SAM2. There's many different approaches to tracking. I would encourage people to think about what they actually need for their use case, and then try to find something that fits; versus, yeah, maybe SAM2 is too much, maybe you don't even need the full mask.[00:50:37] swyx: Makes total sense. But you have solved the problem that you set out to solve, which is no mean feat, and something that we're still appreciating even today.[00:50:44] The Future of FAIR[00:50:44] swyx: If there are no further questions, I would just transition to sort of forward-looking, future-looking stuff. Joseph already hinted at, like, you know, our interest in SAM and the future of SAM, and obviously you're the best person to ask about that. I'm also interested in, like, how should external people think about FAIR? You know, there's this stuff going on: this Llama, this Chameleon, this Voicebox, this ImageBind. Like, how are things organized?[00:51:09] swyx: And, you know, where are things trending?[00:51:11] Nikhila Ravi: Yeah, so in FAIR, you know, we have a number of different research areas. I work in an area called perception. So we build vision systems that look at, basically, all the fundamental problems in computer vision: can we build a step change in all of these different capabilities?[00:51:29] Nikhila Ravi: SAM was one example. SAM2 is another example. There are tons of other problems in computer vision where we've made a lot of progress, but can we really say that they're solved? And so that's really the area in which I work.
And then there's a number of other research areas in language and in embodied AI,[00:51:49] Nikhila Ravi: and more efficient models, and various other topics. So FAIR in general is still very much pushing the boundaries on solving these foundational problems across different domains.[00:52:07] swyx: Well, fair enough. Maybe just outside of FAIR, just the future of computer vision, right?[00:52:10] CVPR, Trends in Vision[00:52:10] swyx: Like, you are very involved in the community. What's the talk of the town at CVPR? Both of you went. Who's doing the most interesting work? It's a question for both of you.[00:52:19] Joseph Nelson: I think the trends we're seeing towards more zero-shot capability for common examples will accelerate. I think multimodality, meaning using, you know, images in tandem with text for richer understanding, or images and video in tandem with audio and other mixed media, will be a continued acceleration trend.[00:52:43] Joseph Nelson: The way I kind of see the field continuing to progress: the problem statement of computer vision is making sense of visual input. And I think about the world as, the things that need to be observed follow your traditional bell curve, where, like, things that most frequently exist out in the world are in the center of that bell curve.[00:53:05] Joseph Nelson: And then there's things that are less frequently occurring that are in those long tails. For example, you know, as far back as, like, 2014, you have the COCO dataset, which sets out to say, hey, can we find 80 common objects in context, like silverware and a fridge and these sorts of things. And we also conceptualized the challenge of computer vision in terms of breaking it down into individual task types, because those were, like, the tools we had for the day.[00:53:29] Joseph Nelson: So that's why, you know, you have the origination of classification, object detection, instance segmentation. And then, as you see things continue to progress, you have models and things that need to observe areas in the long tails. And so if you think of the COCO dataset as the center of that bell curve, I think of the long tails as, like, really edge-case problems.[00:53:49] Joseph Nelson: Some of our customers, like Rivian, for example: only Rivian knows what the inside of, like, a Rivian should look like as it's assembled and put together before it makes its way to a customer, and they're making custom parts, right? So how could a model have been trained on the things that go inside the componentry of producing a vehicle? And so what's kind of happening with computer vision is you're seeing models that generalize in the middle of the bell curve push outward faster.[00:54:17] Joseph Nelson: That's where you see the advent of, like, open-text models, or the richness of understanding of multimodal models, to allow richer understanding without perhaps any training, or maybe just using pre-training and applying it to a given problem. And then there's, like, you know, kind of the messy middle in between those two, right?[00:54:38] Joseph Nelson: So, like, Nikhila kind of talked about examples where SAM does well out of distribution, where, like, it finds an octopus even though there weren't octopi in the training data.
I showed an example with, like, screenshots, where SAM isn't yet super great at screenshots, so maybe that's, like, in the messy middle or in the longer tails for now.[00:54:54] Joseph Nelson: But what's going to happen is there needs to be systems of validating; the point of view that I think about is, like, tooling to also validate that models are doing what we want them to do, adapting to datasets that we want them to adapt to. And so there's a lot of things, on a forward-looking basis, that allow propelling that expansion of generalizability.[00:55:14] Joseph Nelson: That's for open-text problems. That's where scaling up of training, of dataset curation, continues to play a massive role. Something that's notable, I think, about SAM2 is: it's, what, 57,000 videos?[00:55:30] Nikhila Ravi: 51,000 videos. About 51,000, yeah.[00:55:32] Joseph Nelson: And 100,000 in internal datasets. That's, like, not massive, right? And the model size also isn't, you know, the largest, with the largest model being a couple hundred million parameters.[00:55:43] Joseph Nelson: The smallest model is 38 million parameters and can run at 45 FPS on an A100, right? Like, we're going to see more capable, more generalizable models, able to run on a wider array of problems with zero- or multi-shot capability, at a faster rate. And I think the architecture innovations in things like SAM2, of memory, of, like, transformers increasingly making their way into vision, and probably blended architectures increasingly too.[00:56:15] Joseph Nelson: So my viewpoint on a go-forward basis is: we will have that bell curve of what humans can see, both in the center of that curve and the long tails, and architectural changes allow richer understanding, multi- and zero-shot, and putting those into systems, and putting those into industry, and putting those into contexts that allow using them in practical and pragmatic ways.[00:56:38] Joseph Nelson: Nikhila, I'd love to hear, like, your thoughts and perspective on how you think the research trends map or don't map to that, and maybe some of the key innovations that you saw at CVPR this year that got you excited about the direction, and maybe some promising early directions that you're thinking about researching or pushing the boundaries of further.[00:56:56] Nikhila Ravi: Yeah. I just wanted to actually reply to a couple of things that you said. So, actually, in video object segmentation, the number of classes that are annotated, and the size of these datasets, are really small. So with SAM, you know, we had a billion masks, we had 11 million images, didn't have class labels.[00:57:17] Nikhila Ravi: But even before that, there were a lot of image datasets that are annotated with, like, a lot of class labels, whereas in video datasets the number of class labels is very small. So there's YouTube-VOS, which has 94 object categories; there's MOSE, which has around 30 or so object categories.[00:57:38] Nikhila Ravi: And they're usually, like, people, cars, dogs and cats and all these common objects, but they don't really cover a very large number of object categories. And so while SAM learned this general notion of what an object is in an image,
these video tracking models actually don't have that knowledge at all.[00:58:01] Nikhila Ravi: And so that's why having this dataset is really important for the segment-anything capability in video: if you just provide the mask as the input to an off-the-shelf video object segmentation model, it might not actually be able to track that arbitrary object mask as effectively as a SAM2 model that's actually trained to track any object across the entire video.[00:58:24] Nikhila Ravi: So combining two models together to try to get that capability will actually only get you so far, and being able to actually create the dataset to enable that anything capability was actually really important. And we can actually see that when we do comparisons with baselines, where we provide SAM2 with the same input mask and the baseline model with the same input mask.[00:58:53] Nikhila Ravi: For example, the t-shirt of a person: SAM2 can track the t-shirt effectively across the entire video, whereas these baselines might actually start tracking the entire person, because that's what they're used to doing, and isolating it to just one part of the person is not something they were ever trained to do. And so those are sort of some of the limitations.
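To make the memory mechanism discussed in this episode concrete, here is a minimal sketch of the data flow Nikhila describes: per-frame features conditioned on a memory bank of prompted frames, a rolling window of the six most recent frames, and longer-term object pointers. All names below are hypothetical placeholders, not the actual SAM2 package API; the encoder, decoder, and attention modules are assumed to be provided as callables, and this is an editor's illustration of the mechanism as explained in the conversation, not Meta's implementation.

from collections import deque

class MemoryBank:
    # Two kinds of memory, as described in the episode: high-resolution
    # spatial memories (prompted frames are kept for the whole video;
    # unprompted frames roll over in a window of six) and compact object
    # pointers that carry longer-term, higher-level object information.
    def __init__(self, num_recent_frames=6):
        self.prompted = {}                             # frame_idx -> memory features
        self.recent = deque(maxlen=num_recent_frames)  # rolling spatial memory
        self.object_pointers = []                      # longer-term object tokens

    def spatial_entries(self):
        return list(self.prompted.values()) + list(self.recent)

def segment_video(frames, prompts, image_encoder, prompt_encoder,
                  memory_attention, mask_decoder, memory_encoder):
    # `prompts` maps frame_idx -> clicks/boxes. Entries may be added on later
    # frames as refinement clicks; because prompted memories never expire,
    # a correction made after an occlusion sticks for the rest of the video.
    bank = MemoryBank()
    masks = []
    for t, frame in enumerate(frames):
        feats = image_encoder(frame)
        # With an empty bank (a single image, no memory), this collapses to
        # the original SAM: the decoder sees only image features + prompt.
        if bank.spatial_entries() or bank.object_pointers:
            feats = memory_attention(feats, bank.spatial_entries(),
                                     bank.object_pointers)
        sparse = prompt_encoder(prompts[t]) if t in prompts else None
        mask, obj_ptr = mask_decoder(feats, sparse)
        mem = memory_encoder(feats, mask)
        if t in prompts:
            bank.prompted[t] = mem       # prompted frames always stay in memory
        else:
            bank.recent.append(mem)      # unprompted frames roll over (last 6)
        bank.object_pointers.append(obj_ptr)
        masks.append(mask)
    return masks

Under this sketch, SAM2's image behavior falls out for free: with one frame and no stored memories, the loop reduces to encoder, prompt, and decoder, exactly the collapse to the original SAM architecture described at [00:44:15].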

The Association 100 Podcast
Top Insights from Association Leaders

The Association 100 Podcast

Play Episode Listen Later Aug 7, 2024 12:08


Welcome back to another exciting episode of The A100 podcast! As we gear up for the ASAE Annual Conference in Cleveland, we're thrilled to bring you a special compilation of clips from previous A100 interviews with some of the brightest minds and leaders in the association world. These leaders have shared invaluable advice and insights that we believe will inspire and guide you. Be sure to go back and check out their full episodes for more! Key Highlights: Chris Michaels, CEO, AAMFT: Chris discusses the inclusion of family therapists as Medicare providers, highlighting the importance of continuity of care and the need for more therapists to support the aging population. Mike Armstrong, CEO, National Council of Architectural Registration Boards: Mike talks about the importance of recognizing multiple pathways to practice architecture, emphasizing flexibility and the use of technology to measure competency. Kinsey Fabrizio, President, Consumer Technology Association: Kinsey provides two key pieces of career advice for association professionals: asking for new opportunities with a clear purpose and finding a good mentor to guide your journey. Anita Brikman, President and CEO, Plasma Protein Therapeutics Association: Anita shares the power of advocacy and how personal stories from patients drive their mission. She emphasizes the critical role of early involvement and strategic messaging in communications. Earl Franks, Executive Director, NAESP: Earl addresses the significant stress faced by educators and school leaders, especially during the COVID-19 pandemic. He underscores the importance of supporting the social and emotional well-being of these critical community members. Wendy-Jo Toyama, CEO, American Academy of Hospice and Palliative Medicine: Wendy-Jo talks about personal growth and the importance of stretching beyond your comfort zone. She also discusses the necessity of embracing mistakes as part of the DEI learning process. Stay Connected: Subscribe to The Association 100 podcast on Spotify, Apple Podcasts or YouTube Podcasts to ensure you never miss an episode. Follow us on LinkedIn at The Association 100 and OnWrd & UpWrd for the latest in association trends and strategies. Tune in for more episodes filled with expert advice and practical strategies to help your association thrive!

Scouting for Growth
Sean Languedoc: Outsourcing 2.0 to Access a Global Talent Pool (Outsourcing 2.0)

Scouting for Growth

Play Episode Listen Later Jul 31, 2024 51:40


On this episode of the Scouting For Growth podcast, Sabine VdL talks to Sean Languedoc, a seasoned tech entrepreneur with over 25 years of experience building & scaling companies across borders. Sean shares the lessons he learned from scaling five companies, how Outforce.ai is transforming outsourcing, when startups should consider leveraging external teams, and his thoughts on how emerging technologies like generative AI or quantum computing are accelerating development cycles. He also offers advice for non-technical founders looking to build MVPs in a capital-efficient way. KEY TAKEAWAYS Each business started not because of a technology that I wanted to build; it started because of a problem I saw in an industry that I needed to solve, & I was enabled by technology to solve it. You can't just walk into an industry like InsurTech & disrupt it with technology; technology changes a lot faster than behaviour & infrastructure. The lesson there was if things go wrong, the agency is blamed & I'd be fired. It's nothing to do with technology, it's all about people. Across all businesses you have to look at who you are disrupting & how influential they are in the decision-making process. Who wins & who loses, & who can you embrace for your winning approach & get momentum behind? BEST MOMENTS ‘Everyone will tell you you have a great idea until you ask them to pay for it, or until you understand the culture of the industry itself.' ‘For good operators who were really interested in optimising we got a lot of momentum, but for the companies that were horse-trading favours, not so much.' ‘The industry standard is 39% of projects that go to outsourcing don't work out. I'd say another 20% on top of those end up working out only because of brute force, relentless effort by the client to teach the outsourcing agency how to do it.' ‘You can't afford to take the risk of getting it wrong, you need to go in with data & research & get it right.' ABOUT THE GUEST Sean Languedoc is a seasoned tech entrepreneur with over 25 years of experience building & scaling companies. He has founded five tech ventures across various domains, successfully taking two of them "south" from Canada to the US. Currently, Sean is the CEO of Outforce.ai, a company that transforms outsourcing from a daunting task into a strategic asset for venture-backed startups. Outforce.ai aims to be the catalyst that propels tech ventures to their next phase of growth by connecting them with the right engineering teams globally. Beyond his role at Outforce.ai, Sean is deeply involved in the startup ecosystem as a mentor, guiding entrepreneurs through the complex landscape. He serves as a board member at A100 & a Charter Member at C100, underscoring his commitment to fostering tech innovation and entrepreneurship in Canada & beyond. With his extensive expertise in international collaboration, recruiting, & navigating cultural nuances, Sean brings valuable insights on scaling teams, leveraging outsourcing effectively, and adapting to the rapidly evolving tech landscape. His unique perspective, shaped by building companies across borders, makes him an insightful guest to discuss growth strategies for startups and the future of work in an AI-driven world. LinkedIn Website ABOUT THE HOST Sabine is a corporate strategist turned entrepreneur. She is the CEO and Managing Partner of Alchemy Crew, a venture lab that accelerates the curation, validation, & commercialization of new tech business models.
Sabine is renowned within the insurance sector for building some of the best-known tech startup accelerators around the world, working with over 30 corporate insurers and accelerating over 100 startup ventures. Sabine is the co-editor of the bestseller The INSURTECH Book, a top 50 Women in Tech, a FinTech and InsurTech Influencer, an investor & multi-award winner. Twitter LinkedIn Instagram Facebook TikTok Email Website

The Association 100 Podcast
The Year-Round Approach to Sponsorships in Associations

The Association 100 Podcast

Play Episode Listen Later Jul 31, 2024 39:07


Welcome back to another episode of The A100 podcast! In this insightful discussion, we dive into the world of sponsorships with two experts in the field: Dana Johnston, the Vice President of Client Partnerships and Trade Show Marketing for EMC Outdoor, and Bruce Rosenthal, a Corporate Partnership & Sponsorship Consultant and Founder & Convener of the Partnership Professionals Network. Key Highlights: Transforming Sponsorships from Transactional to Transformational: Discover how sponsorships can be a year-round responsibility, involving all departments within an association. Learn about the importance of moving beyond transactional relationships and building long-term, transformational partnerships that provide continuous value. Planning Ahead for Sponsorship Success: Find out why association teams should dedicate 3-4 hours per week to future planning and setting long-term goals for sponsorships. Understand the significance of maintaining an ongoing dialogue with industry partners to explore innovative and mutually beneficial opportunities. Engaging with Corporate Partners: Explore strategies for engaging corporate partners in meaningful discussions about their business objectives and how they align with the association's mission and member needs. Learn how to coach industry partners to focus on educational content and success stories rather than just brand visibility. Breaking Down Silos: Hear about the importance of cross-departmental collaboration in sponsorship initiatives and how associations can break down internal silos. Discover how to create an integrated approach to sponsorship that leverages the strengths of all departments to enhance member value and increase revenue. Innovative Sponsorship Strategies: Gain insights into keeping sponsorship programs fresh and forward-thinking, avoiding the trap of recycling old prospectuses. Learn practical tips on evaluating the success of sponsorship initiatives and sunset offerings that no longer meet strategic goals. Join us for this episode packed with actionable tips and deep insights from two leaders in sponsorship strategy, offering valuable lessons for association professionals looking to elevate their sponsorship programs. Stay Connected: Subscribe to The Association 100 podcast on Spotify, Apple Podcasts or YouTube Podcasts to ensure you never miss an episode. Follow us on LinkedIn at The Association 100 and OnWrd & UpWrd for the latest in association trends and strategies. Tune in for more episodes filled with expert advice and practical strategies to help your association thrive!

Scouting for Growth
Edward Brawer: Revolutionising Content Creation With AI-Powered Podcasting

Scouting for Growth

Play Episode Listen Later Jul 24, 2024 46:02


On this episode of the Scouting For Growth podcast, Sabine VdL talks to Edward Brawer, co-founder and CEO of PodcastAI, a groundbreaking SaaS platform that's revolutionising the podcasting industry. In this episode, we'll explore: Edward's entrepreneurial journey and the origin of PodcastAI, how AI is transforming the podcasting landscape, the future of content creation and distribution in the digital age, strategies for entrepreneurs looking to leverage podcasting for growth, & the challenges and opportunities in the rapidly evolving podcasting market. KEY TAKEAWAYS Our previous startup was a video platform; we looked at what we could do with AI & the models were pretty expensive. It could take the titles of YouTube videos & give you new titles, which isn't super useful. The real aha moment for everybody was ChatGPT, because it wasn't a technological development, it was realising how to use the technology. It was at that time I realised that there was no limit to what you could build with this, which led me to start PodcastAI. In February I started playing around with AI voice models & I realised that I could create a parody of an episode of the All-In Podcast. I posted it on Twitter & it was reposted by them & went viral, hitting 600,000 views. We did 6 in total & people wanted to know if it was real, if AI had produced the podcast; they asked for the code and GitHub repository. That's when we knew we could do it, & we now have a product called ‘The Magic Pod' which creates a podcast for people who aren't comfortable with being in front of a microphone: it creates a completely automatically generated podcast based on a blog or news sites, in your voice. Podcasting can be as simple or complicated as you want it to be. At its simplest, you can take a Zoom recording & put that out; PodcastAI can level up your production quality by automating all the post-production, distribution & promotion. At the higher end it's more like a scripted TV show. Everybody is able to do this. A hundred years ago, the percentage of the population engaged in farming was easily double-digit; now maybe 1% has to be engaged in farming because there are tractors etc. Not everybody became unemployed; they went on to do even greater stuff, increased the standard of living for everybody and earned higher wages. BEST MOMENTS ‘For most podcasters the interview is the fun part and the rest is less fun (pre- & post-production, editing, promotion & distribution, website CMS); PodcastAI makes podcasting effortless.' ‘Magic Pod is a whole new game. You don't even have to do the recording part, just give it a 3 minute sample of your voice, upload it into our system & it automatically goes out on the day & time you want.' ‘Today, podcasting is a $28billion market. In 5 years it's going to be $100billion.' ‘You'll have more throughput, higher quality overall outcomes of the work produced & agencies are going to be able to scale in a way they haven't been able to, it's going to become an enabling technology.' ABOUT THE GUEST Edward Brawer is the co-founder and CEO of PodcastAI, a cutting-edge SaaS platform revolutionising the podcasting industry through AI-powered automation. Launched in 2023, PodcastAI offers a comprehensive suite of tools for podcast creation, post-production, & distribution. Under Edward's leadership, PodcastAI has secured venture backing, with Jason Calacanis' Launch fund as the lead investor.
The platform offers innovative features such as AI-generated ad reads, fully automated podcast episodes, & comprehensive post-production services, positioning itself at the forefront of the rapidly growing podcasting market. Website ABOUT THE HOST Sabine is a corporate strategist turned entrepreneur. She is the CEO and Managing Partner of Alchemy Crew, a venture lab that accelerates the curation, validation, and commercialization of new tech business models. Sabine is renowned within the insurance sector for building some of the best-known tech startup accelerators around the world, working with over 30 corporate insurers and accelerating over 100 startup ventures. Sabine is the co-editor of the bestseller The INSURTECH Book, a top 50 Women in Tech, a FinTech and InsurTech Influencer, an investor & multi-award winner. Twitter LinkedIn Instagram Facebook TikTok Email Website

The Association 100 Podcast
Expert Email Marketing Tips for Associations

The Association 100 Podcast

Play Episode Listen Later Jul 24, 2024 27:55


Welcome back to another insightful episode of The A100 podcast! Today, we're joined by Amber Worthen, Founder and CEO of Email Maven. With years of experience in communication strategy, project management and email marketing, Amber shares her expertise in revolutionizing email marketing for associations. Key Highlights: Addressing Common Email Challenges: Amber identifies key issues such as inconsistent email design and email fatigue. She emphasizes the importance of applying best practices in email design and segmenting audiences to enhance engagement and avoid overwhelming members. Optimizing Open Rates and Engagement: Practical tips on segmenting email audiences and implementing drip campaigns to ensure targeted and relevant communication, resulting in higher open and click rates. Choosing the Right Email Platform: Amber advises associations to select email platforms that integrate seamlessly with their data systems to save time and improve efficiency. She underscores the importance of conducting thorough audits to identify the best fit for current and future needs. Leveraging Data for Better Decision Making: Emphasizing the importance of data-driven decisions, Amber highlights key metrics such as open rates, click rates, conversion rates and unsubscribe rates. She encourages associations to stay curious and continually analyze data to refine their email strategies. Emerging Trends and Practical AI Tips: Amber shares insights into emerging trends like AI, offering practical AI tips for creating alternative subject lines and preheaders. She also emphasizes the importance of continuously testing and optimizing friendly from names, subject lines and preheaders to enhance engagement. Best Practices for Effective Email Marketing: Tips on sending emails at the right times based on the audience's lifestyle and behaviors, and using web tracking campaigns to target members who show interest in specific content. Join us as Amber Worthen provides practical tips and deep insights for association professionals looking to enhance their email marketing strategies and engage their members more effectively. Stay Connected: Subscribe to The Association 100 podcast on Spotify, Apple Podcasts or YouTube Podcasts to ensure you never miss an episode. Follow us on LinkedIn at The Association 100 and OnWrd & UpWrd for the latest in association trends and strategies. Tune in for more episodes packed with insights to help your association thrive!

The Association 100 Podcast
Engaging Storytelling and Sustainability in Associations

The Association 100 Podcast

Play Episode Listen Later Jul 17, 2024 37:29


Welcome back to another episode of The A100 podcast! Today we're thrilled to have Ben H. Rome, Director of Communications for the American Bus Association (ABA), joining us. With nearly 30 years in storytelling and strategic communications, Ben shares his insights on the evolving role of storytelling in the association world and how ABA is tackling sustainability. Key Highlights: The Power of Storytelling: Ben discusses how storytelling in associations has evolved from simple slogans and stats to creating resonant narratives that engage members on a personal level. He emphasizes the importance of making stories compelling and memorable, ensuring they resonate long past the initial campaign. Authenticity in Communication: Authenticity is crucial in storytelling. Ben highlights the risks of inauthentic messaging and the power of genuine engagement to build trust and loyalty among members. Sustainability Initiatives: ABA's research shows that motor coaches are the most eco-friendly form of group transportation. Ben explains how they used compelling infographics and social media to highlight this on Earth Day, resulting in high engagement and positive media coverage. He also touches on the challenges and advocacy efforts related to the rapid push for electric vehicles and the need for proper infrastructure to support this transition. Choosing the Right Channels: Ben shares his approach to identifying the best communication channels for different segments of ABA's audience, emphasizing the importance of tailored messaging and engagement. AI and Technology in Communications: The role of AI in enhancing efficiency in content creation, analytics and member engagement is explored. Ben offers a balanced view on using AI responsibly to support, not replace, human creativity and judgment. Join us as Ben H. Rome delves into these critical areas, offering practical tips and deep insights for association professionals looking to enhance their communication strategies. Stay Connected: Subscribe to The Association 100 podcast on Spotify, Apple Podcasts or YouTube Podcasts to ensure you never miss an episode. Follow us on LinkedIn at The Association 100 and OnWrd & UpWrd for the latest in association trends and strategies. Tune in for more episodes packed with insights to help your association thrive!

Let's Talk AI
#171 - Apple Intelligence, Dream Machine, SSI Inc

Let's Talk AI

Play Episode Listen Later Jun 24, 2024 124:01 Transcription Available


Our 171st episode with a summary and discussion of last week's big AI news! With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris). Feel free to leave us feedback here. Read our text newsletter and comment on the podcast at https://lastweekin.ai/. Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai. Timestamps + Links:
(00:00:00) Intro / Banter
Tools & Apps
(00:03:13) Apple Intelligence: every new AI feature coming to the iPhone and Mac
(00:10:03) ‘We don't need Sora anymore': Luma's new AI video generator Dream Machine slammed with traffic after debut
(00:14:48) Runway unveils new hyper realistic AI video model Gen-3 Alpha, capable of 10-second-long clips
(00:18:21) Leonardo AI image generator adds new video mode — here's how it works
(00:22:31) Anthropic just dropped Claude 3.5 Sonnet with better vision and a sense of humor
Applications & Business
(00:28:23) Sam Altman might reportedly turn OpenAI into a regular for-profit company
(00:31:19) Ilya Sutskever, Daniel Gross, Daniel Levy launch Safe Superintelligence Inc.
(00:38:53) OpenAI welcomes Sarah Friar (CFO) and Kevin Weil (CPO)
(00:41:44) Report: OpenAI Doubled Annualized Revenue in 6 Months
(00:44:30) AI startup Adept is in deal talks with Microsoft
(00:48:55) Mistral closes €600m at €5.8bn valuation with new lead investor
(00:53:12) Huawei Claims Ascend 910B AI Chip Manages To Surpass NVIDIA's A100, A Crucial Alternative For China
(00:56:58) Astrocade raises $12M for AI-based social gaming platform
Projects & Open Source
(01:01:03) Announcing the Open Release of Stable Diffusion 3 Medium, Our Most Sophisticated Image Generation Model to Date
(01:05:53) Meta releases flurry of new AI models for audio, text and watermarking
(01:09:39) ElevenLabs unveils open-source creator tool for adding sound effects to videos
Research & Advancements
(01:12:02) Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling
(01:22:07) Improve Mathematical Reasoning in Language Models by Automated Process Supervision
(01:28:01) Introducing Lamini Memory Tuning: 95% LLM Accuracy, 10x Fewer Hallucinations
(01:30:32) An Empirical Study of Mamba-based Language Models
(01:31:57) BERTs are Generative In-Context Learners
(01:33:33) SELFGOAL: Your Language Agents Already Know How to Achieve High-level Goals
Policy & Safety
(01:35:16) Sycophancy to subterfuge: Investigating reward tampering in language models
(01:42:26) Waymo issues software and mapping recall after robotaxi crashes into a telephone pole
(01:45:53) Meta pauses AI models launch in Europe
(01:46:44) Refusal in Language Models Is Mediated by a Single Direction
(01:51:38) Huawei exec concerned over China's inability to obtain 3.5nm chips, bemoans lack of advanced chipmaking tools
Synthetic Media & Art
(01:55:07) It Looked Like a Reliable News Site. It Was an A.I. Chop Shop.
(01:57:39) Adobe overhauls terms of service to say it won't train AI on customers' work
(01:59:31) Buzzy AI Search Engine Perplexity Is Directly Ripping Off Content From News Outlets
(02:02:23) Outro + AI Song

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
ICLR 2024 — Best Papers & Talks (ImageGen, Vision, Transformers, State Space Models) ft. Christian Szegedy, Ilya Sutskever, Durk Kingma

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later May 27, 2024 218:03


Speakers for AI Engineer World's Fair have been announced! See our Microsoft episode for more info and buy now with code LATENTSPACE — we've been studying the best ML research conferences so we can make the best AI industry conf! Note that this year there are 4 main tracks per day and dozens of workshops/expo sessions; the free livestream will air much less than half of the content this time. Apply for free/discounted Diversity Program and Scholarship tickets here. We hope to make this the definitive technical conference for ALL AI engineers. ICLR 2024 took place from May 6-11 in Vienna, Austria. Just like we did for our extremely popular NeurIPS 2023 coverage, we decided to pay the $900 ticket (thanks to all of you paying supporters!) and brave the 18 hour flight and 5 day grind to go on behalf of all of you. We now present the results of that work! This ICLR was the biggest one by far, with a marked change in the excitement trajectory for the conference. Of the 2260 accepted papers (31% acceptance rate), in the subset of those relevant to our shortlist of AI Engineering Topics, we found many, many LLM reasoning and agent related papers, which we will cover in the next episode. We will spend this episode with 14 papers covering other relevant ICLR topics, as below. As we did last year, we'll start with the Best Paper Awards. Unlike last year, we now group our paper selections by subjective topic area, and mix in both Outstanding Paper talks as well as editorially selected poster sessions. Where we were able to do a poster session interview, please scroll to the relevant show notes for images of their poster for discussion. To cap things off, Chris Ré's spot from last year now goes to Sasha Rush for the obligatory last word on the development and applications of State Space Models. We had a blast at ICLR 2024 and you can bet that we'll be back in 2025!

Peter Navarro‘s In Trump Time Podcast
A Nvidia Market, Shades of Cisco and Nortel Circa 1999

Peter Navarro‘s In Trump Time Podcast

Play Episode Listen Later Feb 24, 2024 11:11


VISIT HTTP://PETERNAVARRO.SUBSTACK.COM FOR THE TRANSCRIPT AND MORE! PLEASE WRITE A REVIEW -- AND SPARE ME THE PRISON JOKES LIBTARDS. One of the geopolitical risks that Nvidia itself faces TODAY is from Communist China. For starters, China accounts for about 25% of Nvidia's revenue for its data center business, which is the largest operation at the company. Anything from increased sanctions on China to a catastrophic war with Taiwan would obviously hit Nvidia hard. Perhaps the biggest threat, however, is the US government – and rightly so. Right now, the Biden administration is trying to curb Communist China's access to technology, and specifically AI, which China fully intends to use for military purposes. In thumbing its nose at those sanctions by running end runs around them, Nvidia risks a crackdown that, truth be told, is long overdue. Why the Biden regime continues to allow the Chinese military and state research institutes of artificial intelligence to keep buying the coveted A100 and H100 Nvidia chips is a mystery.