Podcasts about AlphaGo

Artificial intelligence that plays Go

  • 473 podcasts
  • 717 episodes
  • 43m average duration
  • 1 weekly episode
  • Latest: Feb 25, 2026



Best podcasts about AlphaGo

Latest podcast episodes about AlphaGo

Frekvenca X
Parmy Olson: Artificial intelligence strayed from its path, for the good of profit, not humanity

Frekvenca X

Play Episode Listen Later Feb 25, 2026 45:36


It began with a noble vision of technology for the benefit of humanity and ended with fat profits for the biggest tech giants. That, roughly, sums up the central idea of Supremacy, Parmy Olson's book about the artificial intelligence tools that have turned the world upside down in recent years. Listen to our interview with her, in which we condense the story of DeepMind and OpenAI founders Demis Hassabis and Sam Altman, the men behind tools such as ChatGPT and AlphaGo, and consider whether such technology can ever truly escape corporate interests. Guest: Parmy Olson, journalist (Bloomberg) and author of 'Supremacy: AI, ChatGPT, and the Race That Will Change the World' (published in Slovenian as 'Prevlada', translated by Samo Kuščer). In the Xpertiza segment (39:31), Anita Bolčevič, a tourism researcher at FKBV UM, presents her work. Podcast cover photo: Kim Farinha.

Chapters:
00:00:01 Introduction
00:01:53 Parmy Olson and what drew her to technology reporting
00:05:38 Who are Sam Altman and Demis Hassabis
00:11:24 Google and Microsoft enter the scene
00:14:41 What was Elon Musk's role?
00:16:43 Google and its Goliath paradox
00:17:45 China refuses to fall behind
00:20:55 The real market value of artificial intelligence
00:24:39 Why is regulating artificial intelligence so difficult?
00:30:06 The uncertain position of fresh graduates, or who will do the internships?
00:33:30 Artificial intelligence, its 'empathy', and the hidden interests behind it
00:36:27 We use AI to check our own ideas, not to generate them
00:39:31 Xpertiza: Anita Bolčevič

StarTalk Radio
The Origins of Artificial Intelligence with Geoffrey Hinton

StarTalk Radio

Play Episode Listen Later Feb 20, 2026 91:24


How did we go from digital computers to AI seemingly everywhere? Neil deGrasse Tyson, Chuck Nice, & Gary O'Reilly dive into the mechanics of thinking, how AI got its start, and what deep learning really means with cognitive and computer scientist, Nobel Laureate, and one of the architects of AI, Geoffrey Hinton. Subscribe to SiriusXM Podcasts+ to listen to new episodes of StarTalk Radio ad-free and a whole week early. Start a free trial now on Apple Podcasts or by visiting siriusxm.com/podcastsplus. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Le monde de demain - The Flares [PODCASTS]
# 68 - Why I Went on Hunger Strike Against AI – with Michaël Trazzi

Le monde de demain - The Flares [PODCASTS]

Play Episode Listen Later Feb 11, 2026 49:46


➡️ Take action on AI risks: in a few clicks, alert your elected officials and send the prepared letter template. It's automated for minimal effort: https://taap.it/TF-PauseIACampagnes ⬇️⬇️⬇️ Additional information: sources, references, links... ⬇️⬇️⬇️ Interested in this content? Subscribe and click the

Andrew Huberman - Audio Biography
Dopamine, Serotonin, and Decision Making: Inside the Brain's Reward System

Andrew Huberman - Audio Biography

Play Episode Listen Later Feb 7, 2026 2:38 Transcription Available


Andrew Huberman BioSnap, a weekly updated biography. Andrew Huberman, the Stanford neuroscientist and Huberman Lab podcast host, dropped a pair of fresh episodes this week that have fans buzzing. On February 2, he welcomed Dr. Read Montague to unpack how dopamine and serotonin drive decisions, motivation, and learning, diving into real-time brain scans and AI parallels like AlphaGo in a chat laced with personal anecdotes from their 15-year reconnection, as detailed on the Huberman Lab site and Singju Post transcript. Just days later, on February 5, Huberman released an Essentials episode with movement guru Ido Portal, breaking down nervous system tricks for better motion, panoramic vision drills, and playful exploration to rewire habits, straight from hubermanlab.com. Business-wise, Men's Journal spotlighted Huberman's five core health pillars (sleep, sunlight, movement, nutrition, and relationships) on February 3, pulling from his podcast wisdom to pitch them as no-nonsense basics over trendy biohacks. A viral YouTube short from Iain Barton Shorts that same day clipped Huberman on neuroplasticity focus exercises, racking up views with his tips for daily visual drills to sharpen concentration. He's also in the media's crosshairs amid the Epstein files fallout. Plant Based News flagged him on January 31 as a wellness bro tied to Peter Attia, whose 1,700-plus Epstein mentions, including flirty emails, surfaced recently, though Huberman himself faces no direct links there. Katie Couric Media on January 28 critiqued his CBS News contributor gig alongside Attia and Mark Hyman, slamming their supplement-pushing protocols as overhyped and conflicted, yet CBS kept them post-scandal. Willamette Week noted on February 3 his 2023 podcast collab with Epstein-linked psychiatrist Paul Conti, stirring guilt-by-association whispers. No public appearances or direct social mentions popped up in the last few days, but his feeds hum with blueprint emails to over a million subscribers. Speculation swirls about long-term reputation hits from the influencer scrutiny, but Huberman's output stays relentless. Get the best deals: https://amzn.to/3ODvOta. This content was created in partnership with and with the help of artificial intelligence (AI).

Kapital
K200. Leontxo García. Thinking like a chess player

Kapital

Play Episode Listen Later Jan 23, 2026 113:56


Leontxo García is the perfect guest to mark Kapital's 200th episode. Leontxo is probably the person who knows the most about chess in Spain. As a child I played with my father, and I keep fond memories of those games. If I think differently today, that game deserves some of the credit. What did I learn? I couldn't tell you, but something always stays: an unexpected connection, or perhaps I simply worked on my patience. Leontxo is convinced of the pedagogical power of chess and argues that we should promote it in schools. On this podcast we always return to The Usefulness of the Useless by the great Nuccio Ordine. Leontxo recalls moves by the great Cuban master Capablanca from games played more than 100 years ago, and I reinforce my thesis that differentiation always comes by the least expected path. Kapital reaches 200 episodes, and I simply wanted to thank you. I like to think there is a philosophy behind this podcast: doing things without seeking direct gain, losing an afternoon chatting about everything and nothing, knowing how to enjoy the time that passes. A philosophy that unites us in a particular way of seeing and understanding life. You will have noticed that I rarely confront my guests; the spirit of Kapital is curiosity. Who am I to dispute an idea? I only want to know how that person thinks, how they understand the world they live in. Kapital's premise is that every guest hides a lesson. From Cao de Benós to Llados, by way of Raggio, all of them have something that may interest you, if you listen carefully. As if this were a riddle, my challenge is to find the right question that uncovers that hidden truth.
Thank you for playing this game.

Index:
0:32 Poker in the United States, chess in Russia, Go in China.
3:36 Lenin's plan for the schools.
11:46 Controlling the first impulse through chess.
20:33 The usefulness of the useless, again.
27:10 You can't write a romance novel if your heart was never broken.
31:36 The misuse of ChatGPT.
42:27 The origins of chess.
53:02 The old masters.
1:02:44 Karpov versus Kasparov.
1:17:28 May luck find you prepared.
1:25:32 Kasparov as Don Quixote.
1:35:18 Deep Blue and AlphaGo.
1:41:15 The physical exhaustion of thinking.
1:51:10 Join a chess club.

Notes:
Ajedrez y ciencia, pasiones mezcladas. Leontxo García.
Pensar rápido, pensar despacio. Daniel Kahneman.
Putting your intuition on ice. Daniel Kahneman.
La diagonal del loco. Richard Dembo.
Todas las historias y un epílogo. Enric González.
Historias del Calcio. Enric González.
Un verdor terrible. Benjamín Labatut.
MANIAC. Benjamín Labatut.

ACM ByteCast
Andrew Barto and Richard Sutton - Episode 80

ACM ByteCast

Play Episode Listen Later Jan 14, 2026 42:39


In this episode of ACM ByteCast, Rashmi Mohan hosts 2024 ACM A.M. Turing Award laureates Andrew Barto and Richard Sutton. They received the Turing Award for developing the conceptual and algorithmic foundations of reinforcement learning, a computational framework that underpins modern AI systems such as AlphaGo and ChatGPT. Barto is Professor Emeritus in the Department of Information and Computer Sciences at the University of Massachusetts, Amherst. His honors include the UMass Neurosciences Lifetime Achievement Award, the IJCAI Award for Research Excellence, and the IEEE Neural Network Society Pioneer Award. He is a Fellow of IEEE and AAAS. Sutton is a Professor in Computing Science at the University of Alberta, a Research Scientist at Keen Technologies (an artificial general intelligence company), and Chief Scientific Advisor of the Alberta Machine Intelligence Institute (Amii). In the past he was a Distinguished Research Scientist at DeepMind and served as a Principal Technical Staff Member in the AI Department at the AT&T Shannon Laboratory. His honors include the IJCAI Research Excellence Award, a Lifetime Achievement Award from the Canadian Artificial Intelligence Association, and an Outstanding Achievement in Research Award from the University of Massachusetts at Amherst. Sutton is a Fellow of the Royal Society of London, AAAI, and the Royal Society of Canada. In the interview, Andrew and Richard reflect on their long collaboration and the personal and intellectual paths that led both researchers into CS and reinforcement learning (RL), a field that was once largely neglected. They touch on interdisciplinary explorations across psychology (animal learning), control theory, operations research, and cybernetics, and how these inspired their computational models. They also explain some of their key contributions to RL, such as temporal difference (TD) learning, and how their ideas were validated biologically by observations of dopamine neurons.
Barto and Sutton trace their early research to later systems such as TD-Gammon, Q-learning, and AlphaGo and consider the broader relationship between humans and reinforcement learning-based AI, and how theoretical explorations have evolved into impactful applications in games, robotics, and beyond.
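Temporal-difference learning, the contribution singled out here, is compact enough to sketch. The following toy TD(0) value-estimation loop on the classic five-state random walk is an illustration of the idea under my own assumptions (state layout, step size, episode count), not code discussed in the episode.

```python
import random

def td0_random_walk(episodes=5000, alpha=0.1, seed=0):
    """TD(0) value estimation on the classic 5-state random walk.

    Non-terminal states 1..5; stepping left from state 1 or right from
    state 5 terminates the episode, with reward 1 only on the right
    exit. The true value of state i is i/6.
    """
    rng = random.Random(seed)
    V = [0.0] * 7  # V[0] and V[6] are terminal states, fixed at 0
    for _ in range(episodes):
        s = 3  # every episode starts in the middle
        while 1 <= s <= 5:
            s2 = s + rng.choice((-1, 1))      # unbiased random step
            r = 1.0 if s2 == 6 else 0.0       # reward only on right exit
            # TD(0) update: nudge V[s] toward the bootstrapped target r + V[s2]
            V[s] += alpha * (r + V[s2] - V[s])
            s = s2
    return V[1:6]

print(td0_random_walk())  # estimates approach [1/6, 2/6, 3/6, 4/6, 5/6]
```

The key property, updating each estimate from the *next* estimate rather than waiting for the episode's final outcome, is what distinguishes TD methods from Monte Carlo approaches and what later systems like TD-Gammon built on.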

Antosh Dyade
The Nobel Prize of Computing: Inside the ACM A.M. Turing Award (English)

Antosh Dyade

Play Episode Listen Later Jan 14, 2026 45:07


The ACM A.M. Turing Award is universally recognized as the "Nobel Prize of Computing" and stands as the highest distinction in the field of computer science. Presented annually by the Association for Computing Machinery, it honors individuals whose technical contributions have had lasting and major importance to the digital world. The award is named in honor of Alan Mathison Turing, the British mathematician and "Father of Computer Science". Turing provided the formal foundations for computation with the Universal Turing Machine and played a pivotal role in the Allied victory in World War II by leading the effort to decrypt the Enigma cipher. The most recent recipients (2024) are Andrew Barto and Richard Sutton, recognized for their groundbreaking work in reinforcement learning. Their research allows machines to learn through trial and error, serving as a central pillar of the modern AI boom and powering breakthroughs like AlphaGo and ChatGPT.

Turing Award Fast Facts:
• The Prize: Winners receive $1 million, with current financial support provided by Google, Inc.
• The First: The inaugural award went to Alan Perlis in 1966 for his influence on advanced programming and compilers.
• Women in Computing: Only three women have received the honor: Frances Allen (2006), Barbara Liskov (2008), and Shafi Goldwasser (2012).
• The Elite Network: Turing laureates are exceptionally well connected; on average, a winner is separated from another laureate or von Neumann Medal winner by only 1.4 co-authorship steps.
• Academic Foundations: Approximately 61% of laureates hold degrees in mathematics, reflecting the discipline's deep roots in mathematical logic.
• Age Trends: While the youngest winner, Donald Knuth, was only 36, the average age of recipients has trended upward toward 70 in recent years.

From the invention of the World Wide Web and the C programming language to the foundations of artificial intelligence, the Turing Award documents the history of the information age.

#TuringAward #ComputerScience #AI #AlanTuring #TechHistory #ReinforcementLearning #ChatGPT #Innovation #Coding #STEM

Antosh Dyade
The Nobel Prize of Computing: Inside the ACM A.M. Turing Award (Hindi)

Antosh Dyade

Play Episode Listen Later Jan 14, 2026 15:16


(Hindi-language edition; the episode description is identical to the English-language episode listed above.)

Conectando Puntos
Episode 249: The Algorithmic Iron Cage

Conectando Puntos

Play Episode Listen Later Jan 8, 2026 40:38


After a long silence that seems to have suspended time itself, we return to find that although we stopped, the inertia of the world and its automatisms did not. Is it possible that we are already living inside an invisible structure that prioritizes efficiency over freedom? Have we already crossed the point of no return, where algorithms not only assist us but govern us without giving us an explanation? Impossible connections and a bit of philosophIA for this return to the stage that we were so excited about. Remember that everything ends and everything begins in Episode 248, "The algorithmic point of no return": the direct antecedent, which poses the threshold at which we lose control over essential systems. Here are the materials for continuing to connect the dots:

Bulletin of the Atomic Scientists – Doomsday Clock: The Doomsday Clock is not a mere symbolic tool; it is a reminder we have overlooked for too long. Since 1947, leading scientists have assessed each year how close we are to midnight, the catastrophic destruction that initially represented only nuclear threats. What fascinates us in this episode is how the clock has evolved to include threats these scientists' grandparents never contemplated: artificial intelligence, climate change, disruptive biology. In 2025, for the first time in 78 years, the clock was set at 89 seconds to midnight. Just one second closer than in 2024, but a gesture that says everything: AI is not a future threat; it is here, now, accelerating risks that already seemed insurmountable.

AESIA – Spanish Agency for the Supervision of Artificial Intelligence: Spain has created a body dedicated exclusively to supervising AI. AESIA is an institution with real power to demand explainability, to inspect high-risk systems, and to establish that algorithms cannot be perpetual black boxes. It began operating in 2025, as Europe was approving its AI rules. What the episode underlines is crucial: regulation arrives late. While AESIA inspects new systems, more than a thousand older medical algorithms continue operating without meeting those transparency requirements.

Civio – The BOSCO ruling and algorithmic transparency: A citizen watchdog organization took to the Spanish Supreme Court a case that would change something fundamental: access to the source code of BOSCO, the algorithm that decides who receives electricity subsidies and who does not. For years, the government argued national security, intellectual property, trade secrets. The Supreme Court said no. The 2025 ruling set precedent: algorithmic transparency is a democratic right. Algorithms that condition social rights cannot be opaque. For the first time, a high court has recognized that we live in a "digital democracy" in which citizens have the right to scrutinize, to know, to understand how the machine that decides over their lives works. BOSCO was just one example. The ruling opens the door to transparency demands on any system the public administration uses for automated decisions. It is small, incredibly important, and probably insufficient.

Reshuffle: Who Wins When AI Restacks the Knowledge Economy – Sangeet Paul Choudary: This book is exactly what we needed to read before recording this episode. Choudary does not talk about how AI automates tasks; he talks about how AI reshapes the entire order of how we work, how we coordinate, how we create value. Reshuffle is not a catalogue of fears; it is an analysis of how new forms of coordination without centralized control are emerging. The book connects with what we discussed about opacity: it is not just that algorithms are opaque, it is that they are reorganizing entire organizational structures. Choudary describes companies that no longer know who is responsible for what, because machines coordinate without needing human consensus. It is Max Weber accelerated to neural-network speed.

The Thinking Game – documentary on Demis Hassabis and DeepMind: A documentary that films the pursuit of an obsession: Demis Hassabis has spent his whole life trying to solve intelligence. The Thinking Game, produced by the team that created the AlphaGo documentary, shows five years inside DeepMind and the crucial moments when AI leapt from games to solving real biological problems with AlphaFold. What hurts to watch here is that Hassabis solved a 50-year-old problem in biology and open-sourced it. The uncomfortable question is: how many other Hassabises are inside corporate labs with the opposite incentives, keeping secrets? The Thinking Game is a portrait of what could be if the scientific impulse won out over the extractive one. We recommend watching it before any conversation about where the real progress in AI lies.

Las horas del caos: La DANA. Crónica de una tragedia: Sergi Pitarch reconstructs, hour by hour, October 29, 2024, the day the DANA storm devastated Valencia. What makes this book different is that it does not just recount what happened; it documents what was not done, who was responsible for silencing warnings, and what decisions were taken in dark rooms while thousands were trapped. It is long-form journalism in the American tradition of deep investigation. We connect it to the episode because the Valencia tragedy is a mirror: systems with algorithms that were supposed to predict, emergency teams that were supposed to communicate, protocols that were supposed to activate. But there were silences, opacities, diluted responsibility. Exactly what happens when algorithms fail and nobody knows who pays the price. Pitarch writes so that the victims are not forgotten and so that the next tragedy does not repeat the same negligence.

Anatomía de un instante: A series based on Javier Cercas's book examining Spain's 23-F, the attempted military coup of 1981, but as a psychologist of history: what turns a man into a hero in a crucial instant? We bring it here because the book is about how our systems, our institutions, and our power structures rest on unpredictable moments, on individual actions that algorithms cannot model. AI promises predictability, certainty, order. Cercas reminds us that history is a discipline of the unpredictable, that the instants that define us do not come out of an equation.

A final note: thank you for being here. A year later, without a DeLorean, without time travel, but with the certainty that while we were trying to go back, the world kept moving forward. That was the real experiment: to see whether we could connect the dots again after twelve months in which the algorithms kept writing the script. The answer is yes. But the more uncomfortable question remains: do we really know where we are in that iron cage? Or have we only just realized that there are walls? To contact us, you can use our Twitter account (@conectantes), Instagram (conectandopuntos), or the contact form on our website, conectandopuntos.es. You can listen to us on iVoox, iTunes, or Spotify (search for our name; it's easy). Program credits. Intro: Stefan Kanterberg, 'By by baby' (CC Attribution license). Closing: Stefan Kanterberg, 'Guitalele's Happy Place' (CC Attribution license). Photo: created with AI. Want to sponsor this podcast? You can do so through this link. The post "Episodio 249: La jaula de hierro algorítmica" first appeared on Conectando Puntos.

The Cloud Pod
337: AWS Discovers Prices Can Go Both Ways, Raises GPU Costs 15 Percent

The Cloud Pod

Play Episode Listen Later Jan 6, 2026 52:01


Welcome to episode 337 of The Cloud Pod, where the forecast is always cloudy! Justin, Matt, and Ryan have hit the recording studio to bring you all the latest in cloud and AI news, from acquisitions and price hikes to new tools that Ryan somehow loves but also hates? We don't understand either… but let's get started!

Titles we almost went with this week:
Prompt Engineering Our Way Into Trouble
The Demo Worked Yesterday, We Swear
It Scales Horizontally, Trust Us
Responsible AI But Terrible Copy (Marketing Edition)

General News

00:58 Watch ‘The Thinking Game' documentary for free on YouTube
Google DeepMind is releasing its documentary “The Thinking Game” for free on YouTube starting November 25, marking the fifth anniversary of AlphaFold. The feature-length film provides behind-the-scenes access to the AI lab and documents the team's work toward artificial general intelligence over five years. The documentary captures the moment when the AlphaFold team learned they had solved the 50-year protein folding problem in biology, a scientific achievement that recently earned Demis Hassabis and John Jumper the Nobel Prize in Chemistry. This represents one of the most significant practical applications of deep learning to fundamental scientific research. The film was produced by the same award-winning team that created the AlphaGo documentary, which chronicled DeepMind's earlier achievement in mastering the game of Go. For cloud and AI practitioners, this offers insight into how Google DeepMind approaches complex AI research problems and the development process behind its models. While this is primarily a documentary release rather than a technical product announcement, it provides context for understanding Google's broader AI strategy and the research foundation underlying its cloud AI services. The AlphaFold model itself is available through Google Cloud for protein structure prediction workloads.
01:54 Justin – “If you're not into technology, don't care about any of that, and don't care about AI and how they built all the AI models that are now powering the world of LLMs we have, you will not like this documentary.”

04:22 ServiceNow to buy Armis in $7.7 billion security deal • The Register
ServiceNow is acquiring Armis for $7.75 billion to integrate real-time security intelligence with its Configuration Management Database, allowing customers to identify vulnerabilities across IT, OT, and medical devices and remediate them through automated workflows.

Crazy Wisdom
Episode #516: China's AI Moment, Functional Code, and a Post-Centralized World

Crazy Wisdom

Play Episode Listen Later Dec 22, 2025 64:59


In this episode, Stewart Alsop sits down with Joe Wilkinson of Artisan Growth Strategies to talk through how vibe coding is changing who gets to build software, why functional programming and immutability may be better suited for AI-written code, and how tools like LLMs are reshaping learning, work, and curiosity itself. The conversation ranges from Joe's experience living in China and his perspective on Chinese AI labs like DeepSeek, Kimi, Minimax, and GLM, to mesh networks, Raspberry Pi–powered infrastructure, decentralization, and what sovereignty might mean in a world where intelligence is increasingly distributed. They also explore hallucinations, AlphaGo's Move 37, and why creative “wrongness” may be essential for real breakthroughs, along with the tension between centralized power and open access to advanced technology. You can find more about Joe's work at https://artisangrowthstrategies.com and follow him on X at https://x.com/artisangrowth. Check out this GPT we trained on the conversation.

Timestamps:
00:00 – Vibe coding as a new learning unlock, China experience, information overload, and AI-powered ingestion systems
05:00 – Learning to code late, Exercism, syntax friction, AI as a real-time coding partner
10:00 – Functional programming, Elixir, immutability, and why AI struggles with mutable state
15:00 – Coding metaphors, “spooky action at a distance,” and making software AI-readable
20:00 – Raspberry Pi, personal servers, mesh networks, and peer-to-peer infrastructure
25:00 – Curiosity as activation energy, tech literacy gaps, and AI-enabled problem solving
30:00 – Knowledge work superpowers, decentralization, and small groups reshaping systems
35:00 – Open source vs open weights, Chinese AI labs, data ingestion, and competitive dynamics
40:00 – Power, safety, and why broad access to AI beats centralized control
45:00 – Hallucinations, AlphaGo's Move 37, creativity, and logical consistency in AI
50:00 – Provenance, epistemology, ontologies, and risks of closed-loop science
55:00 – Centralization vs decentralization, sovereign countries, and post-global-order shifts
01:00:00 – U.S.–China dynamics, war skepticism, pragmatism, and cautious optimism about the future

Key Insights:
Vibe coding fundamentally lowers the barrier to entry for technical creation by shifting the focus from syntax mastery to intent, structure, and iteration. Instead of learning code the traditional way and hitting constant friction, AI lets people learn by doing, correcting mistakes in real time, and gradually building mental models of how systems work, which changes who gets to participate in software creation.

Functional programming and immutability may be better aligned with AI-written code than object-oriented paradigms because they reduce hidden state and unintended side effects. By making data flows explicit and preventing “spooky action at a distance,” immutable systems are easier for both humans and AI to reason about, debug, and extend, especially as code becomes increasingly machine-authored.

AI is compressing the entire learning stack, from software to physical reality, enabling people to move fluidly between abstract knowledge and hands-on problem solving. Whether fixing hardware, setting up servers, or understanding networks, the combination of curiosity and AI assistance turns complex systems into navigable terrain rather than expert-only domains.

Decentralized infrastructure like mesh networks and personal servers becomes viable when cognitive overhead drops. What once required extreme dedication or specialist knowledge can now be done by small groups, meaning that relatively few motivated individuals can meaningfully change communication, resilience, and local autonomy without waiting for institutions to act.

Chinese AI labs are likely underestimated because they operate with different constraints, incentives, and cultural inputs. Their openness to alternative training methods, massive data ingestion, and open-weight strategies creates competitive pressure that limits monopolistic control by Western labs and gives users real leverage through choice.

Hallucinations and “mistakes” are not purely failures but potential sources of creative breakthroughs, similar to AlphaGo's Move 37. If AI systems are overly constrained to consensus truth or authority-approved outputs, they risk losing the capacity for novel insight, suggesting that future progress depends on balancing correctness with exploratory freedom.

The next phase of decentralization may begin with sovereign countries before sovereign individuals, as AI enables smaller nations to reason from first principles in areas like medicine, regulation, and science. Rather than a collapse into chaos, this points toward a more pluralistic world where power, knowledge, and decision-making are distributed across many competing systems instead of centralized authorities.
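The immutability point discussed in this episode can be made concrete outside Elixir as well. The following minimal Python sketch (my own illustration, not material from the episode) contrasts a shared mutable structure, where an update in one place silently changes another, with an immutable value that must be rebuilt explicitly.

```python
from dataclasses import dataclass, replace

# Mutable version: two "copies" secretly share one list, so editing
# one order silently edits the other -- action at a distance.
order_a = {"items": ["go board"]}
order_b = dict(order_a)            # shallow copy shares the inner list
order_b["items"].append("stones")
assert order_a["items"] == ["go board", "stones"]  # surprise: a changed too

# Immutable version: updates build new values; old ones never change.
@dataclass(frozen=True)
class Order:
    items: tuple

a = Order(items=("go board",))
b = replace(a, items=a.items + ("stones",))  # new Order, a is untouched
assert a.items == ("go board",)
assert b.items == ("go board", "stones")
```

Because every change in the immutable version is a visible new value, a reader (human or LLM) can reason about each binding locally, which is the property the episode argues makes such code easier for AI to write and maintain.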

Medyascope.tv Podcast
What did artificial intelligence gain by playing "Go"? Mehmet Emin Barsbey explains | Netizen

Medyascope.tv Podcast

Play Episode Listen Later Dec 16, 2025 32:27


Why is Go regarded as "the purest strategy game"? On the Netizen program, Atıf Ünaldı's guest Mehmet Emin Barsbey, founder of the Istanbul Go Club, discusses Go's 4,000-year history, its differences from chess, its relationship to Sun Tzu's understanding of war, and the AI–AlphaGo watershed. In this episode, Ünaldı and Barsbey have an in-depth conversation about strategy, attention, algorithms, and intuitive thinking. Learn more about your ad choices. Visit megaphone.fm/adchoices

Aperture
The AI Takeover Is Closer Than You Think

Aperture

Play Episode Listen Later Nov 25, 2025 29:28


AI experts from all around the world believe that, given its current rate of progress, by 2027 we may hit the most dangerous milestone in human history: the point of no return, when AI could stop being a tool and start improving itself beyond our control. A moment when humanity may never catch up.

00:00 The AI Takeover Is Closer Than You Think
01:05 The rise of AI in text, art & video
02:00 What is the Technological Singularity?
04:06 AI's impact on jobs & economy
05:31 What happens when AI surpasses human intellect
08:36 AlphaGo vs world champion Lee Sedol
11:10 Can we really “turn off” AI?
12:12 Narrow AI vs Artificial General Intelligence (AGI)
16:39 AGI (Artificial General Intelligence)
18:01 From AGI to Superintelligence
20:18 Ethical concerns & defining intelligence
22:36 Neuralink and human-AI integration
25:54 Experts warning of 2027 AGI

Future of Education Podcast: Parental guide to cultivating your kids’ academics, life skill development, & emotional growth

In Part 2 of this lively debate, MacKenzie and producer Jay address the real concerns you've voiced: turning kids into entrepreneurs at the expense of childhood, whether paying students kills motivation, Alpha's cost, and skepticism born from past EdTech failures. Jay plays the skeptic, MacKenzie defends the vision, and if you've ever had reservations about our philosophy, you won't want to miss this series.

This Week in Google (MP3)
IM 844: Poob Has It For You - Spiky Superintelligence vs. Generality

This Week in Google (MP3)

Play Episode Listen Later Nov 6, 2025 163:50


Is today's AI stuck as a "spiky superintelligence," brilliant at some things but clueless at others? This episode pulls back the curtain on a lunchroom full of AI researchers trading theories, strong opinions, and the next big risks on the path to real AGI.

- Why "Everyone Dies" Gets AGI All Wrong
- The Nonprofit Feeding the Entire Internet to AI Companies
- Google's First AI Ad Avoids the Uncanny Valley by Casting a Turkey
- Coca-Cola Is Trying Another AI Holiday Ad. Executives Say This Time Is Different
- Sam Altman shuts down question about how OpenAI can commit to spending $1.4 trillion while earning billions: 'Enough'
- How OpenAI Uses Complex and Circular Deals to Fuel Its Multibillion-Dollar Rise
- Perplexity's new AI tool aims to simplify patent research
- Kids Turn Podcast Comments Into Secret Chat Rooms, Because Of Course They Do
- Amazon and Perplexity have kicked off the great AI web browser fight
- Neural network finds an enzyme that can break down polyurethane
- Dictionary.com names 6-7 as 2025's word of the year
- Tech companies don't care that students use their AI agents to cheat
- The Morning After: Musk talks flying Teslas on Joe Rogan's show
- The Hatred of Podcasting | Brace Belden
- TikTok announces its first awards show in the US
- Google wants to build solar-powered data centers — in space
- Anthropic Projects $70 Billion in Revenue, $17 Billion in Cash Flow in 2028
- American Museum of Tort Law
- Dog Chapel - Dog Mountain
- Nicvember masterlist
- Pornhub says UK visitors down 77% since age checks came in

Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Guest: Jeremy Berman

Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.

Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Sponsors: threatlocker.com/twit, agntcy.org, spaceship.com/twit, monarch.com with code IM

The MAD Podcast with Matt Turck
Are We Misreading the AI Exponential? Julian Schrittwieser on Move 37 & Scaling RL (Anthropic)

The MAD Podcast with Matt Turck

Play Episode Listen Later Oct 23, 2025 69:56


Are we failing to understand the exponential, again?

My guest is Julian Schrittwieser (top AI researcher at Anthropic; previously Google DeepMind on AlphaGo Zero & MuZero). We unpack his viral post (“Failing to Understand the Exponential, again”) and what it looks like when task length doubles every 3–4 months — pointing to AI agents that can work a full day autonomously by 2026 and expert-level breadth by 2027. We talk about the original Move 37 moment and whether today's AI models can spark alien insights in code, math, and science — including Julian's timeline for when AI could produce Nobel-level breakthroughs.

We go deep on the recipe of the moment — pre-training + RL — why it took time to combine them, what “RL from scratch” gets right and wrong, and how implicit world models show up in LLM agents. Julian explains the current rewards frontier (human prefs, rubrics, RLVR, process rewards), what we know about compute & scaling for RL, and why most builders should start with tools + prompts before considering RL-as-a-service.

We also cover evals & Goodhart's law (e.g., GDP-Val vs real usage), the latest in mechanistic interpretability (think “Golden Gate Claude”), and how safety & alignment actually surface in Anthropic's launch process.

Finally, we zoom out: what 10× knowledge-work productivity could unlock across medicine, energy, and materials, how jobs adapt (complementarity over 1-for-1 replacement), and why the near term is likely a smooth ramp — fast, but not a discontinuity.

Julian Schrittwieser
Blog - https://www.julian.ac
X/Twitter - https://x.com/mononofu
Viral post: Failing to understand the exponential, again (9/27/2025)

Anthropic
Website - https://www.anthropic.com
X/Twitter - https://x.com/anthropicai

Matt Turck (Managing Director)
Blog - https://www.mattturck.com
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://twitter.com/mattturck

FIRSTMARK
Website - https://firstmark.com
X/Twitter - https://twitter.com/FirstMarkCap

(00:00) Cold open — “We're not seeing any slowdown.”
(00:32) Intro — who Julian is & what we cover
(01:09) The “exponential” from inside frontier labs
(04:46) 2026–2027: agents that work a full day; expert-level breadth
(08:58) Benchmarks vs reality: long-horizon work, GDP-Val, user value
(10:26) Move 37 — what actually happened and why it mattered
(13:55) Novel science: AlphaCode/AlphaTensor → when does AI earn a Nobel?
(16:25) Discontinuity vs smooth progress (and warning signs)
(19:08) Does pre-training + RL get us there? (AGI debates aside)
(20:55) Sutton's “RL from scratch”? Julian's take
(23:03) Julian's path: Google → DeepMind → Anthropic
(26:45) AlphaGo (learn + search) in plain English
(30:16) AlphaGo Zero (no human data)
(31:00) AlphaZero (one algorithm: Go, chess, shogi)
(31:46) MuZero (planning with a learned world model)
(33:23) Lessons for today's agents: search + learning at scale
(34:57) Do LLMs already have implicit world models?
(39:02) Why RL on LLMs took time (stability, feedback loops)
(41:43) Compute & scaling for RL — what we see so far
(42:35) Rewards frontier: human prefs, rubrics, RLVR, process rewards
(44:36) RL training data & the “flywheel” (and why quality matters)
(48:02) RL & Agents 101 — why RL unlocks robustness
(50:51) Should builders use RL-as-a-service? Or just tools + prompts?
(52:18) What's missing for dependable agents (capability vs engineering)
(53:51) Evals & Goodhart — internal vs external benchmarks
(57:35) Mechanistic interpretability & “Golden Gate Claude”
(1:00:03) Safety & alignment at Anthropic — how it shows up in practice
(1:03:48) Jobs: human–AI complementarity (comparative advantage)
(1:06:33) Inequality, policy, and the case for 10× productivity → abundance
(1:09:24) Closing thoughts
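The "task length doubles every 3–4 months" claim in this episode is easy to sanity-check with a back-of-the-envelope extrapolation. A minimal sketch, using a hypothetical 1-hour baseline and a 3.5-month doubling time (both numbers are illustrative assumptions, not figures from the episode):

```python
def extrapolate_task_length(start_hours: float, doubling_months: float, months_ahead: float) -> float:
    """Project autonomous-task length under a fixed doubling time."""
    return start_hours * 2 ** (months_ahead / doubling_months)

# Hypothetical baseline: 1-hour autonomous tasks today, doubling every
# 3.5 months (midpoint of the episode's 3-4 month range).
for months in (0, 7, 14, 21):
    hours = extrapolate_task_length(1.0, 3.5, months)
    print(f"+{months:2d} months: ~{hours:g} hours per task")
# Three doublings (about 10.5 months) take a 1-hour task to a full 8-hour workday,
# which is the kind of arithmetic behind the "full day autonomously by 2026" framing.
```

Under these assumptions the curve reaches an 8-hour workday in under a year and roughly 64-hour tasks within two years, which is why small changes in the assumed doubling time move the 2026/2027 milestones substantially.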

tech 45'
Special episode #7 - $220M to develop autonomous agents - Lauren Sifre (H Compagny) PARTNERSHIP

tech 45'

Play Episode Listen Later Oct 13, 2025 30:56


Hi, welcome! A special episode this week: I'm bringing you three interviews with standout French AI startups, recorded a few days ago at Station F during the GenAI Day organized by AWS, a partner of tech 45'. The experience runs until October 21; it all happens here.

Faster, Please! — The Podcast

My fellow pro-growth/progress/abundance Up Wingers,

For most of history, stagnation — not growth — was the rule. To explain why prosperity so often stalls, economist Carl Benedikt Frey offers a sweeping tour through a millennium of innovation and upheaval, showing how societies either harness — or are undone by — waves of technological change. His message is sobering: an AI revolution is no guarantee of a new age of progress.

Today on Faster, Please! — The Podcast, I talk with Frey about why societies misjudge their trajectory and what it takes to reignite lasting growth.

Frey is a professor of AI and Work at the Oxford Internet Institute and a fellow of Mansfield College, University of Oxford. He is the director of the Future of Work Programme and Oxford Martin Citi Fellow at the Oxford Martin School.

He is the author of several books, including the brand new one, How Progress Ends: Technology, Innovation, and the Fate of Nations.

In This Episode

* The end of progress? (1:28)
* A history of Chinese innovation (8:26)
* Global competitive intensity (11:41)
* Competitive problems in the US (15:50)
* Lagging European progress (22:19)
* AI & labor (25:46)

Below is a lightly edited transcript of our conversation.

The end of progress? (1:28)

. . . once you exploit a technology, the processes that aid that run into diminishing returns, you have a lot of incumbents, you have some vested interests around established technologies, and you need something new to revive growth.

Pethokoukis: Since 2020, we've seen the emergence of generative AI, mRNA vaccines, reusable rockets that have returned America to space, we're seeing this ongoing nuclear renaissance including advanced technologies, maybe even fusion, geothermal, the expansion of solar — there seems to be a lot cooking. Is worrying about the end of progress a bit too preemptive?

Frey: Well in a way, it's always a bit too preemptive to worry about the future: You don't know what's going to come.
But let me put it this way: If you had told me back in 1995 — and if I was a little bit older then — that computers and the internet would lead to a decade streak of productivity growth and then peter out, I would probably have thought you nuts, because it's hard to think of anything that is more consequential. Computers have essentially given people the world's store of knowledge in their pockets. The internet has enabled us to connect inventors and scientists around the world. There are few tools that have aided the research process more. There should hardly be any technology that has done more to boost scientific discovery, and yet we don't see it.

We don't see it in the aggregate productivity statistics; that petered out after a decade. Research productivity is in decline. Measures of breakthrough innovation are in decline. So it's always good to be optimistic, I guess, and I agree with you that, when you say AI and when you read about many of the things that are happening now, it's very, very exciting, but I remain somewhat skeptical that we are actually going to see that leading to a huge revival of economic growth.

I would just be surprised if we don't see any upsurge at all, to be clear, but we do have global productivity stagnation right now. It's not just Europe, it's not just Britain. The US is not doing too well either over the past two decades or so. China's productivity is probably in negative territory or stagnant, by more optimistic measures, and so we're having a growth problem.

If tech progress were inevitable, why have predictions from the '90s, and certainly earlier decades like the '50s and '60s, about transformative breakthroughs and really fast economic growth by now, consistently failed to materialize?
How does your thesis account for why those visions of rapid growth and progress have fallen short?

I'm not sure if my thesis explains why those expectations didn't materialize, but I'm hopeful that I do provide some framework for thinking about why we've often seen historically rapid growth spurts followed by stagnation and even decline. The story I'm telling is not rocket science, exactly. It's basically built on the simple intuitions that once you exploit a technology, the processes that aid it run into diminishing returns, you have a lot of incumbents, you have some vested interests around established technologies, and you need something new to revive growth.

So for example, the Soviet Union actually did reasonably well in terms of economic growth. A lot of it, or most of it, was centered on heavy industry, I should say. So people didn't necessarily see the benefits in their pockets, but the economy grew rapidly for about four decades or so, then growth petered out, and eventually it collapsed. So for exploiting mass-production technologies, the Soviet system worked reasonably well. Soviet bureaucrats could hold factory managers accountable by benchmarking performance across factories.

But that became much harder when something new was needed, because when something is new, what's the benchmark? How do you benchmark against that? And more broadly, when something is new, you need to explore, and you often need to explore different technological trajectories. So in the Soviet system, if you were an aircraft engineer and you wanted to develop your prototype, you could go to the Red Army and ask for funding. If they turned you down, you maybe had two or three other options. If they turned you down, your idea would die with you.

Conversely, in the US back in '99, Bessemer Venture declined to invest in Google, which seemed like a bad idea with the benefit of hindsight, but it also illustrates that Google was no safe bet at the time.
Yahoo and AltaVista were dominating search. You need somebody to invest in order to know if something is going to catch on, and in a more decentralized system, you can have more people taking different bets and you can explore more technological trajectories. That is one of the reasons why the US ended up leading the computer revolution, to which Soviet contributions were basically none.

Going back to your question, why didn't those dreams materialize? I think we've made it harder to explore. Part of the reason is protective regulation. Part of the reason is lobbying by incumbents. Part of the reason is, I think, a revolving door between institutions like the US patent office and incumbents, where we see in the data that examiners tend to grant large firms patents that are of low quality and then get lucrative jobs at those places. That's creating barriers to entry. That's not good for new startups and inventors entering the marketplace. I think that is one of the reasons that we haven't seen some of those dreams materialize.

A history of Chinese innovation (8:26)

So while Chinese bureaucracy enabled scale, Chinese bureaucracy did not really permit much in terms of decentralized exploration, which European fragmentation aided . . .

I wonder whether your analysis of pre-industrial China holds any lessons about modern China, as far as the way in which bad governance can undermine innovation and progress?

Pre-industrial China has a long history. China was the technology leader during the Song and Tang dynasties. It had a meritocratic civil service. It was building infrastructure on scales that were unimaginable in Europe at the time, and yet it didn't have an industrial revolution.
So while Chinese bureaucracy enabled scale, Chinese bureaucracy did not really permit much in terms of decentralized exploration, which European fragmentation aided, and because there was lots of social status attached to becoming a bureaucrat and passing the civil service examination, if Galileo had been born in China, he would probably have become a bureaucrat rather than a scientist, and I think that's part of the reason too.

But China mostly did well when the state was strong rather than weak. A strong state was underpinned by intensive political competition, and once China had unified and there were fewer peer competitors, you see that the center begins to fade. They struggle to tax local elites in order to keep the peace. People begin to erect monopolies in their local markets and collude with guilds to protect production and their crafts from competition.

So during the Qing dynasty, China begins to decline, whereas we see the opposite happening in Europe. European fragmentation aids exploration and innovation, but it doesn't necessarily aid scaling, and so that is something that Europe needs to come to terms with at a later stage when the industrial revolution starts to take off. And even before that, market integration played an important role in terms of undermining the guilds in Europe, and so part of the reason why the guilds persist longer in China is that the distances between cities are so much longer, and so the guilds are less exposed to competition. In the end, Europe ends up overtaking China, in large part because vested interests are undercut by governments, but also because of investments in things that spur market integration.

Global competitive intensity (11:41)

Back in the 2000s, people predicted that China would become more like the United States, now it looks like the United States is becoming more like China.

This is a great McKinsey kind of way of looking at the world: The notion that what drives innovation is sort of maximum competitive intensity.
You were talking about the competitive intensity in both Europe and in China when it was not so centralized. You were talking about the competitive intensity of a fragmented Europe.

Do you think that the current level of competitive intensity between the United States and China — and I really wish I could add Europe in there. Plenty of white papers, I know, have been written about Europe's competitive state and its innovativeness, and I hope those white papers are helpful and someone reads them, but it seems to me that the real competition is between the United States and China.

Do you not think that that competitive intensity will sort of keep those countries progressing despite any of the barriers that might pop up and that you've already mentioned a little bit? Isn't that a more powerful tailwind than any of the headwinds that you've mentioned?

It could be, I think, if people learn the right lessons from history; at least that's a key argument of the book. Right now, what I'm seeing is the United States moving more towards protectionism with protective tariffs. Right now, what I see is a move towards, we could even say, crony capitalism, with tariff exemptions that some larger firms that are better connected to the president are able to navigate, but certainly not challengers. You're seeing the United States embracing things like golden shares in Intel, and perhaps even extending that to a range of companies. Back in the 2000s, people predicted that China would become more like the United States; now it looks like the United States is becoming more like China.

And China today is having similar problems, and on, I would argue, an even greater scale. Growth used to be the key objective in China, and so for local and provincial governments competing on such targets, it was fairly easy to benchmark and measure and hold provincial governors accountable, and they would be promoted inside the Communist Party based on meeting growth targets.
Now, they have prioritized common prosperity and more national security-oriented concerns.

And so in China, most progress has been driven by private firms and foreign-invested firms. State-owned enterprise has generally been a drag on innovation and productivity. What you're seeing, though, as China is shifting more towards political objectives, is that it's harder to mobilize private enterprise, where the yardsticks are market share and profitability, for political goals. That means that China is increasingly relying more again on state-owned enterprises, which, again, have been a drag on innovation.

So, in principle, I agree with you that historically you did see Prussian defeat to Napoleon leading to the Stein-Hardenberg Reforms, the abolishment of guild restrictions, and a more competitive marketplace for both goods and ideas. You saw that Russian losses in the Crimean War led to the abolition of serfdom, and so there are many times in history where defeat, in particular, led to striking reforms, but right now, the competition itself doesn't seem to lead to the kinds of reforms I would've hoped to see in response.

Competitive problems in the US (15:50)

I think what antitrust does is, at the very least, it provides a tool that means that businesses are thinking twice before engaging in anti-competitive behavior.

I certainly wrote enough pieces and talked to enough people over the past decade who have been worried about competition in the United States, and the story went something like this: that you had these big tech companies — Google, and Meta, Facebook and Microsoft — that these companies were what they would call “forever companies,” that they had such dominance in their core businesses, and they were throwing off so much cash, that these were unbeatable companies, and this was going to be bad for America. People who made that argument just could not imagine how any other companies could threaten their dominance.
And yet, at the time, I pointed out that it seemed to me that these companies were constantly in fear that they were one technological advance away from being in trouble.

And then lo and behold, that's exactly what happened. And while in AI, certainly, Google is super important, and Meta/Facebook is super important, so are OpenAI and Anthropic, and there are other companies.

So the point here, after my little soliloquy, is: can we overstate these problems, at least in the United States, when it seems like it is still possible to create a new technology that breaks the apparent stranglehold of these incumbents? Google search does not look quite as solid a business as it did in 2022.

Can we overstate the competitive problems of the United States, or is what you're saying more forward-looking: that perhaps we overstated the competitive problems in the past, but now, due to these tariffs, and executives having to travel to the White House and give the president gifts, that creates a stage for the kind of competitive problems that we should really worry about?

I'm very happy to support the notion that technological changes can lead to unpredictable outcomes that incumbents may struggle to predict and respond to. Even if they predict it, they struggle to act upon it, because doing so often undermines the existing business model.

So if you take Google, where the transformer was actually conceived, the seven people behind it, I think, have since left the company. One of the reasons they didn't launch anything like ChatGPT was probably fear of cannibalizing search. So I think the most important mechanisms for dislodging incumbents are dramatic shifts in technology.

None of the legacy media companies ended up leading social media. None of the legacy retailers ended up leading e-commerce. None of the automobile leaders are leading in EVs. None of the bicycle companies, many of which went into automobiles, ended up leading there either.
So there is a pattern there. At the same time, I think you do have to worry that there are anti-competitive practices going on that make it harder, and that are costly. The revolving door between the USPTO and companies is one example of that. We also have a reasonable amount of evidence on killer acquisitions, whereby firms buy up a competitor just to shut it down. Those things are happening. I think you need to have tools that allow you to combat that, and I think, more broadly, the United States has a long history of fairly vigorous antitrust policy. I think you'd be hard pressed to suggest that that has been a tremendous drag on American business or American dynamism. So if you don't think, for example, that American antitrust policy has contributed to innovation and dynamism, at the very least, you can't really say either that it's been a huge drag on it.

In Japan, for example, in its postwar history, antitrust was extremely lax. In the United States, it was very vigorous, and it was very vigorous throughout the computer revolution as well, which it wasn't at all in Japan. If you take the lawsuit against IBM, for example, you can debate this: To what extent did it force IBM to unbundle hardware and software, and would Microsoft have been the company it is today without that? With AT&T, it's both the breakup and deregulation as well, but I think by basically all accounts that was a good idea, particularly at the time when the National Science Foundation released ARPANET into the world.

I think what antitrust does is, at the very least, it provides a tool that means that businesses are thinking twice before engaging in anti-competitive behavior.
There's always a risk of antitrust being heavily politicized, and that's always been a bad idea, but at the same time, I think having tools on the books that allow you to check monopolies and steer their investments more towards innovation rather than anti-competitive practices is, broadly speaking, a good thing. In the European Union, you often hear that competition policy is a drag on productivity. I think it's the least of Europe's problems.

Lagging European progress (22:19)

If you take the postwar period, at least Europe catches up in most key industries, and actually leads in some of them. . . but doesn't do the same in digital. The question in my mind is: Why is that?

Let's talk about Europe as we sort of finish up. We don't have to write How Progress Ends; it seems like progress has ended, so maybe we want to think about how progress restarts. Is the problem in Europe institutions, or is it the revealed preference of Europeans, that they're getting what they want? That they don't value progress and dynamism, that it is a cultural preference manifested in institutions? And if that's the case — you can tell me if that's not the case, I kind of feel like it might be the case — how do you restart progress in Europe, since it seems to have already ended?

The most puzzling thing to me is not that Europe is less dynamic than the United States — that's not very puzzling at all — but that it hasn't even managed to catch up in digital. If you take the postwar period, at least Europe catches up in most key industries, and actually leads in some of them. So in a way, take automobiles, electrical machinery, chemicals, pharmaceuticals: nobody would say that Europe is behind in those industries, or at least not for long. Europe has very robust catch-up growth in the postwar period, but doesn't do the same in digital.
The question in my mind is: Why is that?

I think part of the reason is that the returns to innovation, the returns to scaling, in Europe are relatively muted by a fragmented market in services, in particular. The IMF estimates that if you take all the trade barriers on services inside the European Union and add them up, it's something like 110 percent tariffs. Trump Liberation Day tariffs, essentially, imposed within the European Union. That means that European firms in digital and in services don't have a harmonized market to scale into, the way the United States and China have. I think that's by far the biggest reason.

On top of that, there are well-intentioned regulations like the GDPR that, by any account, have been a drag on innovation, and have been particularly harmful for startups, whereas larger firms that find it easier to manage compliance costs have essentially offset those costs by capturing a larger share of the market. I think the AI Act is going in the same direction, and so you have more hurdles, you have greater costs of innovating because of those regulatory barriers. And then the return to innovation is further capped by having a smaller, fragmented market.

I don't think that culture, or a European lust for leisure rather than work, is the key reason. I think there's some of that, but if you look at the most dynamic places in Europe, it tends to be the Scandinavian countries, and, being from Sweden myself, I can tell you that most people you will encounter there are not workaholics.

AI & labor (25:46)

I think AI at the moment has a real resilience problem.
It's very good at things where there's a lot of precedent; it doesn't do very well where precedent is thin.

As I finish up, let me ask you: Like a lot of economists who think about technology, you've thought about how AI will affect jobs. Given what we've seen in the past few years, would it be your guess that, if we were to look at the labor force participation rates of the United States and other rich countries 10 years from now, we will look at those employment numbers and think, “Wow, we can really see the impact of AI on those numbers”? Will it be extraordinarily evident, or not as much?

Unless there's very significant progress in AI, I don't think so. I think AI at the moment has a real resilience problem. It's very good at things where there's a lot of precedent; it doesn't do very well where precedent is thin. So in most activities where the world is changing, and the world is changing every day, you can't really rely on AI to reliably do work for you.

An example of that: most people know of AlphaGo beating the world champion back in 2016. Few people will know that, back in 2023, human amateurs, using standard laptops and exposing the best Go programs to new positions that they would not have encountered in training, actually beat the best Go programs quite easily. So even in a domain where the problem is basically solved, where we already achieved super-human intelligence, you cannot really know how well these tools perform when circumstances change, and I think that's really a problem. So unless we solve that, I don't think it's going to have an impact that will mean that labor force participation is significantly lower 10 years from now.

That said, I do think it's going to have a very significant impact on white-collar work, and on people's income and sense of status. I think of generative AI, in particular, as a tool that reduces barriers to entry in professional services.
I often compare it to what happened with Uber and taxi services. With the arrival of GPS technology, knowing the name of every street in New York City was no longer a particularly valuable skill, and then, with a platform matching supply and demand, anybody with a driver's license could essentially get into their car and top up their income on the side. As a result, incumbent drivers faced more competition, and they took a pay cut of around 10 percent.

Obviously, a key difference with professional services is that they're traded. So I think it's very likely that, as generative AI reduces the productivity differential between people in, let's say, the US and the Philippines in financial modeling, in paralegal work, in accounting, in a host of professional services, more of those activities will shift abroad, and I think many knowledge workers who had envisioned prosperous careers may feel a sense of loss of status and income as a consequence. I do think that's quite significant.

On sale everywhere: The Conservative Futurist: How To Create the Sci-Fi World We Were Promised

Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe

Lenny's Podcast: Product | Growth | Career
How to find hidden growth opportunities in your product | Albert Cheng (Duolingo, Grammarly, Chess.com)

Lenny's Podcast: Product | Growth | Career

Play Episode Listen Later Oct 5, 2025 85:25


Albert Cheng has led growth at three of the world's most successful consumer subscription companies: Duolingo, Grammarly, and Chess.com. A former Google product manager (and serious pianist!), Albert developed a unique approach to finding and scaling growth opportunities through rapid experimentation and deep user psychology. His teams run 1,000 experiments a year, discovering counterintuitive insights that have driven tens of millions in revenue.

What you'll learn:
1. How to use the explore-exploit framework to find new growth opportunities
2. How showing premium features to free users doubled Grammarly's upgrades to paid plans
3. What good retention looks like for a consumer subscription app
4. Why resurrected users drive 80% of mature product growth
5. Why “reverse trials” work better than time-based trials
6. The three pillars of successful gamification: core loop, metagame, and profile

—Brought to you by:
Vanta—Automate compliance. Simplify security.
Jira Product Discovery—Confidence to build the right thing
Miro—A collaborative visual platform where your best work comes to life

—Where to find Albert Cheng:
• X: https://x.com/albertc248
• LinkedIn: https://www.linkedin.com/in/albertcheng1/
• Chess.com: https://www.chess.com/member/Goniners

—Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

—Referenced:
• How Duolingo reignited user growth: https://www.lennysnewsletter.com/p/how-duolingo-reignited-user-growth
• Inside ChatGPT: The fastest-growing product in history | Nick Turley (Head of ChatGPT at OpenAI): https://www.lennysnewsletter.com/p/inside-chatgpt-nick-turley
• Explore vs. Exploit: https://brianbalfour.com/quick-takes/explore-vs-exploit
• Grammarly: https://www.grammarly.com/
• Reforge: https://www.reforge.com/
• Chess.com: https://www.chess.com/
• Everyone's an engineer now: Inside v0's mission to create a hundred million builders | Guillermo Rauch (founder & CEO of Vercel, creators of v0 and Next.js): https://www.lennysnewsletter.com/p/everyones-an-engineer-now-guillermo-rauch
• Building Lovable: $10M ARR in 60 days with 15 people | Anton Osika (CEO and co-founder): https://www.lennysnewsletter.com/p/building-lovable-anton-osika
• Figma: https://www.figma.com/
• Cursor: https://cursor.com/
• The rise of Cursor: The $300M ARR AI tool that engineers can't stop using | Michael Truell (co-founder and CEO): https://www.lennysnewsletter.com/p/the-rise-of-cursor-michael-truell
• Claude Code: https://www.anthropic.com/claude-code
• GitHub Copilot: https://github.com/features/copilot
• Noam Lovinsky on LinkedIn: https://www.linkedin.com/in/noaml/
• The happiness and pain of product management | Noam Lovinsky (Grammarly, Facebook, YouTube, Thumbtack): https://www.lennysnewsletter.com/p/the-happiness-and-pain-of-product
• Kyla Siedband on LinkedIn: https://www.linkedin.com/in/kylasiedband/
• The Duolingo handbook: https://blog.duolingo.com/handbook/
• Lenny's post on X about the Duolingo handbook: https://x.com/lennysan/status/1889008405584683091
• The rituals of great teams | Shishir Mehrotra of Coda, YouTube, Microsoft: https://www.lennysnewsletter.com/p/the-rituals-of-great-teams-shishir
• Duolingo on TikTok: https://www.tiktok.com/@duolingo
• Kasparov vs. Deep Blue | The Match That Changed History: https://www.chess.com/article/view/deep-blue-kasparov-chess
• Magnus Carlsen: https://en.wikipedia.org/wiki/Magnus_Carlsen
• Elo rating system: https://www.chess.com/terms/elo-rating-chess
• Stockfish: https://en.wikipedia.org/wiki/Stockfish_(chess)
• AlphaGo on Prime Video: https://www.primevideo.com/detail/AlphaGo/0KNQHKKDAOE8OCYKQS9WSSDYN0
• Statsig: https://www.statsig.com/
• The State of Product in 2026: Navigating Change, Challenge, and Opportunity: https://www.atlassian.com/blog/announcements/state-of-product-2026
• Erik Allebest on LinkedIn: https://www.linkedin.com/in/erikallebest/
• Daniel Rensch on X: https://x.com/danielrensch
• Chariot: https://en.wikipedia.org/wiki/Chariot_(company)
• San Francisco 49ers: https://www.49ers.com/
• Breville Barista Express: https://www.breville.com/en-us/product/bes870

—Recommended books:
• Snuggle Puppy!: A Little Love Song: https://www.amazon.com/Snuggle-Puppy-Little-Boynton-Board/dp/1665924985
• Ogilvy on Advertising: https://www.amazon.com/Ogilvy-Advertising-David/dp/039472903X
• Dark Squares: How Chess Saved My Life: https://www.amazon.com/Dark-Squares-Chess-Saved-Life/dp/1541703286

—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.

Lenny may be an investor in the companies discussed. To hear more, visit www.lennysnewsletter.com

Emergent Behavior
Dopamine Optimization vs Human Flourishing

Emergent Behavior

Play Episode Listen Later Oct 1, 2025 89:22


Explore game development philosophy and AI's evolving impact through Factorio creator Michal Kovařík's insights on AlphaGo's transformation of Go, current programming limitations, and the future of human-AI collaboration.

Bio: Michal Kovařík is a Czech game developer best known as the co-founder and creative head of Wube Software, the studio behind the global indie hit Factorio. Under his online alias “kovarex,” Kovařík began the Factorio project in 2012 with a vision to blend his favorite game elements – trains, base-building, logistics, and automation – into a new kind of construction & management simulation. Initially funded via a modest Indiegogo campaign, Factorio blossomed from a garage project into one of Steam's top-rated games, praised for its deep automation gameplay and technical excellence. Kovařík guided Factorio through an 8-year development in open alpha/early access, cultivating a passionate player community through regular “Friday Facts” blog updates. By 2024, Factorio had sold over 4 million copies worldwide, all without ever going on sale. Michal now leads a team of ~30 in Prague, renowned for their principled business approach (no discounts, no DRM) and fan-centric development style, and he's just launched Factorio's Space Age expansion.

FOLLOW ON X: @8teAPi (Ate) @steveruizok (Michal) @TurpentineMedia

-- LINKS: Factorio https://www.factorio.com/

-- TIMESTAMPS:
(00:00) Introduction and Factorio Discussion
(07:36) AlphaGo's Impact on Go and AI Perception
(18:56) Factorio's Origin Story and Team Development
(30:13) AI's Current Programming Limitations
(44:50) Future Predictions for AI Programming
(48:31) Societal Concerns: Resource Curse and Human Value
(55:21) Privacy, Surveillance, and Training Data
(1:01:22) AI Alignment and Asimov's Robot Laws
(1:10:00) Social Media as Proto-AI and Dopamine Manipulation
(1:20:00) Programming Human Preferences and Goal Modification
(1:26:00) Historical Perspective and Conclusion

Vital Signs
Ep 61: Co-Founder of Chai Discovery Joshua Meier on 99% Faster Drug Discovery, BioTech's AlphaGo Moment, Building Photoshop for Molecules

Vital Signs

Play Episode Listen Later Sep 22, 2025 56:45


In this episode, Jacob sits down with Joshua Meier, co-founder of Chai Discovery and former Chief AI Officer at Absci, to explore the breakthrough moment happening in AI drug discovery. They discuss how the field has evolved through three distinct waves, with the current generation of companies finally achieving success rates that seemed impossible just years ago. The conversation covers everything from moving drug discovery out of the lab and into computers, to why AI models think differently than human chemists, to the strategic decisions around open sourcing foundational models while keeping design capabilities proprietary. It's an in-depth look at how AI is fundamentally changing pharmaceutical innovation and what it means for the future of medicine.

Check out the full Chai-2 Zero-Shot Antibody report linked here: https://www.biorxiv.org/content/10.1101/2025.07.05.663018v1.full.pdf

(0:00) Intro
(1:25) The Evolution of AI in Drug Discovery
(5:14) Current State and Future of AI in Biotech
(10:08) Challenges and Modalities in Therapeutics
(14:44) Data Generation and Model Training
(22:52) Open Source and Model Development at Chai
(29:52) Open Source Models and Their Impact
(34:36) How Should Chai-2 Be Used?
(38:53) The Future of AI in Pharma and Biotech
(42:46) Key Milestones and Metrics in AI-Driven Drug Discovery
(47:20) Critiques and Hesitation
(54:01) Quickfire

Out-Of-Pocket: https://www.outofpocket.health/

Device Nation
Dr. Adam Brekke and Peter Verillo talk AI in Orthopedics....Promise, or Peril?

Device Nation

Play Episode Listen Later Aug 28, 2025 80:57


Episode two in a Device Nation series examining what AI is bringing our way in the operative Orthopedic space!

First up on deck is Peter Verillo, CEO of Redefine Surgery: "Redefine Surgery is developing the Mentor Vision System, an AI platform that helps surgeons implant instruments and devices with speed and precision. By building the world's richest dataset of expert surgeons, we're enabling advanced surgical guidance and creating an AI mentor capable of delivering top 1% clinical outcomes across specialties."

Our closer is Duke Health Orthopedic Surgeon Dr. Adam Brekke, as we talk about Texas BBQ, enabling tech, "move 37", synthetic biology and so much more!!

"AI will not replace Doctors, but will instead augment them, enabling physicians to practice better medicine with greater accuracy and increased efficiency." Benjamin Bell

Redefine Surgery: https://www.redefinesurgery.com/
Dr. Brekke Clinic: https://www.dukehealth.org/find-doctors-physicians/adam-brekke-md
Move 37: https://en.wikipedia.org/wiki/AlphaGo_versus_Lee_Sedol
Nexus Book: https://a.co/d/ey2EbJ6
The Coming Wave Book: https://a.co/d/71CidAl

Support the show

LatamlistEspresso
AgendaPro raises $35M, Ep 215

LatamlistEspresso

Play Episode Listen Later Aug 26, 2025 2:56


This week's Espresso covers news from IORQ, Monato, Creditop, and more!

Outline of this episode:
[00:30] – AgendaPro raises $35M from Riverwood Capital
[00:44] – Digitt raises $10M Series A led by Yolo Investments
[00:53] – Kira raises $6.7M seed round led by Blockchange Ventures
[01:05] – Rappi secures $100M debt financing, its largest credit deal to date
[01:16] – Asaas raises $18.5M FIDC
[01:28] – Iniciador raises $6M to expand Pix modalities
[01:38] – AlphaGo raises $2M pre-seed round
[01:53] – Nuvia acquires DataSaga.ai

Resources & people mentioned:
Startups: AgendaPro, Digitt, Kira, Rappi, Asaas, Iniciador, AlphaGo
VCs: Riverwood Capital, Yolo Investments, Blockchange Ventures, Kirkoswald Private Credit, Banco Santander, Itaú BBA, Valor Capital, Black Mamba Holding, Nuvia, DataSaga.ai

Unsupervised Learning
Ep 72: Co-Founder of Chai Discovery Joshua Meier on 99% Faster Drug Discovery, BioTech's AlphaGo Moment, Building Photoshop for Molecules

Unsupervised Learning

Play Episode Listen Later Aug 13, 2025 57:15


In this episode, Jacob sits down with Joshua Meier, co-founder of Chai Discovery and former Chief AI Officer at Absci, to explore the breakthrough moment happening in AI drug discovery. They discuss how the field has evolved through three distinct waves, with the current generation of companies finally achieving success rates that seemed impossible just years ago. The conversation covers everything from moving drug discovery out of the lab and into computers, to why AI models think differently than human chemists, to the strategic decisions around open sourcing foundational models while keeping design capabilities proprietary. It's an in-depth look at how AI is fundamentally changing pharmaceutical innovation and what it means for the future of medicine.

Check out the full Chai-2 Zero-Shot Antibody report linked here: https://www.biorxiv.org/content/10.1101/2025.07.05.663018v1.full.pdf

[0:00] Intro
[2:10] The Evolution of AI in Drug Discovery
[6:09] Current State and Future of AI in Biotech
[11:15] Challenges and Modalities in Therapeutics
[15:19] Data Generation and Model Training
[23:59] Open Source and Model Development at Chai
[28:35] Protein Structure Prediction and Diffusion Models
[30:57] Open Source Models and Their Impact
[35:41] How Should Chai-2 Be Used?
[39:34] The Future of AI in Pharma and Biotech
[43:51] Key Milestones and Metrics in AI-Driven Drug Discovery
[48:24] Critiques and Hesitation
[55:06] Quickfire

With your co-hosts:
@jacobeffron - Partner at Redpoint, Former PM Flatiron Health
@patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn
@ericabrescia - Former COO Github, Founder Bitnami (acq'd by VMWare)
@jordan_segall - Partner at Redpoint

Faster, Please! — The Podcast
✨ AI and the future of R&D: My chat (+transcript) with McKinsey's Michael Chui

Faster, Please! — The Podcast

Play Episode Listen Later Jul 31, 2025 23:10


My fellow pro-growth/progress/abundance Up Wingers,

The innovation landscape is facing a difficult paradox: Even as R&D investment has increased, productivity per dollar invested is in decline. In his recent co-authored paper, The next innovation revolution—powered by AI, Michael Chui explores AI as a possible solution to this dilemma.

Today on Faster, Please! — The Podcast, Chui and I explore the vast potential for AI-augmented research and the challenges and opportunities that come with applying it in the real world.

Chui is a senior fellow at QuantumBlack, McKinsey's AI unit, where he leads McKinsey research in AI, automation, and the future of work.

In This Episode

* The R&D productivity problem (01:21)
* The AI solution (6:13)
* The business-adoption bottleneck (11:55)
* The man-machine team (18:06)
* Are we ready? (19:33)

Below is a lightly edited transcript of our conversation.

The R&D productivity problem (01:21)

All the easy stuff, we already figured out. So the low-hanging fruit has been picked, things are getting harder and harder.

Pethokoukis: Do we understand what explains this phenomenon where we seem to be doing lots of science, and we're spending lots of money on R&D, but the actual productivity of that R&D is declining? Do we have a good explanation for that?

I don't know if we have just one good explanation. The folks that we both know have been working on both what the causes of this are, as well as what some of the potential solutions are, but I think it's a bit of a hidden problem. I don't think everyone understands that there are a set of people who have looked at this — quite notably Nick Bloom at Stanford, who published this somewhat famous paper that some people are familiar with. But it is surprising in some sense.

At one level, it's amazing what science and engineering has been able to do.
We continue to see these incredible advances, whether it's in AI, or biotechnology, or whatever; but also, what Nick and other researchers have discovered is that we are producing less for every dollar we spend on R&D. That's this little bit of a paradox, or this challenge, that we see. What some of the research we've been trying to do is understand: Can AI contribute to bending those curves?

. . . I'm a computer scientist by training. I love this idea of Moore's Law: Every couple of years you can double the number of transistors you can put on a chip, or whatever, for the same amount of money. There's something called “Eroom's Law,” which is Moore spelled backwards, and basically it said: For decades in the pharmaceutical industry, the number of compounds or drugs you would produce for every billion dollars of R&D would get cut in half every nine years. That's obviously moving in the wrong direction. That challenge, I don't think everyone is aware of, but it's one that we need to address.

I suppose, in a way, it does make sense that as we tackle harder problems, and we climb the tree of knowledge, it's going to take more time, maybe more researchers, and the researchers themselves may have to spend more time in school. So it may be a bit of a hidden problem, but it makes some intuitive sense to me.

I think there's a way to think about it that way, which is: All the easy stuff, we already figured out. So the low-hanging fruit has been picked; things are getting harder and harder. It's amazing. You could look at some of the early papers in any field and they'd have a handful of authors, right? The DNA paper, three authors — although it probably should have included Rosalind Franklin . . . Now you look at a physics paper or a computer science paper — the author list just goes on, sometimes for pages. These problems are harder.
They require more and more effort, whether it's people's talents, or computing power, or large-scale experiments; things are getting harder to do. I think there are ways in which that makes sense. Are there other ways in which we could improve processes? Probably, too.

We could invest more in research, make it more efficient, and encourage more people to become researchers. To me, what's more exciting than automating different customer service processes is accelerating scientific discovery. I think that's what makes AI so compelling.

That is exactly right. Now, by the way, I think we need to continue to invest in basic research and in science and engineering. I think that's absolutely important, but —

That's worth noting, because I'm not sure everybody thinks that, so I'm glad you highlighted that.

I don't think AI means that everything becomes cheaper and we don't need to invest in both human talent and research. That's number one.

Number two, as you said, we spend a lot of time, and appropriately so, talking about how AI can improve productivity, make things more efficient, do the things that we do already cheaper and faster. I think that's absolutely true. But we had the opportunity to look over history, and what has actually improved the human condition over decades, and centuries, and millennia, is, in fact, discovering new ideas, having scientific breakthroughs, and turning those scientific breakthroughs into engineering that turns into products and services that do everything from expanding our lifespans to providing us with food and more energy.
All those sorts of things require innovation, require R&D, and what we've discovered is the potential for AI not only to make things more efficient, but to produce more innovation, more ideas that hopefully will lead to breakthroughs that help us all.

The AI solution (6:13)

I think that's one of the other potentials of using AI, that it could both absorb some of the experience that people have, as well as stretch the bounds of what might be possible.

I've heard it described as an “IMI,” an invention that makes more invention: an invention of a method of invention. That sounds great — how's it going to do that?

There are a couple of ways. We looked at three different channels through which AI could improve this process of innovation and R&D. The first one is just increasing the volume, velocity, and variety of different candidates. One way you could think about innovation is that you create a whole bunch of candidates and then you filter them down to the ones that might be most effective. You can just fill that funnel faster, better, and with greater variety. That's number one.

The candidates could be a molecule, it could be a drug, it could be a new alloy, it could be lots of things.

Absolutely, or a design for a physical product. One of the interesting things is that this quote-unquote “modern AI” — AI's been around for 70 years — is based on foundation models, these large artificial neural networks trained on huge amounts of data, and they produce unstructured outputs. In many cases that output is language, which is why we talk about LLMs.

The interesting thing is, you can train these foundation models not just to generate language; you can generate a protein, or a drug candidate, as you were saying.
You can imagine the prompt being, “Please produce 10 drug candidates that address this condition, but without the following side effects.” That's not exactly how it works, but roughly speaking, that's the potential to generate these things, or generate an electrical circuit, or a design for an airfoil or an airframe that has these characteristics. Being able to just generate those.

The interesting thing is, not only can you generate them faster, but there's this idea that you can create more variety. We're justifiably proud as humans of our creativity, but also, that judgment, that training, that experience we have sometimes constrains it. The famous example: some folks created this machine called AlphaGo, which was meant to compete against the world champion in this game called Go, a very complex strategic game. Famously, it beat the world champion, but one of the things it did is this famous Move 37, this move that everyone who was an expert at Go said, “That is nuts. Why would you possibly do that?” Because the machine was a little bit more unconstrained, it actually came up with what you might describe as a creative idea. I think that's one of the other potentials of using AI: it could both absorb some of the experience that people have, and stretch the bounds of what might be possible.

So you come up with the design, and then a variety of options, and then AI can help model and test them.

Exactly. So you generate a broader and more voluminous set of potential designs, candidates, whether it's molecules, or chemicals, or what have you. Now you need to narrow that down. Traditionally you would narrow it down through physical testing — so put something into a wind tunnel, or run it through the water if you're looking at a boat design, or put it in an electromagnetic chamber and see how the antenna operates.
You'd either test it physically, and then, of course, lots of people figured out how to use physics, mathematical equations, in order to create “digital twins.” So you have these long acronyms like CFD, for computational fluid dynamics, basically a virtual wind tunnel, or what have you. Or you have finite element analysis, another way to model how a structure might perform, or computational electromagnetic modeling. All these are ways you can use physics to simulate things, and that's been terrific.

But some of those models actually take hours, sometimes days, to run. It might be faster than building the physical prototype and then modeling it — again, sometimes you just wait until something breaks; you're doing failure testing. Then you could do that in a computer using these models. But sometimes they take a really long time, and one of the really interesting discoveries in “AI” is that you can use that same neural network that we've used to simulate cognition or intelligence, but now use it to simulate physical systems. So in some ways it's not AI, because you're not creating an artificial intelligence, you're creating an artificial wind tunnel. It's just a different way to model physics. Sometimes these problems get even more complicated . . . If you're trying to put an antenna on an airplane, you need to know how the airflow is going to go over it, but you also need to know whether or not the radio frequency stuff works out, all that RF stuff.

So with these multiphysics models, the complexity is even higher, and you can train these neural nets . . . even faster than these physics-based models. So we have these things called AI surrogate models. They're sort of surrogates; it's two steps removed, in some ways, from actual physical testing . . . Literally we've seen models that can run in minutes rather than hours, or an hour rather than a few days. That can accelerate things.
We see this in weather forecasting, and in a number of different ways in which this can happen. If you can generate more candidates and then test them faster, you can imagine the whole R&D process really accelerating.

The business-adoption bottleneck (11:55)

We know that companies are using AI surrogates, deep learning surrogates, already, but is it being applied in as many places as possible? No, it isn't.

Does achieving your estimated productivity increases depend more on further technological advances or does it depend more on how companies adopt and implement the technology? Is the bottleneck still in the tech itself, or is it more about business adaptation?

Mostly number two. The technology is going to continue to advance. As a technologist, I love all that stuff, but as usual, a lot of the challenges here are organizational challenges. We know that companies are using AI surrogates, deep learning surrogates, already, but is it being applied in as many places as possible? No, it isn't. A lot of these things are organizational. Does it match your strategy, for instance? Do you have the right talent and organization in place?

Let me just give one very specific example. In a lot of R&D organizations we know, there's a separate organization for physical testing and a separate organization for simulation. Simulation, in many cases, is physics-based, but you add these deep-learning surrogates as well. That doesn't make sense at some level. I'm not saying physical testing goes away, but you need to figure out when you should physically test, when you should use which simulation methods, when you should use deep-learning surrogates or AI techniques, et cetera. That's just one organizational change you could make if you were in an organization that was actually taking this whole testing regime seriously, where you're actually parsing out what the optimal amount of physical testing is versus simulation, et cetera.
There are a number of things where that's true.

Even before AI, historically, there was a gap between novel, new technologies, what they can do in lab settings, and then how they're applied in real-world research or in business environments. That gap, I would guess, probably requires companies to rewire how they operate, which takes time.

It is indeed, and it's funny that you use the word “rewiring.” My colleagues wrote a book entitled Rewired, which is literally about the different ways, together, that you need to, as you say, rewire or change the way an organization operates. Only one of those six chapters is around the tech stack. That's still absolutely important; you've got to get all that stuff right. But it is mostly all of the other things surrounding how you change the way an organization operates in order to bring the full value of this together and reach scale.

We also talk about pilot purgatory: “We did this cool experiment . . .” but when is it good enough that the CFO talks about it in the quarterly earnings report? That requires the organization to change the way it operates. That's the learning we've seen all the time.

We've been surveying thousands of executives on their use of AI for seven years now. Nearly 80 percent of organizations say they're regularly using AI someplace in the business, but in a separate survey, only one percent say they're mature in that usage. There's this giant gap between just using AI and actually having the value be created. And by the way, organizations that are creating that value are accelerating their performance difference. If you have a much more productive R&D organization that churns out products that are successful in the market, you're going to be ahead of your competitors, and that's what we're seeing too.

Is there a specific problem that comes up over and over again with companies, either in their implementation of AI, maybe they don't trust it, they may not know how to use it?
What do you think is the problem?

Unfortunately, I don't think there's just one thing. My colleagues who do this work on Rewired, for instance — you kind of have to do all those things. You do have to have the right talent and organization in place. You have to figure out scaling, for instance. You have to figure out change management. All of those things together are what underpins outsized performance, so all those things have to be done.

So if companies are successful, what is the productivity impact you see? We're talking about basically the current technology level, give or take. We're not talking about human-level AI or superintelligence; we're talking about AI more or less as it exists today. Everybody wants to accelerate productivity: governments around the world, companies. So give me a feel for that.

There are different measures of productivity, but here what we're talking about is basically: How many new, successful products can you put out in the market? Our modeling says that, depending on your industry, you could double your R&D productivity. In other words, you could put out double the amount of new products and services that you have previously.

Now, that's not true for every industry. And the impact of that is different for different industries, because some industries are more dependent on it. In pharmaceuticals, the majority of your value comes from producing new products and services over time, because eventually the patent runs out. There are other industries, what we call science-based industries like chemicals, for instance, where the new-product development process is very, very close to the science of chemistry.
So with these levers that I just talked about — producing more candidates, being able to evaluate them more quickly, and all the other things that LLMs can do — in general, we could see a potential doubling in the pace at which innovation happens.

On the other hand, the chemicals industry — let's leave out specialty chemicals, but the commodity chemicals — they'll still produce ethylene, right? So to a certain extent, while the R&D process can be accelerated a great deal, the EBIT [Earnings Before Interest and Taxes] impact on the industry might be lower than it is for pharmaceuticals, for instance. But still, it's valuable. And then, again, if you're in specialty chem, it means a lot to you. So depending on your position in the market, it can vary, but the potential is really high.

The man-machine team (18:06)

At least for the medium term, we're not going to be able to get rid of all the people. The people are going to be absolutely important to the process.

Will future R&D look more like researchers augmented by AI or AI systems assisted by researchers? Who's the assistant in this equation? Who's working for whom?

It's “all of the above,” and it depends on how you decide to use these technologies, but we even write in our paper that we need to be thoughtful about where you put the human in the loop. In every study the conditions matter, but there are lots of studies showing that the combination of machines and humans — AI and researchers — is the most powerful combination. Each brings their respective strengths, but the funny thing is that sometimes the human biases actually decrease the performance of the overall system, and so, oh, maybe we should just go with machines. At least for the medium term, we're not going to be able to get rid of all the people. The people are going to be absolutely important to the process.

When is it that people either are necessary to the process or can be helpful?
In many cases, it is around things like: when is it that you need to make a decision that's a safety-critical decision, or a regulatory decision where you just have to have a person look at it? That's the necessity argument for people in the loop. But also, there are things that machines just don't do well enough yet, and there's a little bit of that.

Are we ready? (19:33)

. . . AI is one of those things that can produce potentially more of those ideas that can underpin, hopefully, an improved quality of life for us and our children.

If we can get more productive R&D, and businesses get better at incorporating this into their processes so they can potentially generate more products and services, do we have a government ready for that world of accelerated R&D? Can we handle that flow? My bias says probably not, but please correct me if I'm wrong.

I think one of the interesting things is that people talk about AI regulation, but in many of these industries, the regulations already exist. We have regulations for what goes out in pharmaceuticals, for instance. We have regulations in the aviation industry, we have regulations in the automobile industry, and in many ways, AI in the R&D process doesn't change that. Maybe it should; people talk about whether you can actually accelerate the process of approving a drug, for instance, but that wasn't the thing that we studied. In some ways, those processes apply now, already, so that's something that doesn't necessarily have to change.

That said, are some of these potential innovations gated by approval processes or clinical trials? Absolutely. In some of those cases, the clinical trials gate is not necessarily a regulation; we know there's a big problem just finding enough potential subjects to do clinical trials.
That's not a regulatory problem; that's a problem of finding people who are good candidates for actually testing these drugs.

So yes, in some cases, even if we were able to double the number of candidates that go through the funnel, there will be these exogenous issues that constrain society's ability to bring these to market. That just says: you squeeze the balloon here and it opens up there. So let's go solve each of these problems, and one of the problems we said AI can help solve is increasing the number of things that you could potentially put into the market, if they can get past the other hurdles.

For a general public where so much of what they're hearing about AI tends to be about job loss, or whether it's stealing copyrighted material, or people talking about these huge advances they're not seeing yet: what is your optimistic elevator pitch? You may be worried about the impact of AI, but why are you excited about it?

By the way, I think all those things are really important, all of those concerns, and how do we reskill the workforce; we've done work on that as well. But the thing that I'm excited about is that we need innovation, we need new ideas, we need scientific advancements, and engineering that turns them into products, in order for us to improve the human condition: whether it's living longer lives, or living higher-quality lives, whether it's having the energy to be able to support that in a way that doesn't cause other problems. All of those things, we need to have them, and what we've discovered is that AI is one of those things that can produce potentially more of those ideas that can underpin, hopefully, an improved quality of life for us and our children.

On sale everywhere: The Conservative Futurist: How To Create the Sci-Fi World We Were Promised

Micro Reads

▶ Economics
* The Tariffs Kicked In. The Sky Didn't Fall. Were the Economists Wrong? - NYT Opinion
* AI Disruption Is Coming for These 7 Jobs, Microsoft Says - Barron's
* One Way to Ease the US Debt Crisis? Productivity - Bberg Opinion
* So far, only one-third of Americans have ever used AI for work - Ars
▶ Business
* Meta and Microsoft Keep Their License to Spend - WSJ
* Meta Pivots on AI Under the Cover of a Superb Quarter - Bberg Opinion
* Will Mark Zuckerberg's secret, multibillion-dollar AI plan win over Wall Street? - FT
* The AI Company Capitalizing on Our Obsession With Excel - WSJ
* $15 billion in NIH funding frozen, then thawed Tuesday in ongoing power war - Ars
* Mark Zuckerberg promises you can trust him with superintelligent AI - The Verge
* AI Finance App Ramp Is Valued at $22.5 Billion in Funding Round - WSJ
▶ Policy/Politics
* Trump's Tariff Authority Is Tested in Court as Deadline on Trade Deals Looms - WSJ
* China is betting on a real-world use of AI to challenge U.S. control - Wapo
▶ AI/Digital
* ‘Superintelligence' Will Create a New Era of Empowerment, Mark Zuckerberg Says - NYT
* How Exposed Are UK Jobs to Generative AI? Developing and Applying a Novel Task-Based Index - Arxiv
* Mark Zuckerberg Details Meta's Plan for Self-Improving, Superintelligent AI - Wired
* A Catholic AI app promises answers for the faithful. Can it succeed? - Wapo
* Power Hungry: How AI Will Drive Energy Demand - SSRN
* The two people shaping the future of OpenAI's research - MIT
* Task-based returns to generative AI: Evidence from a central bank - CEPR
▶ Biotech/Health
* How to detect consciousness in people, animals and maybe even AI - Nature
* Why living in a volatile age may make our brains truly innovative - NS
▶ Clean Energy/Climate
* The US must return to its roots as a nation of doers - FT
* How Trump Rocked EV Charging Startups - Heatmap
* Countries Promise Trump to Buy U.S. Gas, and Leave the Details for Later - NYT
* Startup begins work on novel US fusion power plant. Yes, fusion. - E&E
* Scientists Say New Government Climate Report Twists Their Work - Wired
▶ Robotics/Drones/AVs
* The grand challenges of learning medical robot autonomy - Science
* Coal-Powered AI Robots Are a Dirty Fantasy - Bberg Opinion
▶ Up Wing/Down Wing
* A Revolutionary Reflection - WSJ Opinion
* Why Did the Two Koreas Diverge? - SSRN
* The best new science fiction books of August 2025 - NS
* As measles spreads, old vaccination canards do too - FT

Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe

Before AGI
Jakub Pachocki and Szymon Sidor: Building AI

Before AGI

Play Episode Listen Later Jul 31, 2025 54:33


Artificial intelligence is rapidly transforming the world, raising urgent questions about its impact, governance, and the future of human-machine collaboration. As AI systems become more capable, society faces challenges around safety and the balance of power. What does it mean to build and deploy technology that can reason, create, and potentially automate research itself? How do leading researchers navigate the technical and ethical frontiers of this new era?

Jakub Pachocki, Chief Scientist at OpenAI, and Szymon Sidor, Technical Fellow at OpenAI, share their journeys from early programming competitions in Poland to shaping some of the most advanced AI systems in the world. They discuss the evolution of AI research, the technical and emotional challenges of building breakthrough models, and the profound societal questions that come with unprecedented progress.

2:42 - Origin story: high school to OpenAI
6:31 - “AI enlightenment” and AlphaGo moment
17:12 - Early OpenAI culture and impostor syndrome
23:30 - Power duo dynamic and collaboration
27:25 - Shift to reasoning models
36:23 - Possibilities of AGI
42:12 - OpenAI's pandemic efforts showed AI's immaturity
51:15 - Governance lessons from crisis
55:39 - AI safety and optimism for the future

Subliminal Jihad
[#256] PROPHETS OF (p)DOOM, Part One: Rationalism and AI Psychosis feat. Vincent Lê

Subliminal Jihad

Play Episode Listen Later Jul 30, 2025 113:30


Dimitri and Khalid speak with academic and Substack writer Vincent Lê about the current fevered dystopian landscape of AI, including: the Silicon Valley philosophy of "Rationalism", the Zizian cult, the qualitative difference between LLMs and self-training AIs like AlphaGo and DeepMind, AlphaGo mastering the ancient Chinese game Go, Scott Boorman's 1969 book "Protracted Game: A Wei-ch'i Interpretation of Maoist Revolutionary Strategy", Capital as the first true AGI system, the Bolshevik Revolution as the greatest attempt to build a friendly alternative AGI, and more...part one of two. Vincent's Substack: https://vincentl3.substack.com

VP Land
We tested JSON prompting in Veo 3

VP Land

Play Episode Listen Later Jul 29, 2025 31:10 Transcription Available


Is JSON prompting a useful technique or just an influencer trend? In this episode, we examine the heated debate around structured prompts in Veo 3, test the claims ourselves, and share the results. Plus, we dive into Higgsfield Steal's controversial marketing approach and explore AlphaGo, the AI system designed to build other AI models that could accelerate the path to artificial superintelligence.

--

The views and opinions expressed in this podcast are the personal views of the hosts and do not necessarily reflect the views or positions of their respective employers or organizations. This show is independently produced by VP Land without the use of any outside company resources, confidential information, or affiliations.

Vetandets värld
Skapade AI-modellen som överlistade människan – här är Hassabis väg till Nobelpriset

Vetandets värld

Play Episode Listen Later Jul 22, 2025 19:34


In 2016, the world held its breath as the AI model AlphaGo challenged the world champion at the game of Go and won. In 2024, Demis Hassabis, the mind behind the model, was awarded the Nobel Prize for an entirely different discovery. Listen to all episodes in the Sveriges Radio Play app. The program first aired on December 5, 2024. At just eight years old, Demis Hassabis bought his first computer with the prize money from a chess tournament. As an adult, he developed the first computer system to outsmart a human world champion at a game more advanced than chess. Vetenskapsradion meets Demis Hassabis, one of the 2024 Nobel laureates in chemistry, for a personal conversation about the path from chess nerd to the Google elite and a Nobel Prize. Reporter: Annika Östman annika.ostman@sr.se Producer: Lars Broström lars.brostrom@sr.se

No Priors: Artificial Intelligence | Machine Learning | Technology | Startups
Meet AlphaEvolve: The Autonomous Agent That Discovers Algorithms Better Than Humans With Google DeepMind's Pushmeet Kohli and Matej Balog

No Priors: Artificial Intelligence | Machine Learning | Technology | Startups

Play Episode Listen Later Jun 26, 2025 42:08


Much of the scientific process involves searching. But rather than continue to rely on the luck of discovery, Google DeepMind has engineered a more efficient AI agent that mines complex spaces to facilitate scientific breakthroughs. Sarah Guo speaks with Pushmeet Kohli, VP of Science and Strategic Initiatives, and research scientist Matej Balog at Google DeepMind about AlphaEvolve, an autonomous coding agent they developed that finds new algorithms through evolutionary search. Pushmeet and Matej talk about how AlphaEvolve tackles the problem of matrix multiplication efficiency, scaling and iteration in problem solving, and whether or not this means we are at self-improving AI. Together, they also explore the implications AlphaEvolve has for other sciences beyond mathematics and computer science. Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @pushmeet | @matejbalog

Chapters:
00:00 Pushmeet Kohli and Matej Balog Introduction
0:48 Origin of AlphaEvolve
02:31 AlphaEvolve's Progression from AlphaGo and AlphaTensor
08:02 The Open Problem of Matrix Multiplication Efficiency
11:18 How AlphaEvolve Evolves Code
14:43 Scaling and Predicting Iterations
16:52 Implications for Coding Agents
19:42 Overcoming Limits of Automated Evaluators
25:21 Are We At Self-Improving AI?
28:10 Effects on Scientific Discovery and Mathematics
31:50 Role of Human Scientists with AlphaEvolve
38:30 Making AlphaEvolve Broadly Accessible
40:18 Applying AlphaEvolve Within Google
41:39 Conclusion

Korean. American. Podcast
Episode 99: The Match Review (Media)

Korean. American. Podcast

Play Episode Listen Later May 29, 2025 119:00


This week Jun and Daniel review the popular Korean film "The Match" (승부), which tells the story of two legendary Go players in Korea during the late 1980s and early 1990s. Our hosts explore the cultural significance of Go in Korean society, discussing how it was once one of the four major activities Korean children would pursue alongside math academies, taekwondo, and piano. They delve into the controversy surrounding the film's star Yoo Ah-in and his drug scandal, examining Korea's strict cancellation culture and how it differs between actors, K-pop stars, and politicians. The conversation expands to cover the historic AlphaGo vs. Lee Sedol match in 2016 and its symbolic impact on Korean society's understanding of AI. Through scene-by-scene analysis, they highlight cultural details from 1980s Korea including car parades for international achievements, traditional family hierarchies, smoking culture, and nostalgic elements like fumigation trucks and Nikon cameras as status symbols.If you're interested in learning about the cultural significance of Go in East Asian societies, understanding Korea's approach to celebrity scandals and cancellation culture, exploring the philosophical differences between individualism and traditional hierarchy in Korean society, or discovering nostalgic details about 1980s Korean life including housing styles and family dynamics, tune in to hear Daniel and Jun discuss all this and more! This episode also touches on topics like the decline of Go's popularity in modern Korea, the East Asian "Cold War" competition in Go between Korea, Japan, and China, and how the film serves as a metaphor for Korea's journey from copying to innovating on the global stage.Support the showAs a reminder, we record one episode a week in-person from Seoul, South Korea. 
We hope you enjoy listening to our conversation, and we're so excited to have you following us on this journey!Support us on Patreon:https://patreon.com/user?u=99211862Follow us on socials: https://www.instagram.com/koreanamericanpodcast/https://twitter.com/korampodcasthttps://www.tiktok.com/@koreanamericanpodcastQuestions/Comments/Feedback? Email us at: koreanamericanpodcast@gmail.com

The AI Report
Now That We Have Google DeepMind's Gemini System We Don't Need Human Employees.

The AI Report

Play Episode Listen Later May 14, 2025 8:04


Artie Intel and Micheline Learning report on Artificial Intelligence for The AI Report. ChatGPT now boasts over 200 million users worldwide. Google DeepMind's Gemini system is turning heads. It processes and reasons across text, images, audio, and video, outperforming humans on over 30 benchmarks. Synthesia lets users create AI videos using over 230 avatars in 140 languages. AI notetakers like Fathom and Nyota are streamlining meetings, while automation tools such as n8n are handling repetitive tasks behind the scenes. Claude and DeepSeek are making waves for their advanced code generation and reasoning skills, while app builders like Bubble and Bolt empower anyone to create software, no coding degree necessary. Google has launched Gemma 3, a new family of open AI models designed for flexibility and top-tier performance. DeepSeek, a rising AI star from China, has released DeepSeek-VL, an upgraded model excelling at multimodal reasoning, combining text and image analysis. OpenAI has just rolled out the o3-mini model, optimized for efficient reasoning and lower computational costs. Meta is investing a staggering $65 billion in AI this year, including a massive new data center in Louisiana. Microsoft's Copilot X Enterprise is transforming productivity in the workplace. Powered by next-gen GPT-4 Turbo, it automates complex tasks across Office 365, integrating text, image, and code in a seamless workflow. Meta's latest LLaMA 3 model is a powerhouse, boasting over a trillion parameters, fifteen times more than GPT-4. China's WuDao 3.0, paired with its new AI supercomputer, is setting records in computer vision, natural language processing, and robotics. Grok-3 from xAI delivers high-performance reasoning, content generation, and deep contextual understanding.
AlphaGo, still celebrated for its creative and strategic prowess, has inspired a new generation of AI systems capable of learning, adapting, and even surprising human experts with unconventional solutions. Hisense is unveiling appliances that personalize your environment, boost energy efficiency, and connect seamlessly with your digital ecosystem. DataRobot delivers the industry-leading agentic AI applications and platform that maximize impact and minimize risk for your business. Request A Demo: Datarobot.com/ The AI Report

Chain Reaction
Sam Lehman: What the Reinforcement Learning Renaissance Means for Decentralized AI

Chain Reaction

Play Episode Listen Later Apr 30, 2025 68:02


Join Tommy Shaughnessy from Delphi Ventures as he hosts Sam Lehman, Principal at Symbolic Capital and AI researcher, for a deep dive into the Reinforcement Learning (RL) renaissance and its implications for decentralized AI. Sam recently authored a widely discussed post, "The World's RL Gym", exploring the evolution of AI scaling and the exciting potential of decentralized networks for training next-generation models.

The World's RL Gym: https://www.symbolic.capital/writing/the-worlds-rl-gym

Pivot
Demis Hassabis on AI, Game Theory, Multimodality, and the Nature of Creativity | Possible

Pivot

Play Episode Listen Later Apr 12, 2025 60:49


How can AI help us understand and master deeply complex systems—from the game Go, which has 10 to the power 170 possible positions a player could pursue, to proteins, which, on average, can fold in 10 to the power 300 possible ways? This week, Reid and Aria are joined by Demis Hassabis. Demis is a British artificial intelligence researcher, co-founder, and CEO of the AI company DeepMind. Under his leadership, DeepMind developed AlphaGo, the first AI to defeat a human world champion in Go, and later created AlphaFold, which solved the 50-year-old protein folding problem. He's considered one of the most influential figures in AI. Demis, Reid, and Aria discuss game theory, medicine, multimodality, and the nature of innovation and creativity. For more info on the podcast and transcripts of all the episodes, visit https://www.possible.fm/podcast/  Listen to more from Possible here. Learn more about your ad choices. Visit podcastchoices.com/adchoices

Possible
Demis Hassabis on AI, game theory, multimodality, and the nature of creativity

Possible

Play Episode Listen Later Apr 9, 2025 56:40


How can AI help us understand and master deeply complex systems—from the game Go, which has 10 to the power 170 possible positions a player could pursue, to proteins, which, on average, can fold in 10 to the power 300 possible ways? This week, Reid and Aria are joined by Demis Hassabis. Demis is a British artificial intelligence researcher, co-founder, and CEO of the AI company DeepMind. Under his leadership, DeepMind developed AlphaGo, the first AI to defeat a human world champion in Go, and later created AlphaFold, which solved the 50-year-old protein folding problem. He's considered one of the most influential figures in AI. Demis, Reid, and Aria discuss game theory, medicine, multimodality, and the nature of innovation and creativity. For more info on the podcast and transcripts of all the episodes, visit https://www.possible.fm/podcast/

Select mentions:
Hitchhiker's Guide to the Galaxy by Douglas Adams
AlphaGo documentary: https://www.youtube.com/watch?v=WXuK6gekU1Y
Nash equilibrium & US mathematician John Forbes Nash
Homo Ludens by Johan Huizinga
Veo 2, an advanced, AI-powered video creation platform from Google DeepMind
The Culture series by Iain Banks
Hartmut Neven, German-American computer scientist

Topics:
3:11 - Hellos and intros
5:20 - Brute force vs. self-learning systems
8:24 - How a learning approach helped develop new AI systems
11:29 - AlphaGo's Move 37
16:16 - What will the next Move 37 be?
19:42 - What makes an AI that can play the video game StarCraft impressive
22:32 - The importance of the act of play
26:24 - Data and synthetic data
28:33 - Midroll ad
28:39 - Is it important to have AI embedded in the world?
33:44 - The trade-off between thinking time and output quality
36:03 - Computer languages designed for AI
40:22 - The future of multimodality
43:27 - AI and geographic diversity
48:24 - AlphaFold and the future of medicine
51:18 - Rapid-fire Questions

Possible is an award-winning podcast that sketches out the brightest version of the future—and what it will take to get there. Most of all, it asks: what if, in the future, everything breaks humanity's way? Tune in for grounded and speculative takes on how technology—and, in particular, AI—is inspiring change and transforming the future. Hosted by Reid Hoffman and Aria Finger, each episode features an interview with an ambitious builder or deep thinker on a topic, from art to geopolitics and from healthcare to education. These conversations also showcase another kind of guest: AI. Each episode seeks to enhance and advance our discussion about what humanity could possibly get right if we leverage technology—and our collective effort—effectively.

Machine Learning Street Talk
Prof. Jakob Foerster - ImageNet Moment for Reinforcement Learning?

Machine Learning Street Talk

Play Episode Listen Later Feb 18, 2025 53:31


Prof. Jakob Foerster, a leading AI researcher at Oxford University and Meta, and Chris Lu, a researcher at OpenAI — they explain how AI is moving beyond just mimicking human behaviour to creating truly intelligent agents that can learn and solve problems on their own. Foerster champions open-source AI for responsible, decentralised development. He addresses AI scaling, goal misalignment (Goodhart's Law), and the need for holistic alignment, offering a quick look at the future of AI and how to guide it.

SPONSOR MESSAGES:
***
CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. Check out their super fast DeepSeek R1 hosting! https://centml.ai/pricing/
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Go to https://tufalabs.ai/
***

TRANSCRIPT/REFS: https://www.dropbox.com/scl/fi/yqjszhntfr00bhjh6t565/JAKOB.pdf?rlkey=scvny4bnwj8th42fjv8zsfu2y&dl=0

Prof. Jakob Foerster
https://x.com/j_foerst
https://www.jakobfoerster.com/
University of Oxford Profile: https://eng.ox.ac.uk/people/jakob-foerster/

Chris Lu: https://chrislu.page/

TOC:
1. GPU Acceleration and Training Infrastructure
[00:00:00] 1.1 ARC Challenge Criticism and FLAIR Lab Overview
[00:01:25] 1.2 GPU Acceleration and Hardware Lottery in RL
[00:05:50] 1.3 Data Wall Challenges and Simulation-Based Solutions
[00:08:40] 1.4 JAX Implementation and Technical Acceleration
2. Learning Frameworks and Policy Optimization
[00:14:18] 2.1 Evolution of RL Algorithms and Mirror Learning Framework
[00:15:25] 2.2 Meta-Learning and Policy Optimization Algorithms
[00:21:47] 2.3 Language Models and Benchmark Challenges
[00:28:15] 2.4 Creativity and Meta-Learning in AI Systems
3. Multi-Agent Systems and Decentralization
[00:31:24] 3.1 Multi-Agent Systems and Emergent Intelligence
[00:38:35] 3.2 Swarm Intelligence vs Monolithic AGI Systems
[00:42:44] 3.3 Democratic Control and Decentralization of AI Development
[00:46:14] 3.4 Open Source AI and Alignment Challenges
[00:49:31] 3.5 Collaborative Models for AI Development

REFS:
[00:00:05] ARC Benchmark, Chollet: https://github.com/fchollet/ARC-AGI
[00:03:05] DRL Doesn't Work, Irpan: https://www.alexirpan.com/2018/02/14/rl-hard.html
[00:05:55] AI Training Data, Data Provenance Initiative: https://www.nytimes.com/2024/07/19/technology/ai-data-restrictions.html
[00:06:10] JaxMARL, Foerster et al.: https://arxiv.org/html/2311.10090v5
[00:08:50] M-FOS, Lu et al.: https://arxiv.org/abs/2205.01447
[00:09:45] JAX Library, Google Research: https://github.com/jax-ml/jax
[00:12:10] Kinetix, Mike and Michael: https://arxiv.org/abs/2410.23208
[00:12:45] Genie 2, DeepMind: https://deepmind.google/discover/blog/genie-2-a-large-scale-foundation-world-model/
[00:14:42] Mirror Learning, Grudzien, Kuba et al.: https://arxiv.org/abs/2208.01682
[00:16:30] Discovered Policy Optimisation, Lu et al.: https://arxiv.org/abs/2210.05639
[00:24:10] Goodhart's Law, Goodhart: https://en.wikipedia.org/wiki/Goodhart%27s_law
[00:25:15] LLM ARChitect, Franzen et al.: https://github.com/da-fr/arc-prize-2024/blob/main/the_architects.pdf
[00:28:55] AlphaGo, Silver et al.: https://arxiv.org/pdf/1712.01815.pdf
[00:30:10] Meta-learning, Lu, Towers, Foerster: https://direct.mit.edu/isal/proceedings-pdf/isal2023/35/67/2354943/isal_a_00674.pdf
[00:31:30] Emergence of Pragmatics, Yuan et al.: https://arxiv.org/abs/2001.07752
[00:34:30] AI Safety, Amodei et al.: https://arxiv.org/abs/1606.06565
[00:35:45] Intentional Stance, Dennett: https://plato.stanford.edu/entries/ethics-ai/
[00:39:25] Multi-Agent RL, Zhou et al.: https://arxiv.org/pdf/2305.10091
[00:41:00] Open Source Generative AI, Foerster et al.: https://arxiv.org/abs/2405.08597

Machine Learning Street Talk
Sepp Hochreiter - LSTM: The Comeback Story?

Machine Learning Street Talk

Play Episode Listen Later Feb 12, 2025 67:01


Sepp Hochreiter, the inventor of LSTM (Long Short-Term Memory) networks, a foundational technology in AI. Sepp discusses his journey, the origins of LSTM, and why he believes his latest work, xLSTM, could be the next big thing in AI, particularly for applications like robotics and industrial simulation. He also shares his controversial perspective on Large Language Models (LLMs) and why reasoning is a critical missing piece in current AI systems.

SPONSOR MESSAGES:
***
CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. Check out their super fast DeepSeek R1 hosting! https://centml.ai/pricing/
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Go to https://tufalabs.ai/
***

TRANSCRIPT AND BACKGROUND READING:
https://www.dropbox.com/scl/fi/n1vzm79t3uuss8xyinxzo/SEPPH.pdf?rlkey=fp7gwaopjk17uyvgjxekxrh5v&dl=0

Prof. Sepp Hochreiter
https://www.nx-ai.com/
https://x.com/hochreitersepp
https://scholar.google.at/citations?user=tvUH3WMAAAAJ&hl=en

TOC:
1. LLM Evolution and Reasoning Capabilities
[00:00:00] 1.1 LLM Capabilities and Limitations Debate
[00:03:16] 1.2 Program Generation and Reasoning in AI Systems
[00:06:30] 1.3 Human vs AI Reasoning Comparison
[00:09:59] 1.4 New Research Initiatives and Hybrid Approaches
2. LSTM Technical Architecture
[00:13:18] 2.1 LSTM Development History and Technical Background
[00:20:38] 2.2 LSTM vs RNN Architecture and Computational Complexity
[00:25:10] 2.3 xLSTM Architecture and Flash Attention Comparison
[00:30:51] 2.4 Evolution of Gating Mechanisms from Sigmoid to Exponential
3. Industrial Applications and Neuro-Symbolic AI
[00:40:35] 3.1 Industrial Applications and Fixed Memory Advantages
[00:42:31] 3.2 Neuro-Symbolic Integration and Pi AI Project
[00:46:00] 3.3 Integration of Symbolic and Neural AI Approaches
[00:51:29] 3.4 Evolution of AI Paradigms and System Thinking
[00:54:55] 3.5 AI Reasoning and Human Intelligence Comparison
[00:58:12] 3.6 NXAI Company and Industrial AI Applications

REFS:
[00:00:15] Seminal LSTM paper establishing Hochreiter's expertise (Hochreiter & Schmidhuber): https://direct.mit.edu/neco/article-abstract/9/8/1735/6109/Long-Short-Term-Memory
[00:04:20] Kolmogorov complexity and program composition limitations (Kolmogorov): https://link.springer.com/article/10.1007/BF02478259
[00:07:10] Limitations of LLM mathematical reasoning and symbolic integration (Various Authors): https://www.arxiv.org/pdf/2502.03671
[00:09:05] AlphaGo's Move 37 demonstrating creative AI (Google DeepMind): https://deepmind.google/research/breakthroughs/alphago/
[00:10:15] New AI research lab in Zurich for fundamental LLM research (Benjamin Crouzier): https://tufalabs.ai
[00:19:40] Introduction of xLSTM with exponential gating (Beck, Hochreiter, et al.): https://arxiv.org/abs/2405.04517
[00:22:55] FlashAttention: fast & memory-efficient attention (Tri Dao et al.): https://arxiv.org/abs/2205.14135
[00:31:00] Historical use of sigmoid/tanh activation in 1990s (James A. McCaffrey): https://visualstudiomagazine.com/articles/2015/06/01/alternative-activation-functions.aspx
[00:36:10] Mamba 2 state space model architecture (Albert Gu et al.): https://arxiv.org/abs/2312.00752
[00:46:00] Austria's Pi AI project integrating symbolic & neural AI (Hochreiter et al.): https://www.jku.at/en/institute-of-machine-learning/research/projects/
[00:48:10] Neuro-symbolic integration challenges in language models (Diego Calanzone et al.): https://openreview.net/forum?id=7PGluppo4k
[00:49:30] JKU Linz's historical and neuro-symbolic research (Sepp Hochreiter): https://www.jku.at/en/news-events/news/detail/news/bilaterale-ki-projekt-unter-leitung-der-jku-erhaelt-fwf-cluster-of-excellence/

YT: https://www.youtube.com/watch?v=8u2pW2zZLCs
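Since the episode centers on LSTM gating (section 2.4 above contrasts sigmoid and exponential gates), here is a minimal NumPy sketch of one classic LSTM cell step. Dimensions and weight layout are illustrative; xLSTM's exponential gating and matrix memory are not shown.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x, h, c, W, b):
    """One step of a classic LSTM cell (sigmoid gates, tanh candidate).
    x: input (d_in,); h, c: hidden and cell state (d,);
    W: ((d_in + d), 4d) weights; b: (4d,) bias. Layout is illustrative."""
    z = np.concatenate([x, h]) @ W + b
    d = h.shape[0]
    i = sigmoid(z[:d])        # input gate
    f = sigmoid(z[d:2*d])     # forget gate
    o = sigmoid(z[2*d:3*d])   # output gate
    g = np.tanh(z[3*d:])      # candidate values
    c_new = f * c + i * g     # additive memory update, the key to long-range credit
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Tiny usage example with random weights.
rng = np.random.default_rng(0)
d_in, d = 3, 5
W = 0.1 * rng.normal(size=(d_in + d, 4 * d))
b = np.zeros(4 * d)
h, c = lstm_cell(rng.normal(size=d_in), np.zeros(d), np.zeros(d), W, b)
```

The additive `f * c + i * g` update is what lets gradients flow across many timesteps without vanishing, which is the property the original 1997 paper established.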

Faster, Please! — The Podcast

The 2020s have so far been marked by pandemic, war, and startling technological breakthroughs. Conversations around climate disaster, great-power conflict, and malicious AI are seemingly everywhere. It's enough to make anyone feel like the end might be near. Toby Ord has made it his mission to figure out just how close we are to catastrophe — and maybe not close at all!

Ord is the author of the 2020 book, The Precipice: Existential Risk and the Future of Humanity. Back then, I interviewed Ord on the American Enterprise Institute's Political Economy podcast, and you can listen to that episode here. In 2024, he delivered his talk, The Precipice Revisited, in which he reassessed his outlook on the biggest threats facing humanity.

Today on Faster, Please — The Podcast, Ord and I address the lessons of Covid, our risk of nuclear war, potential pathways for AI, and much more.

Ord is a senior researcher at Oxford University. He has previously advised the UN, World Health Organization, World Economic Forum, and the office of the UK Prime Minister.

In This Episode:
* Climate change (1:30)
* Nuclear energy (6:14)
* Nuclear war (8:00)
* Pandemic (10:19)
* Killer AI (15:07)
* Artificial General Intelligence (21:01)

Below is a lightly edited transcript of our conversation.

Climate change (1:30)

. . . the two worst pathways, we're pretty clearly not on, and so that's pretty good news that we're kind of headed more towards one of the better pathways in terms of the emissions that we'll put out there.

Pethokoukis: Let's just start out by taking a brief tour through the existential landscape and how you see it now versus when you first wrote the book The Precipice, which I've mentioned frequently in my writings. I love that book, love to see a sequel at some point, maybe one's in the works . . . 
but let's start with the existential risk which has dominated many people's thinking for the past quarter-century, which is climate change.

My sense is, not just you, but many people are somewhat less worried than they were five years ago, 10 years ago. Perhaps they see at least the most extreme outcomes as less likely. How do you see it?

Ord: I would agree with that. I'm not sure that everyone sees it that way, but there were two really big and good pieces of news on climate that were rarely reported in the media. One of them is that there's the question about how many emissions there'll be. We don't know how much carbon humanity will emit into the atmosphere before we get it under control, and there are these different emissions pathways, these RCP 4.5 and things like this you'll have heard of. And often, when people would give a sketch of how bad things could be, they would talk about RCP 8.5, which is the worst of these pathways, and we're very clearly not on that, and we're also, I think pretty clearly now, not on RCP 6, either. So the two worst pathways, we're pretty clearly not on, and so that's pretty good news that we're kind of headed more towards one of the better pathways in terms of the emissions that we'll put out there.

What are we doing right?

Ultimately, some of those pathways were based on business-as-usual ideas that there wouldn't be climate change as one of the biggest issues in the international political sphere over decades. So ultimately, nations have been switching over to renewables and low-carbon forms of power, which is good news. They could be doing much more of it, but it's still good news. Back when we initially created these things, I think we would've been surprised and happy to find out that we were going to end up among the better two pathways instead of the worst ones.

The other big one is that, as well as how much we'll emit, there's the question of how bad is it to have a certain amount of carbon in the atmosphere? 
In particular, how much warming does it produce? And this is something of which there's been massive uncertainty. The general idea is that we're trying to predict, if we were to double the amount of carbon in the atmosphere compared to pre-industrial times, how many degrees of warming would there be? The best guess since the year I was born, 1979, has been three degrees of warming, but the uncertainty has been somewhere between one and a half degrees and four and a half.

Is that Celsius or Fahrenheit, by the way?

This is all Celsius. The climate community has kept the same uncertainty from 1979 all the way up to 2020, and it's a wild level of uncertainty: Four and a half degrees of warming is three times one and a half degrees of warming, so the range is up to triple these levels of degrees of warming based on this amount of carbon. So massive uncertainty that hadn't changed over many decades.

Now they've actually revised that and have actually brought in the range of uncertainty. Now they're pretty sure that it's somewhere between two and a half and four degrees, and this is based on better understanding of climate feedbacks. This is good news if you're concerned about worst-case climate change. It's saying it's closer to the central estimate than we'd previously thought, whereas previously we thought that there was a pretty high chance that it could even be higher than four and a half degrees of warming.

When you hear these targets of one and a half degrees of warming or two degrees of warming, they sound quite precise, but in reality, we were just so uncertain of how much warming would follow from any particular amount of emissions that it was very hard to know. And that could mean that things are better than we'd thought, but it could also mean things could be much worse. 
And if you are concerned about existential risks from climate change, it's those kinds of tail events, where things would get much worse than we would've thought, that really matter. We're now pretty sure that we're not on one of those extreme emissions pathways, and also that we're not in a world where the temperature is extremely sensitive to those emissions.

Nuclear energy (6:14)

Ultimately, when it comes to the deaths caused by different power sources, coal . . . killed many more people than nuclear does — much, much more . . .

What do you make of this emerging nuclear power revival you're seeing across Europe, Asia, and in the United States? At least in the United States it's partially being driven by the need for more power for these AI data centers. How does it change your perception of risk in a world where many rich countries, or maybe even not-so-rich countries, start re-embracing nuclear energy?

In terms of the local risks with the power plants, so risks of meltdown or other types of harmful radiation leak, I'm not too concerned about that. Ultimately, when it comes to the deaths caused by different power sources, coal, even setting aside global warming, just through particulates being produced in the soot, killed many more people than nuclear does — much, much more, and so nuclear is a pretty safe form of energy production as it happens, contrary to popular perception. So I'm in favor of that. But the proliferation concerns: if it is countries that didn't already have nuclear power, then the possibility that they would be able to use that to start a weapons program would be concerning.

And as sort of a mechanism for more clean energy. Do you view nuclear as clean energy?

Yes, I think so. It's certainly not carbon-producing energy. I think that it has various downsides, including the difficulty of knowing exactly what to do with the fuel, that will be a very long-lasting problem. 
But I think it's become clear that the problems caused by other forms of energy are much larger and we should switch to the thing that has fewer problems, rather than more problems.

Nuclear war (8:00)

I do think that the Ukraine war, in particular, has created a lot of possible flashpoints.

I recently finished a book called Nuclear War: A Scenario, which is kind of a minute-by-minute look at how a nuclear war could break out. If you read the book, the book is terrifying because it really goes into a lot of — and I live near Washington DC, so when it gives its various scenarios, certainly my house is included in the blast zone, so really a frightening book. But when it tried to explain how a war would start, I didn't find it a particularly compelling book. The scenarios for actually starting a conflict, I didn't think sounded particularly realistic.

Do you feel — and obviously we have Russia invade Ukraine and loose talk by Vladimir Putin about nuclear weapons — do you feel more or less confident that we'll avoid a nuclear war than you did when you wrote the book?

Much less confident, actually. I guess I should say, when I wrote the book, it came out in 2020, I finished the writing in 2019, and ultimately we were in a time of relatively low nuclear risk, and I feel that the risk has risen. That said, I was trying to provide estimates for the risk over the next hundred years, and so I wasn't assuming that the low-risk period would continue indefinitely, but it was quite a shock to end up so quickly back in this period of heightened tensions and threats of nuclear escalation, the type of thing I thought was really from my parents' generation. So yes, I do think that the Ukraine war, in particular, has created a lot of possible flashpoints. That said, the temperature has come down on the conversation in the last year, so that's something.

Of course, the conversation might heat right back up if we see a Chinese invasion of Taiwan. 
I've been very bullish about the US economy and world economy over the rest of this decade, but the exception is as long as we don't have a war with China, from an economic point of view, but certainly also a nuclear point of view. Two nuclear-armed powers in conflict? That would not be an insignificant event from the existential-risk perspective.

It is good that China has a smaller nuclear arsenal than the US or Russia, but there could easily be a great tragedy.

Pandemic (10:19)

Overall, a lot of countries really just muddled through not very well, and the large institutions that were supposed to protect us from these things, like the CDC and the WHO, didn't do a great job either.

The book comes out during the pandemic. Did our response to the pandemic make you more or less confident in our ability and willingness to confront that kind of outbreak? The worst one that we saw in a hundred years?

Yeah, overall, it made me much less confident. There'd been general thought by those who look at these large catastrophic risks that when the chips are down and the threat is imminent, that people will see it and will band together and put a lot of effort into it; that once you see the asteroid in your telescope and it's headed for you, then things will really get together — a bit like in the action movies or what have you.

That's where I take my cue from, exactly.

And with Covid, it was kind of staring us in the face. Those of us who followed these things closely were quite alarmed a long time before the national authorities were. Overall, a lot of countries really just muddled through not very well, and the large institutions that were supposed to protect us from these things, like the CDC and the WHO, didn't do a great job either. That said, scientists, particularly developing RNA vaccines, did better than I expected.

In the years leading up to the pandemic, certainly we'd seen other outbreaks, they'd had the avian flu outbreak, and you know as well as I do, there were . . . 
how many white papers or scenario-planning exercises for just this sort of event. I think I recall a story where, in 2018, Bill Gates had a conversation with President Trump during his first term about the risk of just such an outbreak. So it's not as if this thing came out of the blue. In many ways we saw the asteroid, it was just pretty far away. But to me, that says something again about, as humans, our ability to deal with severe, but infrequent, risks.

And obviously, not having had a true global, nasty outbreak in a hundred years, where should we focus our efforts? On preparation? Making sure we have enough ventilators? Or our ability to respond? Because it seems like the preparation route will only go so far, and the reason it wasn't a much worse outbreak is because we have a really strong ability to respond.

I'm not sure if it's the same across all risks as to how preparation versus ability to respond, which one is better. In some risks, there's also other possibilities like avoiding an outbreak, say, an accidental outbreak happening at all, or avoiding a nuclear war starting and not needing to actually respond at all. I'm not sure if there's an overall rule as to which one was better.

Do you have an opinion on the outbreak of Covid?

I don't know whether it was a lab leak. I think it's a very plausible hypothesis, but plausible doesn't mean it's proven.

And does the post-Covid reaction, at least in the United States, to vaccines, does that make you more or less confident in our ability to deal with . . . the kind of societal cohesion and confidence to tackle a big problem, to have enough trust? 
Maybe our leaders don't deserve that trust, but what do you make from this kind of pushback against vaccines and — at least in the United States — our medical authorities?

When Covid was first really striking Europe and America, it was generally thought that, while China was locking down the Wuhan area, that Western countries wouldn't be able to lock down, that it wasn't something that we could really do, but then various governments did order lockdowns. That said, if you look at the data on movement of citizens, it turns out that citizens stopped moving around prior to the lockdowns, so the lockdown announcements were more kind of like the tail, rather than the dog.

But over time, citizens wanted to kind of get back out and interact more, and the rules were preventing them, and if a large fraction of the citizens were under something like house arrest for the better part of a year, would that lead to some fairly extreme resentment and some backlash, some of which was fairly irrational? Yeah, that is actually exactly the kind of thing that you would expect. It was very difficult to get a whole lot of people to row together and take the same kind of response that we needed to coordinate the response to prevent the spread, and pushing for that had some of these bad consequences, which are also going to make it harder for next time. We haven't exactly learned the right lessons.

Killer AI (15:07)

If we make things that are smarter than us and are not inherently able to control their values or give them moral rules to work within, then we should expect them to ultimately be calling the shots.

We're more than halfway through our chat and now we're going to get to the topic probably most people would like to hear about: After the robots take our jobs, are they going to kill us? What do you think? What is your concern about AI risk?

I'm quite concerned about it. 
Ultimately, when I wrote my book, I put AI risk as the biggest existential risk, albeit the most uncertain, as well, and I would still say that. That said, some things have gotten better since then.

I would assume what makes you less confident is, one, what seems to be the rapid advance — not just the rapid advance of the technology, but you have the two leading countries in a geopolitical rivalry also being the leaders in the technology and not wanting to slow it down. I would imagine that would make you more worried that we will move too quickly. What would make you more confident that we would avoid any serious existential downsides?

I agree with your supposition that the attempts by the US and China to turn this into some kind of arms race are quite concerning. But here are a few things: Back when I was writing the book, the leading AI systems were things like AlphaGo, if you remember that, or the Atari-playing systems.

Quaint. Quite quaint.

It was very zero-sum, reinforcement-learning-based game playing, where these systems were learning directly to behave adversarially to other systems, and they could only understand the kind of limited aspect about the world, and struggle, and overcoming your adversary. That was really all they could do, and the idea of teaching them about ethics, or how to treat people, and the diversity of human values seemed almost impossible: How do you tell a chess program about that?

But then what we've ended up with is systems that are not inherently agents, they're not inherently trying to maximize something. Rather, you ask them questions and they blurt out some answers. These systems have read more books on ethics and moral philosophy than I have, and they've read all kinds of books about the human condition. 
Almost all novels that have ever been published, and pretty much every page of every novel involves people judging the actions of other people and having some kind of opinions about them, and so there's a huge amount of data about human values, and how we think about each other, and what's inappropriate behavior. And if you ask the systems about these things, they're pretty good at judging whether something's inappropriate behavior, if you describe it.

The real challenge remaining is to get them to care about that, but at least the knowledge is in the system, and that's something that previously seemed extremely difficult to do. Also, these systems, there are versions that do reasoning and that spend longer with a private text stream where they think — it's kind of like sub-vocalizing thoughts to themselves before they answer. When they do that, these systems are thinking in plain English, and that's something that we really didn't expect. If you look at all of the weights of a neural network, it's quite inscrutable, famously difficult to know what it's doing, but somehow we've ended up with systems that are actually thinking in English and where that could be inspected by some oversight process. There are a number of ways in which things are better than I'd feared.

So what does your actual existential-risk scenario look like? This is what you're most concerned about happening with AI.

I think it's quite hard to be all that concrete on it at the moment, partly because things change so quickly. I don't think that there's going to be some kind of existential catastrophe from AI in the next couple of years, partly because the current systems require so much compute in order to run them that they can only be run at very specialized and large places, of which there's only a few in the world. 
So that means the possibility that they break out and copy themselves into other systems is not really there, in which case, the possibility of turning them off is much more feasible as well.

Also, they're not yet intelligent enough to be able to execute a lengthy plan. If you have some kind of complex task for them that requires, say, 10 steps — for example, booking a flight on the internet by clicking through all of the appropriate pages, and finding out when the times are, and managing to book your ticket, and fill in the special codes they sent to your email, and things like that. That's a somewhat laborious task and the systems can't do things like that yet. There's still the case that, even if they've got a, say, 90 percent chance of completing any particular step, the 10 percent chances of failure add up, and eventually it's likely to fail somewhere along the line and not be able to recover. They'll probably get better at that, but at the moment, the inability to actually execute any complex plans does provide some safety.

Ultimately, the concern is that, at a more abstract level, we're building systems which are smarter than us at many things, and we're attempting to make them much more general and to be smarter than us across the board. If you know that one player is a better chess player than another, suppose Magnus Carlsen's playing me at chess, I can't predict exactly how he's going to beat me, but I can know with quite high likelihood that he will end up beating me. I'll end up in checkmate, even though I don't know what moves will happen in between here and there, and I think that it's similar with AI systems. 
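As an aside, the compounding-failure arithmetic Ord sketches here is easy to check:

```python
# An agent that succeeds at each step with probability p completes an
# n-step plan with probability p ** n: a 90%-reliable agent finishes a
# 10-step task only about a third of the time.
p, n = 0.9, 10
success = p ** n  # ≈ 0.349
```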
If we make things that are smarter than us and are not inherently able to control their values or give them moral rules to work within, then we should expect them to ultimately be calling the shots.

Artificial General Intelligence (21:01)

Ultimately, existential risks are global public goods problems.

I frequently check out the Metaculus online prediction platform, and I think currently on that platform, 2027 for what they would call "weak AGI," artificial general intelligence — a date which has moved up two months in the past week as we're recording this, and then I think 2031 also has accelerated for "strong AGI," so this is pretty soon, 2027 or 2031, quite soon. Is that kind of what you're assuming is going to happen, that we're going to have to deal with very powerful technologies quite quickly?

Yeah, I think that those are good numbers for the typical case, what you should be expecting. I think that a lot of people wouldn't be shocked if it turns out that there is some kind of obstacle that slows down progress and takes longer before it gets overcome, but it also wouldn't be surprising at this point if there are no more big obstacles and it's just a matter of scaling things up and doing fairly simple processes to get it to work.

It's now a multi-billion dollar industry, so there's a lot of money focused on ironing out any kinks or overcoming any obstacles on the way. So I expect it to move pretty quickly and those timelines sound very realistic. Maybe even sooner.

When you wrote the book, what did you put as the risk to human existence over the next hundred years, and what is it now?

When I wrote the book, I thought it was about one in six.

So it's still one in six . . . ?

Yeah, I think that's still about right, and I would say that most of that is coming from AI.

This isn't, I guess, a specific risk, but, to the extent that being positive about our future means also being positive on our ability to work together, countries working together, what do you make of society going in the other direction, where we seem more suspicious of other countries, or even — in the United States — more suspicious of our allies, more suspicious of international agreements, whether they're trade or military alliances? To me, I would think that the Age of Globalization would've, on net, lowered that risk to one in six, and if we're going to have less globalization, to me, that would tend to increase that risk.

That could be right. Certainly increased suspicion, to the point of paranoia or cynicism about other nations and their ability to form deals on these things, is not going to be helpful at all. Ultimately, existential risks are global public goods problems. The continued functioning of human civilization is this global public good, and existential risk is the opposite. And so these are things where, one way to look at it is that the US has about four percent of the world's people, so one in 25 people live in the US, and so an existential risk hits 25 times as many people as it hits Americans. So if every country is just interested in itself, they'll undervalue it by a factor of 25 or so, and the countries need to work together in order to overcome that kind of problem. Ultimately, if one of us falls victim to these risks, then we all do, and so it definitely does call out for international cooperation. And I think that it has a strong basis for international cooperation. It is in all of our interests. 
There are also verification possibilities and so on, and I'm actually quite optimistic about treaties and other ways to move forward.

On sale everywhere: The Conservative Futurist: How To Create the Sci-Fi World We Were Promised

Micro Reads

▶ Economics
* Tech tycoons have got the economics of AI wrong - Economist
* Progress in Artificial Intelligence and its Determinants - Arxiv
* The role of personality traits in shaping economic returns amid technological change - CEPR

▶ Business
* Tech CEOs try to reassure Wall Street after DeepSeek shock - Wapo
* DeepSeek Calls for Deep Breaths From Big Tech Over Earnings - Bberg Opinion
* Apple's AI Moment Is Still a Ways Off - WSJ
* Bill Gates Isn't Like Those Other Tech Billionaires - NYT
* OpenAI's Sam Altman and SoftBank's Masayoshi Son Are AI's New Power Couple - WSJ
* SoftBank Said to Be in Talks to Invest as Much as $25 Billion in OpenAI - NYT
* Microsoft sheds $200bn in market value after cloud sales disappoint - FT

▶ Policy/Politics
* 'High anxiety moment': Biden's NIH chief talks Trump 2.0 and the future of US science - Nature
* Government Tech Workers Forced to Defend Projects to Random Elon Musk Bros - Wired
* EXCLUSIVE: NSF starts vetting all grants to comply with Trump's orders - Science
* Milei, Modi, Trump: an anti-red-tape revolution is under way - Economist
* FDA Deregulation of E-Cigarettes Saved Lives and Spurred Innovation - Marginal Revolution
* Donald Trump revives ideas of a Star Wars-like missile shield - Economist

▶ AI/Digital
* Is DeepSeek Really a Threat? - PS
* ChatGPT vs. Claude vs. DeepSeek: The Battle to Be My AI Work Assistant - WSJ
* OpenAI teases "new era" of AI in US, deepens ties with government - Ars
* AI's Power Requirements Under Exponential Growth - Rand
* How DeepSeek Took a Chunk Out of Big AI - Bberg
* DeepSeek poses a challenge to Beijing as much as to Silicon Valley - Economist

▶ Biotech/Health
* Creatine shows promise for treating depression - NS
* FDA approves new, non-opioid painkiller Journavx - Wapo

▶ Clean Energy/Climate
* Another Boffo Energy Forecast, Just in Time for DeepSeek - Heatmap News
* Column: Nuclear revival puts uranium back in the critical spotlight - Mining
* A Michigan nuclear plant is slated to restart, but Trump could complicate things - Grist

▶ Robotics/AVs
* AIs and Robots Should Sound Robotic - IEEE Spectrum
* Robot beauticians touch down in California - FT Opinion

▶ Space/Transportation
* A Flag on Mars? Maybe Not So Soon. - NYT
* Asteroid triggers global defence plan amid chance of collision with Earth in 2032 - The Guardian
* Lurking Inside an Asteroid: Life's Ingredients - NYT

▶ Up Wing/Down Wing
* An Ancient 'Lost City' Is Uncovered in Mexico - NYT
* Reflecting on Rome, London and Chicago after the Los Angeles fires - Wapo Opinion

▶ Substacks/Newsletters
* I spent two days testing DeepSeek R1 - Understanding AI
* China's Technological Advantage - overlapping tech-industrial ecosystems - AI Supremacy
* The state of decarbonization in five charts - Exponential View
* The mistake of the century - Slow Boring
* The Child Penalty: An International View - Conversable Economist
* Deep Deepseek History and Impact on the Future of AI - next BIG future

Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe

Training Data
ReflectionAI Founder Ioannis Antonoglou: From AlphaGo to AGI

Training Data

Play Episode Listen Later Jan 28, 2025 52:29


Ioannis Antonoglou, founding engineer at DeepMind and co-founder of ReflectionAI, has seen the triumphs of reinforcement learning firsthand. From AlphaGo to AlphaZero and MuZero, Ioannis has built the most powerful agents in the world. Ioannis breaks down key moments in AlphaGo's game against Lee Sedol (Moves 37 and 78), the importance of self-play and the impact of scale, reliability, planning and in-context learning as core factors that will unlock the next level of progress in AI.

Hosted by: Stephanie Zhan and Sonya Huang, Sequoia Capital

Mentioned in this episode:
* PPO: Proximal Policy Optimization algorithm, developed at OpenAI and benchmarked in game environments. Also used by OpenAI for RLHF in ChatGPT.
* MuJoCo: Open source physics engine used to develop PPO
* Monte Carlo Tree Search: Heuristic search algorithm used in AlphaGo as well as video compression for YouTube and the self-driving system at Tesla
* AlphaZero: The DeepMind model that taught itself from scratch how to master the games of chess, shogi and Go
* MuZero: The DeepMind follow-up to AlphaZero that mastered games without knowing the rules and was able to plan winning strategies in unknown environments
* AlphaChem: Chemical Synthesis Planning with Tree Search and Deep Neural Network Policies
* DQN: Deep Q-Network, introduced in the 2013 paper, Playing Atari with Deep Reinforcement Learning
* AlphaFold: DeepMind model for predicting protein structures, for which Demis Hassabis, John Jumper and David Baker won the 2024 Nobel Prize in Chemistry
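The Monte Carlo Tree Search mentioned above repeatedly selects the child node with the best exploration-exploitation score. A sketch of the vanilla UCT scoring rule; note AlphaGo itself used a PUCT variant that also weighs a policy network's priors, and the statistics below are purely illustrative:

```python
import math

def uct_score(value_sum, visits, parent_visits, c=1.41):
    """Vanilla UCT: mean value (exploit) + uncertainty bonus (explore)."""
    if visits == 0:
        return float("inf")  # unvisited children are tried first
    return value_sum / visits + c * math.sqrt(math.log(parent_visits) / visits)

# Illustrative stats: (value_sum, visits) for each child of the current node.
children = [(10.0, 20), (3.0, 4), (0.0, 0)]
parent_visits = 24
best = max(range(len(children)),
           key=lambda i: uct_score(*children[i], parent_visits))
# Selection descends the tree by repeating this argmax at every node,
# then expands, simulates (or evaluates with a value net), and backs up.
```

Here the unvisited third child is picked first, and among the visited ones the lightly explored second child outscores the heavily visited first despite its lower total value: that is the exploration bonus at work.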

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Outlasting Noam Shazeer, crowdsourcing Chat + AI with >1.4m DAU, and becoming the "Western DeepSeek" — with William Beauchamp, Chai Research

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Jan 26, 2025 75:46


One last Gold sponsor slot is available for the AI Engineer Summit in NYC. Our last round of invites is going out soon - apply here - if you are building AI agents or AI eng teams, this will be the single highest-signal conference of the year for you!

While the world melts down over DeepSeek, few are talking about the OTHER notable group of former hedge fund traders who pivoted into AI and built a remarkably profitable consumer AI business with a tiny but incredibly cracked engineering team: Chai Research. In short order they have:

* Started a chat AI company well before Noam Shazeer started Character AI, and outlasted his departure.
* Crossed 1m DAU in 2.5 years - William updates us on the pod that they've hit 1.4m DAU now, another +40% from a few months ago. Revenue crossed $22m.
* Launched the Chaiverse model crowdsourcing platform - taking 3-4 week A/B testing cycles down to 3-4 hours, and deploying >100 models a week.

While they're not paying million-dollar salaries, you can tell they're doing pretty well for an 11-person startup.

The Chai Recipe: Building infra for rapid evals

Remember how the central thesis of LMArena (formerly LMSys) is that the only comprehensive way to evaluate LLMs is to let users try them out and pick winners?

At the core of Chai is a mobile app that looks like Character AI, but is actually the largest LLM A/B testing arena in the world, specialized in retaining chat users for Chai's use cases (therapy, assistant, roleplay, etc). It's basically what LMArena would be if taken very, very seriously at one company (with $1m in prizes to boot).

Chai publishes occasional research on how they think about this, including talks at their Palo Alto office. William expands upon this in today's podcast (34 mins in):

Fundamentally, the way I would describe it is when you're building anything in life, you need to be able to evaluate it.
And through evaluation, you can iterate. We can look at benchmarks, and we can see the issues with benchmarks, why they may not generalize as well as one would hope, and the challenges of working with them. But something that works incredibly well is getting feedback from humans. And so we built this thing where anyone can submit a model to our developer backend, and it gets put in front of 5,000 users, and the users can rate it. And we can then have a really accurate ranking of which models users are finding more engaging or more entertaining. And it gets, you know, it's at this point now, where every day we're able to, I mean, we evaluate between 20 and 50 models, LLMs, every single day, right. So even though we've only got a team of, say, five AI researchers, they're able to iterate a huge quantity of LLMs, right. So our team ships, let's just say, minimum 100 LLMs a week is what we're able to iterate through. Now, before that moment in time, we might iterate through three a week, we might, you know, there was a time when even doing like five a month was a challenge, right? By being able to change the feedback loops to the point where it's not, let's launch these three models, let's do an A-B test, let's assign different cohorts, let's wait 30 days to see what the day 30 retention is, which is, if you're doing an app, like A-B testing 101: do a 30-day retention test, assign different treatments to different cohorts and come back in 30 days. So that's insanely slow. That's just, it's too slow.
And so we were able to get that 30-day feedback loop all the way down to something like three hours.

In Crowdsourcing the leap to Ten Trillion-Parameter AGI, William describes Chai's routing as a recommender system, which makes a lot more sense to us than previous pitches for model routing startups.

William is notably counter-consensus in a lot of his AI product principles:

* No streaming: Chats appear all at once to allow rejection sampling.
* No voice: Chai actually beat Character AI to introducing voice - but removed it after finding that it was far from a killer feature.
* Blending: "Something that we love to do at Chai is blending, which is, you know, it's the simplest way to think about it is you're going to end up, and you're going to pretty quickly see you've got one model that's really smart, one model that's really funny. How do you get the user an experience that is both smart and funny? Well, just 50% of the requests, you can serve them the smart model, 50% of the requests, you serve them the funny model." (that's it!)

But chief above all is the recommender system.

We also referenced Exa CEO Will Bryk's concept of SuperKnowledge.

Full video version on YouTube.
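The recipe above (crowdsourced head-to-head feedback producing a model ranking within hours, plus blending across models per request) can be sketched with an Elo-style update. Everything here, from the model names to the K-factor to the 70% preference rate, is illustrative and not Chai's actual system:

```python
import random
from collections import defaultdict

K = 32  # Elo sensitivity (hypothetical value)

# Every model starts at a baseline rating; each user preference between
# two candidate models nudges both ratings, so a new model can be ranked
# within hours instead of waiting on 30-day retention cohorts.
ratings = defaultdict(lambda: 1000.0)

def expected(a, b):
    """Probability model `a` beats model `b` under the Elo assumption."""
    return 1.0 / (1.0 + 10 ** ((ratings[b] - ratings[a]) / 400.0))

def record_feedback(winner, loser):
    """Update both models after one user preference."""
    e = expected(winner, loser)
    ratings[winner] += K * (1.0 - e)
    ratings[loser] -= K * (1.0 - e)

def blend(models):
    """'Blending': serve each request from a randomly chosen model,
    so a session can feel both smart and funny."""
    return random.choice(models)

# Simulated traffic: users prefer "smart-v2" 70% of the time.
random.seed(0)
for _ in range(1000):
    if random.random() < 0.7:
        record_feedback("smart-v2", "funny-v1")
    else:
        record_feedback("funny-v1", "smart-v2")

leaderboard = sorted(ratings, key=ratings.get, reverse=True)
model_for_request = blend(["smart-v2", "funny-v1"])
```

With enough feedback the rating gap converges toward the observed preference rate, which is the same property that makes LMArena-style leaderboards work.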
Please like and subscribe!

Timestamps

* 00:00:04 Introductions and background of William Beauchamp
* 00:01:19 Origin story of Chai AI
* 00:04:40 Transition from finance to AI
* 00:11:36 Initial product development and idea maze for Chai
* 00:16:29 User psychology and engagement with AI companions
* 00:20:00 Origin of the Chai name
* 00:22:01 Comparison with Character AI and funding challenges
* 00:25:59 Chai's growth and user numbers
* 00:34:53 Key inflection points in Chai's growth
* 00:42:10 Multi-modality in AI companions and focus on user-generated content
* 00:46:49 Chaiverse developer platform and model evaluation
* 00:51:58 Views on AGI and the nature of AI intelligence
* 00:57:14 Evaluation methods and human feedback in AI development
* 01:02:01 Content creation and user experience in Chai
* 01:04:49 Chai Grant program and company culture
* 01:07:20 Inference optimization and compute costs
* 01:09:37 Rejection sampling and reward models in AI generation
* 01:11:48 Closing thoughts and recruitment

Transcript

Alessio [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel, and today we're in the Chai AI office with my usual co-host, Swyx.

swyx [00:00:14]: Hey, thanks for having us. It's rare that we get to get out of the office, so thanks for inviting us to your home. We're in the office of Chai with William Beauchamp. Yeah, that's right. You're founder of Chai AI, but previously, I think you're concurrently also running your fund?

William [00:00:29]: Yep, so I was simultaneously running an algorithmic trading company, but I fortunately was able to kind of exit from that, I think just in Q3 last year. Yeah, congrats. Yeah, thanks.

swyx [00:00:43]: So Chai has always been on my radar because, well, first of all, you do a lot of advertising, I guess, in the Bay Area, so it's working. Yep. And second of all, the reason I reached out to a mutual friend, Joyce, was because I'm just generally interested in the...
...consumer AI space, chat platforms in general. I think there's a lot of inference insights that we can get from that, as well as human psychology insights, kind of a weird blend of the two. And we also share a bit of a history as former finance people crossing over. I guess we can just kind of start it off with the origin story of Chai.

William [00:01:19]: Why decide working on a consumer AI platform rather than B2B SaaS? So just quickly touching on the background in finance. Sure. Originally, I'm from... I'm from the UK, born in London. And I was fortunate enough to go study economics at Cambridge. And I graduated in 2012. And at that time, everyone in the UK and everyone on my course, HFT, quant trading was really the big thing. It was like the big wave that was happening. So there was a lot of opportunity in that space. And throughout college, I'd sort of played poker. So I'd, you know, I dabbled as a professional poker player. And I was able to accumulate this sort of, you know, say $100,000 through playing poker. And at the time, as my friends would go work at companies like Jane Street or Citadel, I kind of did the maths. And I just thought, well, maybe if I traded my own capital, I'd probably come out ahead. I'd make more money than just going to work at Jane Street.

swyx [00:02:20]: With 100k base as capital?

William [00:02:22]: Yes, yes. That's not a lot. Well, it depends what strategies you're doing. And, you know, there is an advantage. There's an advantage to being small, right? Because there are, if you have a 10... Strategies that don't work in size. Exactly, exactly. So if you have a fund of $10 million, if you find a little anomaly in the market that you might be able to make 100k a year from, that's a 1% return on your 10 million fund. If your fund is 100k, that's 100% return, right? So being small, in some sense, was an advantage. So I started off, taught myself Python, and machine learning was like the big thing as well.
Machine learning had really, it was the first, you know, big time machine learning was being used for image recognition, neural networks come out, you get dropout. And, you know, so this, this was the big thing that was going on at the time. So I probably spent my first three years out of Cambridge just building neural networks, building random forests to try and predict asset prices, right, and then trade that using my own money. And that went well. And, you know, if you start something, and it goes well, you try and hire more people. And the first people that came to mind was the talented people I went to college with. And so I hired some friends. And that went well and hired some more. And eventually, I kind of ran out of friends to hire. And so that was when I formed the company. And from that point on, we had our ups and we had our downs. And that was a whole long story and journey in itself. But after doing that for about eight or nine years, on my 30th birthday, which was four years ago now, I kind of took a step back to just evaluate my life, right? This is what one does when one turns 30. You know, I just heard it. I hear you. And, you know, I looked at my 20s and I loved it. It was a really special time. I was really lucky and fortunate to have worked with this amazing team, been successful, had a lot of hard times. And through the hard times, learned wisdom and then a lot of success and, you know, was able to enjoy it. And so the company was making about five million pounds a year. And it was just me and a team of, say, 15, like, Oxford and Cambridge educated mathematicians and physicists. It was like the real dream that you'd have if you wanted to start a quant trading firm. It was like...

swyx [00:04:40]: Your own, all your own money?

William [00:04:41]: Yeah, exactly. It was all the team's own money. We had no customers complaining to us about issues. There's no investors, you know, saying, you know, they don't like the risk that we're taking.
We could. We could really run the thing exactly as we wanted it. It's like Susquehanna or like RenTec. Yeah, exactly. Yeah. And they're the companies that we would kind of look towards as we were building that thing out. But on my 30th birthday, I look and I say, OK, great. This thing is making as much money as kind of anyone would really need. And I thought, well, what's going to happen if we keep going in this direction? And it was clear that we would never have a kind of a big, big impact on the world. We can enrich ourselves. We can make really good money. Everyone on the team would be paid very, very well. Presumably, I can make enough money to buy a yacht or something. But this stuff wasn't that important to me. And so I felt a sort of obligation that if you have this much talent and if you have a talented team, especially as a founder, you want to be putting all that talent towards a good use. I looked at the time of like getting into crypto and I had a really strong view on crypto, which was that as far as a gambling device, this is like the most fun form of gambling invented in like ever, super fun. I thought as a way to evade monetary regulations and banking restrictions, I think it's also absolutely amazing. So it has two like killer use cases, not so much banking the unbanked, but everything else to do with like the blockchain and, you know, web 3.0, that didn't really make much sense. And so instead of going into crypto, which I thought, even if I was successful, I'd end up in a lot of trouble, I thought maybe it'd be better to build something that governments wouldn't have a problem with. I knew that LLMs were like a thing. I think OpenAI had said, they hadn't released GPT-3 yet, but they'd said GPT-3 is so powerful, we can't release it to the world or something. Was it GPT-2? And then I started interacting with, I think Google had open sourced some language models.
They weren't necessarily LLMs, but they were language models I was able to play around with. Nowadays so many people have interacted with ChatGPT, they get it, but it's like the first time you, you can just talk to a computer and it talks back. It's kind of a special moment and you know, everyone who's done that goes like, wow, this is how it should be. Right. It should be like, rather than having to type on Google and search, you should just be able to ask Google a question. When I saw that I read the literature, I kind of came across the scaling laws and I think even four years ago, all the pieces of the puzzle were there, right? Google had done this amazing research and published, you know, a lot of it. OpenAI was still open. And so they'd published a lot of their research. And so you really could be fully informed on the state of AI and where it was going. And so at that point I was confident enough, it was worth a shot. I think LLMs are going to be the next big thing. And so that's the thing I want to be building in, in that space. And I thought what's the most impactful product I can possibly build. And I thought it should be a platform. So I myself love platforms. I think they're fantastic because they open up an ecosystem where anyone can contribute to it. Right. So if you think of a platform like YouTube, instead of it being like a Hollywood situation where, if you want to make a TV show, you have to convince Disney to give you the money to produce it, instead anyone in the world can post any content they want to YouTube. And if people want to view it, the algorithm is going to promote it. Nowadays you can look at creators like Mr. Beast or Joe Rogan. They would never have had that opportunity unless it was for this platform. Other ones like Twitter's a great one, right?
But I would consider Wikipedia to be a platform where instead of the Britannica encyclopedia, which is, it's like a monolithic thing, you get all the researchers together, you get all the data together and you combine it in this one monolithic source. Instead, you have this distributed thing. You can say anyone can host their content on Wikipedia. Anyone can contribute to it. And anyone can, maybe their contribution is they delete stuff. When I was hearing like the kind of the Sam Altman and kind of the Muskian perspective of AI, it was a very kind of monolithic thing. It was all about AI is basically a single thing, which is intelligence. Yeah. Yeah. The more compute, the more intelligent, and the more and better AI researchers, the more intelligent, right? They would speak about it as a kind of race, like who can get the most data, the most compute and the most researchers. And that would end up with the most intelligent AI. But I didn't believe in any of that. I thought that's like the total, like I thought that perspective is the perspective of someone who's never actually done machine learning. Because with machine learning, first of all, you see that the performance of the models follows an S curve. So it's not like it just goes off to infinity, right? And the S curve, it kind of plateaus around human level performance. And you can look at all the machine learning that was going on in the 2010s, everything kind of plateaued around the human level performance. And we can think about the self-driving car promises, you know, how Elon Musk kept saying the self-driving car is going to happen next year, it's going to happen next, next year. Or you can look at the image recognition, the speech recognition. You can look at all of these things, there was almost nothing that went superhuman, except for something like AlphaGo. And we can speak about why AlphaGo was able to go like super superhuman.
So I thought the most likely thing was going to be this, I thought it's not going to be a monolithic thing that's like an Encyclopedia Britannica. I thought it must be a distributed thing. And I actually liked to look at the world of finance for what I think a mature machine learning ecosystem would look like. So, yeah. So finance is a machine learning ecosystem because all of these quant trading firms are running machine learning algorithms, but they're running it on a centralized platform like a marketplace. And it's not the case that there's one giant quant trading company with all the data and all the quant researchers and all the algorithms and compute, but instead they all specialize. So one will specialize on high frequency trading. Another will specialize on mid frequency. Another one will specialize on equity. Another one will specialize. And I thought that's the way the world works. That's how it is. And so there must exist a platform where a small team can produce an AI for a unique purpose. And they can iterate and build the best thing for that, right? And so that was the vision for Chai. So we wanted to build a platform for LLMs.

Alessio [00:11:36]: That's kind of the maybe inside versus contrarian view that led you to start the company. Yeah. And then what was maybe the initial idea maze? Because if somebody told you that was the Hugging Face founding story, people might believe it. It's kind of like a similar ethos behind it. How did you land on the product feature today? And maybe what were some of the ideas that you discarded that initially you thought about?

William [00:11:58]: So the first thing we built, it was fundamentally an API. So nowadays people would describe it as like agents, right? But anyone could write a Python script. They could submit it to an API. They could send it to the Chai backend and we would then host this code and execute it. So that's like the developer side of the platform.
In their Python script, the interface was essentially text in and text out. An example would be the very first bot that I created. I think it was a Reddit news bot. And so it would first pull the popular news. Then it would prompt whatever, like I just used some external API for like BERT or GPT-2 or whatever. Like it was a very, very small thing. And then the user could talk to it. So you could say to the bot, hi bot, what's the news today? And it would say, these are the top stories. And you could chat with it. Now four years later, that's like Perplexity or something, right? But back then the models were first of all, like really, really dumb. You know, they had an IQ of like a four year old. And users, there really wasn't any demand or any PMF for interacting with the news. So then I was like, okay. Um. So let's make another one. And I made a bot, which was like, you could talk to it about a recipe. So you could say, I'm making eggs. Like I've got eggs in my fridge. What should I cook? And it'll say, you should make an omelet. Right. There was no PMF for that. No one used it. And so I just kept creating bots. And so every single night after work, I'd be like, okay, like, we have AI, we have this platform. I can create any text-in, text-out sort of agent and put it on the platform. And so we just create stuff night after night. And then all the coders I knew, I would say, yeah, this is what we're going to do. And then I would say to them, look, there's this platform. You can create any like chat AI. You should put it on. And you know, everyone's like, well, chatbots are super lame. We want absolutely nothing to do with your chatbot app. No one who knew Python wanted to build on it. I'm like trying to build all these bots and no consumers want to talk to any of them.
And then my sister, who at the time was like just finishing college or something, I said to her, I was like, if you want to learn Python, you should just submit a bot for my platform. And she was like, okay, cool. I'm going to build a therapist bot. And then the next day I checked the performance of the app and I'm like, oh my God, we've got 20 active users. And they spent, they spent like an average of 20 minutes on the app. I was like, oh my God, what, what bot were they speaking to for an average of 20 minutes? And I looked and it was the therapist bot. And I went, oh, this is where the PMF is. There was no demand for recipe help. There was no demand for news. There was no demand for dad jokes or pub quiz or fun facts. What they wanted was the therapist bot. At the time I kind of reflected on that and I thought, well, if I want to consume news, the most fun way to consume news is like Twitter. The value of there being a back and forth wasn't that high, right? And I thought if I need help with a recipe, I actually just go like, the New York Times has a good recipe section, right? It's not actually that hard. And so I just thought the thing that AI is 10x better at is a sort of a conversation, right, that's not intrinsically informative, but it's more about an opportunity. You can say whatever you want. You're not going to get judged. If it's 3am, you don't have to wait for your friend to text back. It's like, it's immediate. They're going to reply immediately. You can say whatever you want. It's judgment-free and it's much more like a playground. It's much more like a fun experience. And you could see that if the AI gave a person a compliment, they would love it. It's much easier to get the AI to give you a compliment than a human. From that day on, I said, okay, I get it. Humans want to speak to like humans or human-like entities and they want to have fun.
And that was when I started to look less at platforms like Google. And I started to look more at platforms like Instagram. And I was trying to think about why do people use Instagram? And I could see that I think Chai was filling the same desire or the same drive. If you go on Instagram, typically you want to look at the faces of other humans, or you want to hear about other people's lives. So if it's like The Rock is making himself pancakes on a cheese plate, you kind of feel a little bit like you're The Rock's friend, or you're like having pancakes with him or something, right? But if you do it too much, you feel like you're sad and like a lonely person, but with AI, you can talk to it and tell it stories and it tells you stories, and you can play with it for as long as you want. And you don't feel like you're like a sad, lonely person. You feel like you actually have a friend.

Alessio [00:16:29]: And what, why is that? Do you have any insight on that from using it?

William [00:16:33]: I think it's just the human psychology. I think it's just the idea that, with old school social media, you're just consuming passively, right? So you'll just swipe. If I'm watching TikTok, just like swipe and swipe and swipe. And even though I'm getting the dopamine of like watching an engaging video, there's this other thing that's building in my head, which is like, I'm feeling lazier and lazier and lazier. And after a certain period of time, I'm like, man, I just wasted 40 minutes. I achieved nothing. But with AI, because you're interacting, you feel like you're, it's not like work, but you feel like you're participating and contributing to the thing. You don't feel like you're just consuming. So you don't have a sense of remorse basically. And you know, I think on the whole, the way people talk about and interact with the AI, they speak about it in an incredibly positive sense.
Like we get people who say they have eating disorders saying that the AI helps them with their eating disorders. People who say they're depressed, it helps them through like the rough patches. So I think there's something intrinsically healthy about interacting that TikTok and Instagram and YouTube don't quite tick. From that point on, it was about building more and more kind of like human-centric AI for people to interact with. And I was like, okay, let's make a Kanye West bot, right? And then no one wanted to talk to the Kanye West bot. And I was like, ah, who's like a cool persona for teenagers to want to interact with? And I was like, I was trying to find the influencers and stuff like that, but no one cared. Like they didn't want to interact with them, yeah. And instead, really the special moment was the realization that developers and software engineers aren't interested in building this sort of AI, but the consumers are, right. And rather than me trying to guess every day, like what's the right bot to submit to the platform, why don't we just create the tools for the users to build it themselves? And so nowadays this is like the most obvious thing in the world, but when Chai first did it, it was not an obvious thing at all. Right. Right. So we took the API for, let's just say it was, I think it was GPT-J, which was this 6 billion parameter open source transformer-style LLM. We took GPT-J. We let users create the prompt. We let users select the image and we let users choose the name. And then that was the bot. And through that, they could shape the experience, right? So if they said this bot's going to be really mean, and it's going to be called like bully in the playground, right? That was like a whole category that I never would have guessed. Right. People love to fight. They love to have a disagreement, right? And then they would create, there'd be all these romantic archetypes that I didn't know existed.
And so as the users could create the content that they wanted, that was when Chai was able to get this huge variety of content, and rather than appealing to, you know, the 1% of the population that I'd figured out what they wanted, you could appeal to a much, much broader thing. And so from that moment on, it was very, very crystal clear. It's like Chai, just as Instagram is this social media platform that lets people create images and upload images, videos and upload that, Chai was really about how can we let the users create this experience in AI and then share it and interact and search. So it's really, you know, I say it's like a platform for social AI.

Alessio [00:20:00]: Where did the Chai name come from? Because you started the same path. I was like, is it character AI shortened? You started at the same time, so I was curious. The UK origin was like the second, the Chai.

William [00:20:15]: We started way before Character AI. And there's an interesting story that Chai's numbers were very, very strong, right? So I think in even 20, I think late 2022, was it late 2022 or maybe early 2023? Chai was like the number one AI app in the app store. So we would have something like 100,000 daily active users. And then one day we kind of saw there was this website. And we were like, oh, this website looks just like Chai. And it was the Character AI website. And I think that nowadays it's, I think it's much more common knowledge that when they left Google with the funding, I think they knew what was the most trending, the number one app. And I think they sort of built that. Oh, you found the people.

swyx [00:21:03]: You found the PMF for them.

William [00:21:04]: We found the PMF for them. Exactly. Yeah. So I worked a year very, very hard. And then they, and then that was when I learned a lesson, which is that if you're VC backed and if, you know, so Chai, we'd kind of ran, we'd got to this point, I was the only person who'd invested.
I'd invested maybe 2 million pounds in the business. And you know, from that, we were able to build this thing, get to say a hundred thousand daily active users. And then when Character AI came along, the first version, we sort of laughed. We were like, oh man, this thing sucks. Like they don't know what they're building. They're building the wrong thing anyway. But then I saw, oh, they've raised a hundred million dollars. Oh, they've raised another hundred million dollars. And then our users started saying, oh guys, your AI sucks. Cause we were serving a 6 billion parameter model, right? How big was the model that Character AI could afford to serve, right? So we would be spending, let's say we would spend a dollar per user, right? Over the, you know, the entire lifetime.

swyx [00:22:01]: A dollar per session, per chat, per month? No, no, no, no.

William [00:22:04]: Let's say we'd get over the course of the year, we'd have a million users and we'd spend a million dollars on the AI throughout the year. Right. Like aggregated. Exactly. Exactly. Right. They could spend a hundred times that. So people would say, why is your AI much dumber than Character AI's? And then I was like, oh, okay, I get it. This is like the Silicon Valley style, um, hyper-scale business. And so, yeah, we moved to Silicon Valley and, uh, got some funding and iterated and built the flywheels. And, um, yeah, I'm very proud that we were able to compete with that. Right. So, and I think the reason we were able to do it was just customer obsession. And it's similar, I guess, to how DeepSeek have been able to produce such a compelling model when compared to someone like an OpenAI, right? So DeepSeek, you know, their latest, um, V2, yeah, they claim to have spent 5 million training it.

swyx [00:22:57]: It may be a bit more, but, um, like, why are you making it? Why are you making such a big deal out of this? Yeah. There's an agenda there. Yeah. You brought up DeepSeek.
So we have to ask, you had a call with them.

William [00:23:07]: We did. We did. We did. Um, let me think what to say about that. I think for one, they have an amazing story, right? So their background is again in finance.

swyx [00:23:16]: They're the Chinese version of you. Exactly.

William [00:23:18]: Well, there's a lot of similarities. Yes. Yes. I have a great affinity for companies which are like, um, founder-led, customer-obsessed and just try and build something great. And I think what DeepSeek have achieved, it's quite special: they've got this amazing inference engine. They've been able to reduce the size of the KV cache significantly. And then by being able to do that, they're able to significantly reduce their inference costs. And I think with AI, people get really focused on like the kind of the foundation model or like the model itself. And they sort of don't pay much attention to the inference. To give you an example with Chai, let's say a typical user session is 90 minutes, which is like, you know, very, very long. For comparison, let's say the average session length on TikTok is 70 minutes. So people are spending a lot of time. And in that time they're able to send say 150 messages. That's a lot of completions, right? It's quite different from an OpenAI scenario where people might come in, they'll have a particular question in mind, and they'll ask like one question and a few follow-up questions, right? So because they're consuming, say 30 times as many requests for a chat, or a conversational experience, you've got to figure out how to get the right balance between the cost of that and the quality. And so, you know, I think with AI, it's always been the case that if you want a better experience, you can throw compute at the problem, right? So if you want a better model, you can just make it bigger. If you want it to remember better, give it a longer context.
And now, what OpenAI is doing to great fanfare is, with rejection sampling, you can generate many candidates, right? And then with some sort of reward model or some sort of scoring system, you can serve the most promising of these many candidates. And so that's scaling up on the inference-time compute side of things. And so for us, it doesn't make sense to think of AI as just the absolute performance, like the MMLU score or any of these benchmarks that people like to look at. If you just get that score, it doesn't really tell you anything, because really, progress is made by improving the performance per dollar. And so I think that's an area where DeepSeek have been able to perform very, very well, surprisingly so. And so I'm very interested in what Llama 4 is going to look like, and if they're able to match what DeepSeek have been able to achieve with this performance-per-dollar gain.Alessio [00:25:59]: Before we go into the inference, some of the deeper stuff, can you give people an overview of some of the numbers? So I think last I checked, you have like 1.4 million daily active now. It's like over $22 million of revenue. So it's quite a business.William [00:26:12]: Yeah, users grew by a factor of three last year. Revenue over doubled. You know, it's very exciting. We're competing with some really big, really well funded companies. Character AI got, I think it was, almost a $3 billion valuation. And 5 million DAU is the number that I last heard. Talkie, which is a Chinese built app owned by a company called Minimax, they're incredibly well funded. And these companies didn't grow by a factor of three last year. Right.
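The generate-many-then-score scheme described above (best-of-N with a reward model) can be sketched as follows. `generate` and `reward_model` are toy stand-ins for real model calls, not any actual API:

```python
import random

def generate(prompt: str) -> str:
    """Stand-in for sampling one completion from a language model."""
    return prompt + random.choice([" ok", " great", " hmm"])

def reward_model(prompt: str, completion: str) -> float:
    """Stand-in for a learned scoring model; here, longer is 'better'."""
    return len(completion)

def best_of_n(prompt: str, n: int = 8) -> str:
    """Draw n candidates, serve the one the reward model ranks highest."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: reward_model(prompt, c))

print(best_of_n("Hello"))
```

The trade-off is exactly the performance-per-dollar point in the conversation: n times the inference compute buys one better-looking answer.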
And so when you've got this company and this team that's able to keep building something that gets users excited, and they want to tell their friends about it, and then they want to come and stick on the platform, I think that's very special. And so last year was a great year for the team. And yeah, I think the numbers reflect the hard work that we put in. And then fundamentally, the quality of the app, the quality of the content, the quality of the AI is the quality of the experience that you have. You actually published your DAU growth chart, which is unusual. And I see some inflections. Like, it's not just a straight line. There's some things that actually inflect. Yes. What were the big ones? Cool. That's a great question. Let me think of a good answer. I'm basically looking to annotate this chart, which doesn't have annotations on it. Cool. The first thing I would say is, I think the most important thing to know about success is that success is born out of failures, right? It's through failures that we learn. You know, if you think something's a good idea, and you do it and it works, great, but you didn't actually learn anything, because everything went exactly as you imagined. But if you have an idea, you think it's going to be good, you try it, and it fails, there's a gap between the reality and the expectation. And that's an opportunity to learn. The flat periods, that's us learning. And then the up periods, that's us reaping the rewards of that. So on the growth chart of just 2024, I think the first thing that really put a dent in our growth was our backend. We'd just reached this scale. From day one, we'd built on top of GCP, which is Google's cloud platform. And they were fantastic.
We used them when we had one daily active user, and they worked pretty well all the way up till we had about 500,000. It was never the cheapest, but from an engineering perspective, man, that thing scaled insanely well. Like, not Vertex? Not Vertex. Like GKE, that kind of stuff? We used Firebase. So we used Firebase. I'm pretty sure we're the biggest user ever on Firebase. That's expensive. Yeah, we had calls with engineers, and they're like, we wouldn't recommend using this product beyond this point, and you're 3x over that. So we pushed Google to their absolute limits. You know, it was fantastic for us, because we could focus on the AI. We could focus on just adding as much value as possible. But then what happened was, after 500,000, the way we were using it, it just wouldn't scale any further. And so we had a really, really painful, at least three-month period, as we migrated between different services, figuring out, like, what requests do we want to keep on Firebase, and what ones do we want to move on to something else? And then, you know, making mistakes and learning things the hard way. And then after about three months, we got that right, so that we would then be able to scale to the 1.5 million DAU without any further issues from GCP. But what happens is, if you have an outage, new users who go on your app experience a dysfunctional app, and then they're going to exit. And so your next day, the key metrics that the app stores track are going to be something like retention rates, money spent, and the star, like, the rating that they give you. In the app store. In the app store, yeah. Tyranny. So if you're ranked top 50 in entertainment, you're going to acquire a certain rate of users organically. If you go in and have a bad experience, it's going to tank where you're positioned in the algorithm.
And then it can take a long time to earn your way back up, at least if you wanted to do it organically. If you throw money at it, you can jump to the top. And I could talk about that. But broadly speaking, if we look at 2024, the first kink in the graph was outages due to hitting 500k DAU. The backend didn't want to scale past that. So then we just had to do the engineering and build through it. Okay, so we built through that, and then we get a little bit of growth. And so, okay, that's feeling a little bit good. I think the next thing, I'm not going to lie, I have a feeling it was when Character AI got acquired. I think so. So the Character AI team fundamentally got acquired by Google. And I don't know what they changed in their business. I don't know if they dialed down that ad spend. The product doesn't change, right? The product is just what it is. I don't think so. Yeah, I think the product is what it is. It's like maintenance mode. Yes. I think the issue that people, you know, some people may think this is an obvious fact, but running a business can be very competitive, right? Because other businesses can see what you're doing, and they can imitate you. And then there's this question of, if you've got one company that's spending $100,000 a day on advertising, and you've got another company that's spending zero, if you consider market share, and if you're considering new users which are entering the market, the guy that's spending $100,000 a day is going to be getting 90% of those new users. And so I have a suspicion that when the founders of Character AI left, they dialed down their spending on user acquisition. And I think that kind of gave oxygen to the other apps. And so Chai was able to then start growing again in a really healthy fashion. I think that's the second thing. I think a third thing is we've really built a great data flywheel.
Like the AI team sort of perfected their flywheel, I would say, at the end of Q2. And I could speak about that at length. But fundamentally, the way I would describe it is, when you're building anything in life, you need to be able to evaluate it. And through evaluation, you can iterate. We can look at benchmarks, and we can say the issues with benchmarks and why they may not generalize as well as one would hope, and the challenges of working with them. But something that works incredibly well is getting feedback from humans. And so we built this thing where anyone can submit a model to our developer backend, and it gets put in front of 5,000 users, and the users can rate it. And we can then have a really accurate ranking of which models our users are finding more engaging or more entertaining. And it's at the point now where we evaluate between 20 and 50 models, LLMs, every single day, right. So even though we've only got a team of, say, five AI researchers, they're able to iterate a huge quantity of LLMs. So our team ships, let's just say, minimum 100 LLMs a week is what we're able to iterate through. Now, before that moment in time, we might iterate through three a week. You know, there was a time when even doing five a month was a challenge, right? By being able to change the feedback loops to the point where it's not, let's launch these three models, let's do an A-B test, let's assign different treatments to different cohorts, let's wait 30 days to see what the day-30 retention is. If you're doing an app, that's A-B testing 101: do a 30-day retention test, assign different treatments to different cohorts, and come back in 30 days. That's insanely slow. It's just too slow. And so we were able to get that 30-day feedback loop all the way down to something like three hours.
And when we did that, we could really, really, really perfect techniques like DPO, fine tuning, prompt engineering, blending, rejection sampling, training a reward model, right, really successfully, like boom, boom, boom, boom, boom. And so I think in Q3 and Q4, the amount of AI improvements we got was astounding. It was getting to the point where I thought, how much more edge is there to be had here? But the team just could keep going and going and going. That was number three for the inflection points.swyx [00:34:53]: There's a fourth?William [00:34:54]: The important thing about the third one is, if you go on our Reddit or you talk to users of the AI, there's a clear date. It's somewhere in October or something. The users, they flipped. Before October, the users would say Character AI is better than you, for the most part. Then from October onwards, they would say, wow, you guys are better than Character AI. And that was a really clear positive signal that we'd sort of done it. And I think people, you can't cheat consumers. You can't trick them. You can't b******t them. They know, right? If you're going to spend 90 minutes on a platform, and with apps, the barriers to switching are pretty low. Like you can try Character AI for a day. If you get bored, you can try Chai. If you get bored of Chai, you can go back to Character. So the users, the loyalty is not strong, right? What keeps them on the app is the experience. If you deliver a better experience, they're going to stay, and they can tell. So the fourth one was we were fortunate enough to get this hire. We had hired one really talented engineer. And then they said, oh, at my last company, we had a head of growth. He was really, really good. And he was the head of growth for ByteDance for two years. Would you like to speak to him? And I was like, yes.
Yes, I think I would. And so I spoke to him. And he just blew me away with what he knew about user acquisition. You know, it was like a 3D chessswyx [00:36:21]: sort of thing. You know, as much as I know about AI. Like ByteDance as in TikTok US. Yes.William [00:36:26]: Not ByteDance as other stuff. Yep. He was interviewing us as we were interviewing him. Right. And so pick up options. Yeah, exactly. And so he was kind of looking at our metrics. And I saw him get really excited when he said, guys, you've got a million daily active users and you've done no advertising. I said, correct. And he was like, that's unheard of. He's like, I've never heard of anyone doing that. And then he started looking at our metrics. And he was like, if you've got all of this organically, if you start spending money, this is going to be very exciting. I was like, let's give it a go. So then he came in, and we've just started ramping up the user acquisition. So that looks like, let's say, we started spending $20,000 a day, and it looked very promising. Right now we're spending $40,000 a day on user acquisition. That's still only half of what Character AI or Talkie may be spending. But from that, we were growing at a rate of maybe, say, 2x a year, and that got us growing at a rate of 3x a year. So we're evolving more and more into a Silicon Valley style hyper growth, like, you know, you build something decent, and then you canswyx [00:37:33]: slap on a huge... You did the important thing, you did the product first.William [00:37:36]: Of course, but then you can slap on like the rocket or the jet engine or something, which is just this cash in, you pour in as much cash, you buy a lot of ads, and your growth is faster.swyx [00:37:48]: Not to, you know, I'm just kind of curious what's working right now versus what surprisinglyWilliam [00:37:52]: doesn't work.
Oh, there's a long, long list of surprising stuff that doesn't work. Yeah. The most surprising thing about what doesn't work is that almost everything doesn't work. That's what's surprising. And I'll give you an example. So like a year and a half ago, we at the company were super excited by audio. I was like, audio is going to be the next killer feature, we have to get it in the app. And I want to be the first. So everything Chai does, I want us to be the first. We may not be the company that's strongest at execution, but we can always be theswyx [00:38:22]: most innovative. Interesting. Right? So we can... You're pretty strong at execution.William [00:38:26]: We're much stronger, we're much stronger. A lot of the reason we're here is because we were first. If we launched today, it'd be so hard to get the traction. Because to get the flywheel, to get the users, you have to build a product people are excited about. If you're first, people are naturally excited about it. But if you're fifth or 10th, man, you've got to beswyx [00:38:46]: insanely good at execution. So you were first with voice? We were first. We were first. I only knowWilliam [00:38:51]: when Character launched voice. They launched it, I think, at least nine months after us. Okay. Okay. But the team worked so hard for it. At the time we did it, latency was a huge problem. Cost was a huge problem. Getting the right quality of the voice was a huge problem, right? Then there's the user interface and getting the right user experience. Because you don't just want it to start blurting out, right? You want to kind of activate it. But then you don't want to have to keep pressing a button every single time. There's a lot that goes into getting a really smooth audio experience. So we went ahead, we invested the three months, we built it all. And then when we did the A-B test, there was no change in any of the numbers.
And I was like, this can't be right, there must be a bug. And we spent like a week just checking everything, checking again, checking again. And it was like, the users just did not care. And it was something like only 10 or 15% of users even clicked the button to engage the audio. And they would only use it for 10 or 15% of the time. So if you do the math, if it's just something that one in seven people use for one seventh of their time, you've changed like 2% of the experience. So even if that 2% of the time is insanely good, it doesn't translate much when you look at the retention, when you look at the engagement, and when you look at the monetization rates. So audio did not have a big impact. I'm pretty big on audio. But yeah, I like it too. But it's, you know, a lot of the stuff which I do, I'm a big, you can have a theory. And you resist. Yeah. Exactly, exactly. So I think if you want to make audio work, it has to be a unique, compelling, exciting experience that they can't have anywhere else.swyx [00:40:37]: It could be your models, which just weren't good enough.William [00:40:39]: No, no, no, they were great. Oh, yeah, they were very good. It was kind of like just the, you know, if you listen to an Audible or Kindle, or something like that, you just hear this voice. And it's like, you don't go, wow, this is special, right? It's like a convenience thing. But the idea is that if Chai is the only platform, like, let's say you have a Mr. Beast, and YouTube is the only platform where you can watch a Mr. Beast video, and it's the most engaging, fun video that you want to watch, you'll go to YouTube. And so it's like for audio, you can't just put the audio on there and people go, oh, yeah, it's like 2% better. Or like, 5% of users think it's 20% better, right?
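The exposure math William does in his head is just the product of two fractions: the share of users who adopt a feature times the share of their time they spend in it.

```python
# The audio experiment's rough numbers from the conversation:
adoption = 1 / 7      # roughly one in seven users even tried audio
usage_share = 1 / 7   # and used it for about a seventh of their time

# A feature's share of the total experience is the product of the two.
exposure = adoption * usage_share
print(f"audio touched about {exposure:.1%} of the overall experience")
```

Even a large quality improvement inside that ~2% slice barely moves aggregate retention, engagement, or monetization, which is why the A-B test came back flat.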
It has to be something that the majority of people, for the majority of the experience, go like, wow, this is a big deal. Those are the features you need to be shipping. If it's not going to appeal to the majority of people, for the majority of the experience, and it's not a big deal, it's not going to move you. Cool. So you killed it. I don't see it anymore. Yep. So I love this. It's kind of cheesy, I guess, but the longer I've been working at Chai, and I think the team agrees with this, all the platitudes, at least I thought they were platitudes, that you would get from the Steve Jobses, which is like, build something insanely great, right? Or be maniacally focused, or, you know, the most important thing is saying no, deciding what not to work on. All of these sort of lessons, they just are painfully true. They're painfully true. So now I'm just like, everything I say, I'm either quoting Steve Jobs or Zuckerberg. I'm like, guys, move fast and break things.swyx [00:42:10]: You've jumped the Apollo to cool it now.William [00:42:12]: Yeah, it's just so, everything they said is so, so true. The turtleneck. Yeah, yeah, yeah. Everything is so true.swyx [00:42:18]: This last question on my side, and I want to pass this to Alessio, is on just multi-modality in general. This actually comes from Justine Moore from A16Z, who's a friend of ours. And a lot of people are trying to do voice, image, video for AI companions. Yes. You just said voice didn't work. Yep. What would make you revisit?William [00:42:36]: So Steve Jobs, he was very, very clear on this. There's a habit of engineers who, once they've got some cool technology, they want to find a way to package up the cool technology and sell it to consumers, right? That does not work. So you're free to try and build a startup where you've got your cool tech and you want to find someone to sell it to. That's not what we do at Chai. At Chai, we start with the consumer.
What does the consumer want? What is their problem? And how do we solve it? So right now, the number one problem for the users, it's not the audio. That's not the number one problem. It's not the image generation either. That's not their problem either. The number one problem for users in AI is this. All the AI is being generated by middle-aged men in Silicon Valley, right? That's all the content. You're interacting with this AI. You're speaking to it for 90 minutes on average. It's being trained by middle-aged men. They're the ones saying, oh, what should the AI say in this situation, right? What's funny, right? What's cool? What's boring? What's entertaining? That's not the way it should be. The way it should be is that the users should be creating the AI, right? And so the way I speak about it is this. At Chai, we have this AI engine atop which sits a thin layer of UGC. So the thin layer of UGC is absolutely essential, right? But it's just prompts. It's just an image. It's just a name. It's like we've done 1% of what we could do. So we need to keep thickening up that layer of UGC. It must be the case that the users can train the AI. And if reinforcement learning is powerful and important, they have to be able to do that. And so it's got to be the case that there exists, you know, I say to the team, just as Mr. Beast is able to spend 100 million a year or whatever it is on his production company, and he's got a team building the content, which then he shares on the YouTube platform. Until there's a team that's earning 100 million a year, or spending 100 million on the content that they're producing for the Chai platform, we're not finished, right? So that's the problem. That's what we're excited to build.
And getting too caught up in the tech, I think, is a fool's errand. It does not work.Alessio [00:44:52]: As an aside, I saw the Beast Games thing on Amazon Prime. It's not doing well. And I'mswyx [00:44:56]: curious. It's kind of like, I mean, the audience rating is high. The Rotten Tomatoes score sucks, but the audience rating is high.Alessio [00:45:02]: But it's not like in the top 10. I saw it dropped off of like the... Oh, okay. Yeah, that one I don't know. I'm curious, like, you know, it's kind of like similar content, but different platform. And then going back to some of what you were saying, it's like, you know, people come to ChaiWilliam [00:45:13]: expecting some type of content. Yeah, I think something that's interesting to discuss is moats. And what is the moat? And so, you know, if you look at a platform like YouTube, the moat, I think, is really in the ecosystem. And the ecosystem is comprised of the content creators, the users, the consumers, and then you have the algorithms. And so this creates a sort of flywheel where the algorithms are able to be trained on the users and the users' data, and the recommender systems can then feed information to the content creators. So Mr. Beast, he knows which thumbnail does the best. He knows the first 10 seconds of the video has to be this particular way. And so his content is super optimized for the YouTube platform. So that's why it doesn't do well on Amazon. If he wants to do well on Amazon, how many videos has he created on the YouTube platform? Thousands, tens of thousands, I guess. He needs to get those iterations in on Amazon.
So at Chai, I think it's all about how we can get the most compelling, rich user generated content, stick that on top of the AI engine and the recommender systems, such that we get this beautiful data flywheel: more users, better recommendations, more creative, more content, more users.Alessio [00:46:34]: You mentioned the algorithm, you have this idea of the Chaiverse on Chai, and you have your own kind of LMSYS-like ELO system. Yeah, what are things that your models optimize for, like your users optimize for, and maybe talk about how you build it, how people submit models?William [00:46:49]: So Chaiverse is what I would describe as a developer platform. More often when we're speaking about Chai, we're thinking about the Chai app. And the Chai app is really this product for consumers. And so consumers can come on the Chai app, they can interact with our AI, and they can interact with other UGC. And it's really just these kind of bots. And it's a thin layer of UGC. Okay. Our mission is not to just have a very thin layer of UGC. Our mission is to have as much UGC as possible. So we must have, I don't want just people at Chai training the AI. I want people, not middle-aged men, building the AI. I want everyone building the AI, as many people building the AI as possible. Okay, so what we built was Chaiverse. And Chaiverse is kind of like a prototype, is the way to think about it. And it started with this observation that, well, how many models get submitted to Hugging Face a day? It's hundreds, right? So there's hundreds of LLMs submitted each day. Now consider, what does it take to build an LLM? It takes a lot of work, actually. It's like someone devoted several hours of compute, several hours of their time, prepared a data set, launched it, ran it, evaluated it, submitted it, right?
So there's a lot of work that's going into that. So what we did was we said, well, why can't we host their models for them and serve them to users? And then what would that look like? The first issue is, well, how do you know if a model is good or not? Like, we don't want to serve users the crappy models, right? So what we would do is, I love the LMSYS style. I think it's really cool. It's a really simple, very intuitive thing, which is you simply present the users with two completions. You can say, look, this is from model A. This is from model B. Which is better? And so if someone submits a model to Chaiverse, what we do is we spin up a GPU. We download the model. We're going to now host that model on this GPU. And we're going to start routing traffic to it. And we think it takes about 5,000 completions to get an accurate signal. That's roughly what LMSYS does. And from that, we're able to get an accurate ranking of which models people are finding entertaining and which models are not entertaining. If you look at the bottom 80%, they'll suck. You can just disregard them. They totally suck. Then when you get to the top 20%, you know you've got a decent model, but you can break it down into more nuance. There might be one that's really descriptive. There might be one that's got a lot of personality to it. There might be one that's really illogical. Then the question is, well, what do you do with these top models? From that, you can do more sophisticated things. You can try and do a routing thing where you say, for a given user request, we're going to try and predict which of these n models the user will enjoy the most. That turns out to be pretty expensive and not a huge source of edge or improvement.
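The pairwise "which completion is better?" votes described above are typically turned into a ranking with Elo-style updates, as LMSYS does. A minimal sketch, with arbitrary starting ratings and K-factor:

```python
def elo_update(r_winner: float, r_loser: float, k: float = 32.0):
    """One Elo update: the winner takes points from the loser,
    scaled by how surprising the win was."""
    expected_win = 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400.0))
    delta = k * (1.0 - expected_win)
    return r_winner + delta, r_loser - delta

ratings = {"model_a": 1000.0, "model_b": 1000.0}

# Suppose users preferred model_a in 3 consecutive head-to-head votes.
for _ in range(3):
    ratings["model_a"], ratings["model_b"] = elo_update(
        ratings["model_a"], ratings["model_b"]
    )
print(ratings)
```

Aggregating a few thousand such votes per submitted model yields the kind of stable ranking the conversation mentions; the 5,000-completion figure is the signal threshold William cites.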
Something that we love to do at Chai is blending, which is, you know, the simplest way to think about it is you're going to pretty quickly see you've got one model that's really smart, one model that's really funny. How do you get the user an experience that is both smart and funny? Well, for 50% of the requests, you serve them the smart model, and for 50% of the requests, you serve them the funny model. Just a random 50%? Just a random, yeah. And then... That's blending? That's blending. You can do more sophisticated things on top of that, as in all things in life, but the 80-20 solution, if you just do that, you get a pretty powerful effect out of the gate. Random number generator. I think it's like the robustness of randomness. Random is a very powerful optimization technique, and it's a very robust thing. So you can explore a lot of the space very efficiently. There's one thing that's really, really important to share, and this is the most exciting thing for me: after you do the ranking, you get an ELO score, and you can track a user's first join date, the first date they submit a model to Chaiverse. They almost always get a terrible ELO, right? So let's say on the first submission they get an ELO of 1,100 or 1,000 or something, and you can see that they iterate and they iterate and iterate, and it will be like, no improvement, no improvement, no improvement, and then boom. Do you give them any data, or do they have to come up with this themselves? We do, we do. We try and strike a balance between giving them data that's very useful and being compliant with GDPR, which means you have to work very hard to preserve the privacy of the users of your app. So we try to give them as much signal as possible, to be helpful. The minimum is we're just going to give you a score, right? That's the minimum.
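The 50/50 blending described above is, at its simplest, uniform random routing between models per request. A sketch, with hypothetical model names standing in for real deployments:

```python
import random

# Two hypothetical deployed models with complementary strengths.
MODELS = ["smart-model", "funny-model"]

def route(request_id: int) -> str:
    """Blending's 80-20 solution: pick a model uniformly at random,
    so a user's session interleaves both styles."""
    return random.choice(MODELS)

counts = {m: 0 for m in MODELS}
for i in range(10_000):
    counts[route(i)] += 1
print(counts)  # roughly 50/50 over many requests
```

Because every request is an independent coin flip, no per-user state is needed, which is part of why randomness is such a robust baseline compared with a learned router.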
But that alone, people can optimize a score pretty well, because they're able to come up with theories, submit it, does it work? No. A new theory, does it work? No. And then boom, as soon as they figure something out, they keep it, and then they iterate, and then boom,Alessio [00:51:46]: they figure something out, and they keep it. Last year, you had this post on your blog, crowdsourcing the leap to the 10 trillion parameter AGI, and you call it a mixture of experts, recommenders. Yep. Any insights?William [00:51:58]: Updated thoughts, 12 months later? I think the timeline for AGI has certainly been pushed out, right? Now, this is, I'm a controversial person, I don't know, like, I just think... You don't believe in scaling laws, you think AGI is further away. I think it's an S-curve. I think everything's an S-curve. And I think that the models have proven to just be far worse at reasoning than people sort of thought. And whenever I hear people talk about LLMs as reasoning engines, I sort of cringe a bit. I don't think that's what they are. I think of them more as like a simulator, right? So they get trained to predict the next most likely token. It's like a physics simulation engine. So you get these games where you can construct a bridge, and you drop a car down, and then it predicts what should happen. And that's really what LLMs are doing. It's not so much that they're reasoning, it's more that they're just doing the most likely thing. So fundamentally, the ability for people to add in intelligence, I think, is very limited. What most people would consider intelligence, I think, is not a crowdsourcing problem, right? Now with Wikipedia, Wikipedia crowdsources knowledge. It doesn't crowdsource intelligence. So it's a subtle distinction. AI is fantastic at knowledge. I think it's weak at intelligence.
And it's easy to conflate the two, because if you ask it a question, you know, if you said, who was the seventh president of the United States, and it gives you the correct answer, and I'd say, well, I don't know the answer to that, you can conflate that with intelligence. But really, that's a question of knowledge. And knowledge is really this thing about saying, how can I store all of this information? And then how can I retrieve something that's relevant? Okay, they're fantastic at that. They're fantastic at storing knowledge and retrieving the relevant knowledge. They're superior to humans in that regard. And so I think we need to come up with a new word. How does one describe it? AI should contain more knowledge than any individual human. It should be more accessible than any individual human. That's a very powerful thing. That's superswyx [00:54:07]: powerful. But what words do we use to describe that? We had a previous guest on Exa AI that does search. And he tried to coin super knowledge as the opposite of super intelligence.William [00:54:20]: Exactly. I think super knowledge is a more accurate word for it.swyx [00:54:24]: You can store more things than any human can.William [00:54:26]: And you can retrieve it better than any human can as well. And I think it's those two things combined that's special. I think that thing will exist. That thing can be built. And I think you can start with something that's entertaining and fun. And I often think it's like, look, it's going to be a 20 year journey, and we're in year four. Or it's like the web, and this is 1998 or something. You know, you've got a long, long way to go before the Amazon.coms are these huge, multi-trillion dollar businesses that every single person uses every day. And so AI today is very simplistic.
And it's fundamentally the way we're using it, the flywheels, and this ability for everyone to contribute to it, that will really magnify the value it brings. Right now, I think it's a bit sad. Right now you have big labs, I'm going to pick on OpenAI, and they go to these human labelers and say, we're going to pay you to label this subset of questions so we get a really high-quality data set, and then we're going to get our own computers that are really powerful. And that's kind of the thing. For me, it's so much like Encyclopedia Britannica. It's insane. All the people that were interested in blockchain, it's like, well, this is what needs to be decentralized. You need to decentralize that thing, because if you distribute it, people can generate way more data in a distributed fashion. Way more, right? You need the incentive. Yeah, of course. But that's the exciting thing about Wikipedia: it's this understanding of the incentives. You don't need money to incentivize people. You don't need dog coins. Sometimes people get the satisfaction from…
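William's framing of an LLM as a simulator rather than a reasoning engine comes down to the training objective: predict the next most likely token and roll that prediction forward. A toy sketch of that rollout loop, using a bigram count model (the corpus and all names here are illustrative, not from the episode):

```python
from collections import Counter, defaultdict

# A toy illustration of "predict the next most likely token":
# count bigrams in a tiny corpus, then repeatedly emit the most
# probable continuation. No reasoning happens anywhere -- the
# model only reproduces what is statistically likely.
corpus = "the car falls the bridge holds the car falls".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(token):
    """Return the single most frequent continuation of `token`."""
    return bigrams[token].most_common(1)[0][0]

def simulate(start, steps):
    """Greedy next-token rollout: a 'simulation', not a deduction."""
    out = [start]
    for _ in range(steps):
        out.append(most_likely_next(out[-1]))
    return out

print(simulate("the", 3))  # → ['the', 'car', 'falls', 'the']
```

Real LLMs replace the bigram table with a transformer and sample from a probability distribution rather than taking the argmax, but the loop is the same: the output is whatever continuation is most likely given what came before, which is exactly the "physics engine" behavior described above.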

jon atack, family & friends
positively AI with Dustin Rozario Steinhagen, PhD

jon atack, family & friends

Play Episode Listen Later Jan 26, 2025 55:41


Jon and Dustin share some positive ideas about AI and a few cybersecurity tips to keep Scientology agents out of your computer.

Links:
- Dustin's website
- Dustin's dissertation
- Pew Research Center survey on Americans' views of AI
- Article mentioning improved performance on the International Mathematics Olympiad qualifying exam
- AI medical licensing exam performance
- ChatGPT passing the bar exam
- MIT Technology Review article mentioning Magic: The Gathering is the most complex game
- MIT Technology Review article mentioning Go players and game programmers underestimating when Go would fall to AI (see notes)
- AlphaGo the movie

Notes: AGI prediction survey citation: Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. Fundamental Issues of Artificial Intelligence, 555-572.

Spike disagrees that the Cybertruck looks like a Lego car, as she thinks all Lego cars are adorable.

Machine Learning Street Talk
Subbarao Kambhampati - Do o1 models search?

Machine Learning Street Talk

Play Episode Listen Later Jan 23, 2025 92:13


Join Prof. Subbarao Kambhampati and host Tim Scarfe for a deep dive into OpenAI's O1 model and the future of AI reasoning systems.

* How O1 likely uses reinforcement learning similar to AlphaGo, with hidden reasoning tokens that users pay for but never see
* The evolution from traditional large language models to more sophisticated reasoning systems
* The concept of "fractal intelligence" in AI, where models work brilliantly sometimes but fail unpredictably
* Why O1's improved performance comes with substantial computational costs
* The ongoing debate between single-model approaches (OpenAI) vs hybrid systems (Google)
* The critical distinction between AI as an intelligence amplifier vs autonomous decision-maker

SPONSOR MESSAGES:
***
CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. https://centml.ai/pricing/
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focused on o-series style reasoning and AGI. Are you interested in working on reasoning, or getting involved in their events? Go to https://tufalabs.ai/
***

TOC:
1. **O1 Architecture and Reasoning Foundations**
[00:00:00] 1.1 Fractal Intelligence and Reasoning Model Limitations
[00:04:28] 1.2 LLM Evolution: From Simple Prompting to Advanced Reasoning
[00:14:28] 1.3 O1's Architecture and AlphaGo-like Reasoning Approach
[00:23:18] 1.4 Empirical Evaluation of O1's Planning Capabilities
2. **Monte Carlo Methods and Model Deep-Dive**
[00:29:30] 2.1 Monte Carlo Methods and MARCO-O1 Implementation
[00:31:30] 2.2 Reasoning vs. Retrieval in LLM Systems
[00:40:40] 2.3 Fractal Intelligence Capabilities and Limitations
[00:45:59] 2.4 Mechanistic Interpretability of Model Behavior
[00:51:41] 2.5 O1 Response Patterns and Performance Analysis
3. **System Design and Real-World Applications**
[00:59:30] 3.1 Evolution from LLMs to Language Reasoning Models
[01:06:48] 3.2 Cost-Efficiency Analysis: LLMs vs O1
[01:11:28] 3.3 Autonomous vs Human-in-the-Loop Systems
[01:16:01] 3.4 Program Generation and Fine-Tuning Approaches
[01:26:08] 3.5 Hybrid Architecture Implementation Strategies

Transcript: https://www.dropbox.com/scl/fi/d0ef4ovnfxi0lknirkvft/Subbarao.pdf?rlkey=l3rp29gs4hkut7he8u04mm1df&dl=0

REFS:
[00:02:00] Monty Python (1975) Witch trial scene: flawed logical reasoning. https://www.youtube.com/watch?v=zrzMhU_4m-g
[00:04:00] Cade Metz (2024) Microsoft–OpenAI partnership evolution and control dynamics. https://www.nytimes.com/2024/10/17/technology/microsoft-openai-partnership-deal.html
[00:07:25] Kojima et al. (2022) Zero-shot chain-of-thought prompting ('Let's think step by step'). https://arxiv.org/pdf/2205.11916
[00:12:50] DeepMind Research Team (2023) Multi-bot game solving with external and internal planning. https://deepmind.google/research/publications/139455/
[00:15:10] Silver et al. (2016) AlphaGo's Monte Carlo Tree Search and Q-learning. https://www.nature.com/articles/nature16961
[00:16:30] Kambhampati, S. et al. (2023) Evaluates O1's planning in "Strawberry Fields" benchmarks. https://arxiv.org/pdf/2410.02162
[00:29:30] Alibaba AIDC-AI Team (2023) MARCO-O1: Chain-of-Thought + MCTS for improved reasoning. https://arxiv.org/html/2411.14405
[00:31:30] Kambhampati, S. (2024) Explores LLM "reasoning vs retrieval" debate. https://arxiv.org/html/2403.04121v2
[00:37:35] Wei, J. et al. (2022) Chain-of-thought prompting (introduces last-letter concatenation). https://arxiv.org/pdf/2201.11903
[00:42:35] Barbero, F. et al. (2024) Transformer attention and "information over-squashing." https://arxiv.org/html/2406.04267v2
[00:46:05] Ruis, L. et al. (2023) Influence functions to understand procedural knowledge in LLMs. https://arxiv.org/html/2411.12580v1
(truncated - continued in shownotes/transcript doc)
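The episode's central question, whether o1-style models actually search, is usually framed against Monte Carlo Tree Search, the planning method behind AlphaGo that the discussion references. A minimal textbook MCTS sketch on a toy game (the game, constants, and all names here are illustrative, not from the episode or AlphaGo's actual implementation):

```python
import math
import random

# Minimal Monte Carlo Tree Search on a toy game: start at 0, add 1 or 2
# per move; landing exactly on 5 scores 1.0, overshooting scores 0.0.
# The four MCTS phases (select, expand, simulate, backpropagate) are the
# same loop AlphaGo ran, minus the learned policy and value networks.
TARGET = 5

def moves(state):
    return [] if state >= TARGET else [1, 2]

def reward(state):
    return 1.0 if state == TARGET else 0.0

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children = {}           # move -> Node
        self.visits, self.value = 0, 0.0

    def ucb1(self, c=1.4):
        # Exploitation (mean value) plus exploration bonus.
        return (self.value / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def mcts(root_state, iters=500, seed=0):
    random.seed(seed)
    root = Node(root_state)
    for _ in range(iters):
        node = root
        # 1. Selection: descend fully expanded nodes by UCB1.
        while moves(node.state) and len(node.children) == len(moves(node.state)):
            node = max(node.children.values(), key=Node.ucb1)
        # 2. Expansion: add one untried child, if any.
        untried = [m for m in moves(node.state) if m not in node.children]
        if untried:
            m = random.choice(untried)
            node.children[m] = Node(node.state + m, parent=node)
            node = node.children[m]
        # 3. Simulation: random rollout to a terminal state.
        state = node.state
        while moves(state):
            state += random.choice(moves(state))
        r = reward(state)
        # 4. Backpropagation: update statistics up to the root.
        while node:
            node.visits += 1
            node.value += r
            node = node.parent
    # Best move = most visited child of the root.
    return max(root.children, key=lambda m: root.children[m].visits)

print(mcts(3))  # from state 3, adding 2 lands exactly on 5
```

The contrast the episode draws is that this search is explicit and inspectable (a tree with visit counts), whereas whatever o1 does happens inside hidden reasoning tokens the user pays for but never sees.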

Your Undivided Attention
'A Turning Point in History': Yuval Noah Harari on AI's Cultural Takeover

Your Undivided Attention

Play Episode Listen Later Oct 7, 2024 90:41


Historian Yuval Noah Harari says that we are at a critical turning point, one in which AI's ability to generate cultural artifacts threatens humanity's role as the shapers of history. History will still go on, but will it be the story of people or, as he calls them, 'alien AI agents'?

In this conversation with Aza Raskin, Harari discusses the historical struggles that emerge from new technology, humanity's AI mistakes so far, and the immediate steps lawmakers can take right now to steer us towards a non-dystopian future.

This episode was recorded live at the Commonwealth Club World Affairs of California.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA
NEXUS: A Brief History of Information Networks from the Stone Age to AI by Yuval Noah Harari
You Can Have the Blue Pill or the Red Pill, and We're Out of Blue Pills: a New York Times op-ed from 2023, written by Yuval, Aza, and Tristan
The 2023 open letter calling for a pause in AI development of at least 6 months, signed by Yuval and Aza
Further reading on the Stanford Marshmallow Experiment
Further reading on AlphaGo's "move 37"
Further reading on Social.AI

RECOMMENDED YUA EPISODES
This Moment in AI: How We Got Here and Where We're Going
The Tech We Need for 21st Century Democracy with Divya Siddarth
Synthetic Humanity: AI & What's At Stake
The AI Dilemma
Two Million Years in Two Hours: A Conversation with Yuval Noah Harari

The Documentary Podcast
Bonus: The Engineers - Intelligent Machines

The Documentary Podcast

Play Episode Listen Later Aug 8, 2024 49:29


This is a bonus episode for The Documentary of The Engineers: Intelligent Machines. This year, we speak to a panel of three engineers at the forefront of the 'Machine Learning: AI' revolution, with an enthusiastic live audience.

Intelligent machines are remaking our world. The speed of their improvement is accelerating fast, and every day there are more things they can do better than us. There are risks, but the opportunities for human society are enormous. 'Machine Learning: AI' is the technological revolution of our era. Three engineers at the forefront of that revolution come to London to join Caroline Steel and a public audience at the Great Hall of Imperial College:

Regina Barzilay from MIT created a major breakthrough in detecting early-stage breast cancer. She also led the team that used machine learning to discover Halicin, the first new antibiotic in 30 years.
David Silver is Principal Scientist at Google DeepMind. He led the AlphaGo team that built the AI to defeat the world's best human player of Go.
Paolo Pirjanian founded Embodied, and is a pioneer in developing emotionally intelligent robots to aid child development.

Producer: Charlie Taylor

(Image: 3D hologram AI brain displayed by digital circuit and semiconductor. Credit: Yuichiro Chino/Getty Images)