Podcasts about DeepMind

  • 1,100 podcasts
  • 2,649 episodes
  • 43m average duration
  • 5 new episodes weekly
  • Latest: Feb 15, 2026
DeepMind popularity chart, 2019–2026

Best podcasts about DeepMind


Latest podcast episodes about DeepMind

Wisdom of Crowds
Just How Worried Should We Be About AI?

Wisdom of Crowds

Play Episode Listen Later Feb 15, 2026 71:09


Damir and Sam are joined by Cambridge philosopher Henry Shevlin of the Leverhulme Centre for the Future of Intelligence for a raucous and rambling conversation about the state of artificial intelligence. Is it about to become conscious, take all of our jobs, and destroy the world? Or is all this industry hype?

Henry opens the conversation by asserting that AI already has a kind of "agency," even if it's not yet the full kind that some skeptics are looking for. Damir and Sam push back on AI's reliability and proclivity for hallucinations, and wonder whether AI can create anything genuinely novel or creative.

The conversation turns to autonomy and risk. Can "artificial superintelligence" ever be reached, asks Sam? Henry points to AI coding agents already improving themselves. Damir objects to anthropomorphizing AI and prefers treating these systems as powerful tools capable of runaway failures, but nothing more. Henry disagrees, ending the conversation with a plea for AIs to be given consideration as moral entities at some point.

Required Reading:
* "Superintelligence: Paths, Dangers, Strategies," by Nick Bostrom (Amazon).
* The Creative Mind: Myths and Mechanisms, by Margaret Boden (Amazon).
* "Disambiguating Anthropomorphism and Anthropomimesis in Human-Robot Interaction," by Minja Axelsson and Henry Shevlin (arxiv.org).
* "Real Patterns," by Daniel C. Dennett (Rutgers).
* A relevant tweet by Séb Krier (X).
* AlphaGo Move 37 analysis (DeepMind).
* Conway's Game of Life (Wikipedia).

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit wisdomofcrowds.live/subscribe

Computer America
DeepMind AI Decodes DNA, Carbon Robotics Large Plant Model, MIT 3D Print Home Beams w/ Ralph Bond

Computer America

Play Episode Listen Later Feb 13, 2026 39:46


Show Notes 2/13/2026

AI model from Google's DeepMind reads recipe for life in DNA
Source: BBC
Link: https://www.bbc.com/news/articles/c39428dv18yo

Carbon Robotics Launches the World's First-Ever Large Plant Model
Source: BusinessWire.com
Link: https://www.businesswire.com/news/home/20260202630325/en/Carbon-Robotics-Launches-the-Worlds-First-Ever-Large-Plant-Model

Your future home might be framed with printed plastic
Source: MIT News
Link: https://news.mit.edu/2026/your-future-home-might-be-framed-with-printed-plastic-0203

A new scan lets scientists see inside the human body in 3D color
Source: ScienceDaily.com
Link: https://www.sciencedaily.com/releases/2026/02/260204121550.htm

3D-printed passive cooling system cools data centers without fans or pumps
Source: Interesting Engineering
Link: https://interestingengineering.com/ai-robotics/3d-printed-passive-cooling-data-centers

How we're helping preserve the genetic information of endangered species with AI
Source: Google's The Keyword Blog
Link: https://blog.google/innovation-and-ai/technology/ai/ai-to-preserve-endangered-species/

The Navy's Batwing Fighter Jet Promises Mach 4 Speed… But It's Still Just a Concept
Source: YD Design
Link: https://www.yankodesign.com/2026/02/06/the-navys-batwing-fighter-jet-promises-mach-4-speed-but-its-still-just-a-concept/

New study of chemical reactions in space 'could impact the [theories of the] origin of life in ways we hadn't thought of'
Source: LiveScience.com
Link: https://www.livescience.com/chemistry/complex-building-blocks-of-life-can-form-on-space-dust-offering-new-clues-to-the-origins-of-life

Coffee Break: Señal y Ruido
Ep455_B: 11F; Dinosaurios; Sag A*; Genes y Psiquiatría; AlphaGenome; ADN Origami

Coffee Break: Señal y Ruido

Play Episode Listen Later Feb 12, 2026 149:41


- Sag A* as dark matter instead of a black hole (00:00)
- 11F: International Day of Women and Girls in Science (36:00)
- A genetic map of psychiatric disorders (1:05:00)
- DeepMind's AlphaGenome promises to revolutionize medicine (1:21:00)
- DNA origami vaccines (1:48:00)

AI Inside
How Smart Are Today's Coding Agents?

AI Inside

Play Episode Listen Later Feb 12, 2026 76:50


This episode is sponsored by Airia. Get started today at airia.com.

Jason Howell and Jeff Jarvis break down Claude Opus 4.6's new role as a financial-research engine, discuss how GPT-5.3 Codex is reshaping full-stack coding workflows, and explore Matt Shumer's warning that AI agents will touch nearly every job in just a few years. We unpack how Super Bowl AI ads are reframing public perception, examine Waymo's use of DeepMind's Genie 3 world model to train autonomous vehicles on rare edge-case scenarios, and also cover OpenAI's ad-backed free ChatGPT tiers, HBR's findings on how AI expands workloads instead of lightening them, and new evidence that AI mislabels medical conditions in real-world settings.

Note: Time codes are subject to change depending on dynamic ad insertion by the distributor.

Chapters:
0:00 - Start
0:01:59 - Anthropic Releases New Model That's Adept at Financial Research; Anthropic releases Opus 4.6 with new 'agent teams'
0:10:00 - Introducing GPT-5.3-Codex
0:14:42 - Something Big Is Happening
0:33:25 - Can these Super Bowl ads make Americans love AI?
0:36:52 - Dunkin' Donuts digitally de-aged '90s actors and I'm terrified
0:39:47 - AI.com bought by Crypto.com founder for $70mn in biggest-ever website name deal
0:42:11 - OpenAI begins testing ads in ChatGPT, draws early attention from advertisers and analysts
0:48:27 - Waymo Says Genie 3 Simulations Can Help Boost Robotaxi Rollout
0:53:30 - AI Doesn't Reduce Work—It Intensifies It
1:02:08 - As AI enters the operating room, reports arise of botched surgeries and misidentified body parts
1:04:48 - Meta is giving its AI slop feed an app of its own
1:06:53 - Google goes long with 100-year bond
1:09:18 - OpenAI Abandons 'io' Branding for Its AI Hardware

Learn more about your ad choices. Visit megaphone.fm/adchoices

Moneycontrol Podcast
5036: IT stocks selloff, Deepmind VP exclusive & UP's budget boost ahead of polls

Moneycontrol Podcast

Play Episode Listen Later Feb 12, 2026 4:24


In this edition of Moneycontrol Editor's Picks we put the spotlight on everything AI: our analysis of the IT stocks selloff, an exclusive interview with Google DeepMind VP Pushmeet Kohli, and the AI-inflicted pressure on GCCs. Also, read our data story on how states rank on inflation after the release of the new CPI series, how India may get a conditional zero reciprocal duty on apparel akin to Bangladesh, and what revisions were made to the White House fact sheet on the deal with India. All this and a lot more inside.

Decoding AI for Marketing
Why Rule-Based Marketing Is Breaking

Decoding AI for Marketing

Play Episode Listen Later Feb 10, 2026 39:58


Konrad Feldman, co-founder and CEO of Quantcast, explains the shift from rule-based "expert systems" to goal-driven, autonomous AI, the evolution of DSPs, the hidden limits of "AI-washed" platforms, and why measurement—not targeting—is the biggest bottleneck holding marketing back. Drawing on three decades of experience in neural networks, machine learning, and programmatic advertising, he shares where he thinks digital advertising is going next.

For Further Reading:
Konrad Feldman on AI Trends: https://marketech-apac.com/expert-up-close-quantcast-ceo-konrad-feldman-on-ai-trends-and-how-marketers-can-leverage-them-for-success/
Why the CEO of Quantcast is Betting on Personalized AI: https://bigthink.com/business/how-ai-will-impact-marketing/
More about Konrad: https://www.linkedin.com/in/konrad-feldman-555132/

Listen on your favorite podcast app: https://pod.link/1715735755

Daily Tech Headlines
EU: TikTok's “Addictive Design” Is Illegal Under DSA – DTH

Daily Tech Headlines

Play Episode Listen Later Feb 7, 2026


Waymo uses DeepMind's Genie 3 AI model to train on simulated scenes, the DoJ probes possible Netflix anticompetitive tactics ahead of the WBD sale, and the AI.com domain sells for about $70 million. Please SUBSCRIBE HERE for free or get DTNS Live ad-free. A special thanks to all our supporters; without you, none of this would be possible.

INSiDER - Dentro la Tecnologia
QuestIT: l'onboarding aziendale assistito dall'IA

INSiDER - Dentro la Tecnologia

Play Episode Listen Later Feb 7, 2026 35:24 Transcription Available


Artificial intelligence is transforming the world of work, from process automation to human resources management. But how can AI make a new employee's entry into a company simpler and more intuitive, streamlining the search for information and improving workers' experience from day one? In this episode we explore the concept of AI-boarding, an innovative approach to corporate onboarding that uses intelligent virtual assistants to guide new hires through their first steps at the company. To find out how these solutions are developed and deployed, we invited Ernesto Di Iorio, CEO of QuestIT, a company specializing in advanced AI-based solutions.

In the news segment we discuss Project Genie 3, the new Google DeepMind model capable of creating three-dimensional virtual worlds from a simple prompt, and the Chinese regulation that will ban electronic-only door opening in cars for safety reasons.

--Index--
00:00 - Introduction
01:42 - Google DeepMind launches Genie 3 (HDBlog.it, Luca Martinelli)
02:55 - China bans electronic-only car door opening (DDay.it, Matteo Gallo)
04:11 - QuestIT: AI-assisted corporate onboarding (Ernesto Di Iorio, Davide Fasoli, Luca Martinelli)
34:32 - Conclusion

--Text--
Read the transcript: https://www.dentrolatecnologia.it/S8E6#testo

--Contacts--
• www.dentrolatecnologia.it
• Instagram (@dentrolatecnologia)
• Telegram (@dentrolatecnologia)
• YouTube (@dentrolatecnologia)
• redazione@dentrolatecnologia.it

--Images--
• Cover photo: Rocketpixel on Freepik

--Tracks--
• Ecstasy by Rabbit Theft
• No Pressure by Tim Beeren & xChenda

Daily Tech News Show
Anthropic Releases Opus 4.6, Software Stocks Tumble Again - DTNS 5201

Daily Tech News Show

Play Episode Listen Later Feb 6, 2026 28:48


Waymo is training its fleet on edge-case driving scenarios with DeepMind's Genie 3, and TikTok might have to change its infinite scroll behavior to address health concerns in the EU. Starring Jason Howell and Huyen Tue Dao. Show notes can be found here. Hosted on Acast. See acast.com/privacy for more information.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
The First Mechanistic Interpretability Frontier Lab — Myra Deng & Mark Bissell of Goodfire AI

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Feb 6, 2026 68:01


From Palantir and Two Sigma to building Goodfire into the poster-child for actionable mechanistic interpretability, Mark Bissell (Member of Technical Staff) and Myra Deng (Head of Product) are trying to turn “peeking inside the model” into a repeatable production workflow by shipping APIs, landing real enterprise deployments, and now scaling the bet with a recent $150M Series B funding round at a $1.25B valuation.In this episode, we go far beyond the usual “SAEs are cool” take. We talk about Goodfire's core bet: that the AI lifecycle is still fundamentally broken because the only reliable control we have is data and we post-train, RLHF, and fine-tune by “slurping supervision through a straw,” hoping the model picks up the right behaviors while quietly absorbing the wrong ones. Goodfire's answer is to build a bi-directional interface between humans and models: read what's happening inside, edit it surgically, and eventually use interpretability during training so customization isn't just brute-force guesswork.Mark and Myra walk through what that looks like when you stop treating interpretability like a lab demo and start treating it like infrastructure: lightweight probes that add near-zero latency, token-level safety filters that can run at inference time, and interpretability workflows that survive messy constraints (multilingual inputs, synthetic→real transfer, regulated domains, no access to sensitive data). We also get a live window into what “frontier-scale interp” means operationally (i.e. steering a trillion-parameter model in real time by targeting internal features) plus why the same tooling generalizes cleanly from language models to genomics, medical imaging, and “pixel-space” world models.We discuss:* Myra + Mark's path: Palantir (health systems, forward-deployed engineering) → Goodfire early team; Two Sigma → Head of Product, translating frontier interpretability research into a platform and real-world deployments* What “interpretability” actually means in practice: not just post-hoc poking, but a broader “science of deep learning” approach across the full AI lifecycle (data curation → post-training → internal representations → model design)* Why post-training is the first big wedge: “surgical edits” for unintended behaviors likereward hacking, sycophancy, noise learned during customization plus the dream of targeted unlearning and bias removal without wrecking capabilities* SAEs vs probes in the real world: why SAE feature spaces sometimes underperform classifiers trained on raw activations for downstream detection tasks (hallucination, harmful intent, PII), and what that implies about “clean concept spaces”* Rakuten in production: deploying interpretability-based token-level PII detection at inference time to prevent routing private data to downstream providers plus the gnarly constraints: no training on real customer PII, synthetic→real transfer, English + Japanese, and tokenization quirks* Why interp can be operationally cheaper than LLM-judge guardrails: probes are lightweight, low-latency, and don't require hosting a second large model in the loop* Real-time steering at frontier scale: a demo of steering Kimi K2 (~1T params) live and finding features via SAE pipelines, auto-labeling via LLMs, and toggling a “Gen-Z slang” feature across multiple layers without breaking tool use* Hallucinations as an internal signal: the case that models have latent uncertainty / “user-pleasing” circuitry you can detect and potentially mitigate more directly than black-box methods* 
Steering vs prompting: the emerging view that activation steering and in-context learning are more closely connected than people think, including work mapping between the two (even for jailbreak-style behaviors)* Interpretability for science: using the same tooling across domains (genomics, medical imaging, materials) to debug spurious correlations and extract new knowledge up to and including early biomarker discovery work with major partners* World models + “pixel-space” interpretability: why vision/video models make concepts easier to see, how that accelerates the feedback loop, and why robotics/world-model partners are especially interesting design partners* The north star: moving from “data in, weights out” to intentional model design where experts can impart goals and constraints directly, not just via reward signals and brute-force post-training—Goodfire AI* Website: https://goodfire.ai* LinkedIn: https://www.linkedin.com/company/goodfire-ai/* X: https://x.com/GoodfireAIMyra Deng* Website: https://myradeng.com/* LinkedIn: https://www.linkedin.com/in/myra-deng/* X: https://x.com/myra_dengMark Bissell* LinkedIn: https://www.linkedin.com/in/mark-bissell/* X: https://x.com/MarkMBissellFull Video EpisodeTimestamps00:00:00 Introduction00:00:05 Introduction to the Latent Space Podcast and Guests from Goodfire00:00:29 What is Goodfire? Mission and Focus on Interpretability00:01:01 Goodfire's Practical Approach to Interpretability00:01:37 Goodfire's Series B Fundraise Announcement00:02:04 Backgrounds of Mark and Myra from Goodfire00:02:51 Team Structure and Roles at Goodfire00:05:13 What is Interpretability? Definitions and Techniques00:05:30 Understanding Errors00:07:29 Post-training vs. Pre-training Interpretability Applications00:08:51 Using Interpretability to Remove Unwanted Behaviors00:10:09 Grokking, Double Descent, and Generalization in Models00:10:15 404 Not Found Explained00:12:06 Subliminal Learning and Hidden Biases in Models00:14:07 How Goodfire Chooses Research Directions and Projects00:15:00 Troubleshooting Errors00:16:04 Limitations of SAEs and Probes in Interpretability00:18:14 Rakuten Case Study: Production Deployment of Interpretability00:20:45 Conclusion00:21:12 Efficiency Benefits of Interpretability Techniques00:21:26 Live Demo: Real-Time Steering in a Trillion Parameter Model00:25:15 How Steering Features are Identified and Labeled00:26:51 Detecting and Mitigating Hallucinations Using Interpretability00:31:20 Equivalence of Activation Steering and Prompting00:34:06 Comparing Steering with Fine-Tuning and LoRA Techniques00:36:04 Model Design and the Future of Intentional AI Development00:38:09 Getting Started in Mechinterp: Resources, Programs, and Open Problems00:40:51 Industry Applications and the Rise of Mechinterp in Practice00:41:39 Interpretability for Code Models and Real-World Usage00:43:07 Making Steering Useful for More Than Stylistic Edits00:46:17 Applying Interpretability to Healthcare and Scientific Discovery00:49:15 Why Interpretability is Crucial in High-Stakes Domains like Healthcare00:52:03 Call for Design Partners Across Domains00:54:18 Interest in World Models and Visual Interpretability00:57:22 Sci-Fi Inspiration: Ted Chiang and Interpretability01:00:14 Interpretability, Safety, and Alignment Perspectives01:04:27 Weak-to-Strong Generalization and Future Alignment Challenges01:05:38 Final Thoughts and Hiring/Collaboration Opportunities at GoodfireTranscriptShawn Wang [00:00:05]: So welcome to the Latent Space pod. 
We're back in the studio with our special MechInterp co-host, Vibhu. Welcome. Mochi, Mochi's special co-host. And Mochi, the mechanistic interpretability doggo. We have with us Mark and Myra from Goodfire. Welcome. Thanks for having us on. Maybe we can sort of introduce Goodfire and then introduce you guys. How do you introduce Goodfire today?Myra Deng [00:00:29]: Yeah, it's a great question. So Goodfire, we like to say, is an AI research lab that focuses on using interpretability to understand, learn from, and design AI models. And we really believe that interpretability will unlock the new generation, next frontier of safe and powerful AI models. That's our description right now, and I'm excited to dive more into the work we're doing to make that happen.Shawn Wang [00:00:55]: Yeah. And there's always like the official description. Is there an understatement? Is there an unofficial one that sort of resonates more with a different audience?Mark Bissell [00:01:01]: Well, being an AI research lab that's focused on interpretability, there's obviously a lot of people have a lot that they think about when they think of interpretability. And I think we have a pretty broad definition of what that means and the types of places that can be applied. And in particular, applying it in production scenarios, in high stakes industries, and really taking it sort of from the research world into the real world. Which, you know. It's a new field, so that hasn't been done all that much. And we're excited about actually seeing that sort of put into practice.Shawn Wang [00:01:37]: Yeah, I would say it wasn't too long ago that Anthopic was like still putting out like toy models or superposition and that kind of stuff. And I wouldn't have pegged it to be this far along. When you and I talked at NeurIPS, you were talking a little bit about your production use cases and your customers. And then not to bury the lead, today we're also announcing the fundraise, your Series B. $150 million. $150 million at a 1.25B valuation. Congrats, Unicorn.Mark Bissell [00:02:02]: Thank you. Yeah, no, things move fast.Shawn Wang [00:02:04]: We were talking to you in December and already some big updates since then. Let's dive, I guess, into a bit of your backgrounds as well. Mark, you were at Palantir working on health stuff, which is really interesting because the Goodfire has some interesting like health use cases. I don't know how related they are in practice.Mark Bissell [00:02:22]: Yeah, not super related, but I don't know. It was helpful context to know what it's like. Just to work. Just to work with health systems and generally in that domain. Yeah.Shawn Wang [00:02:32]: And Mara, you were at Two Sigma, which actually I was also at Two Sigma back in the day. Wow, nice.Myra Deng [00:02:37]: Did we overlap at all?Shawn Wang [00:02:38]: No, this is when I was briefly a software engineer before I became a sort of developer relations person. And now you're head of product. What are your sort of respective roles, just to introduce people to like what all gets done in Goodfire?Mark Bissell [00:02:51]: Yeah, prior to Goodfire, I was at Palantir for about three years as a forward deployed engineer, now a hot term. Wasn't always that way. And as a technical lead on the health care team and at Goodfire, I'm a member of the technical staff. And honestly, that I think is about as specific as like as as I could describe myself because I've worked on a range of things. 
And, you know, it's it's a fun time to be at a team that's still reasonably small. I think when I joined one of the first like ten employees, now we're above 40, but still, it looks like there's always a mix of research and engineering and product and all of the above. That needs to get done. And I think everyone across the team is, you know, pretty, pretty switch hitter in the roles they do. So I think you've seen some of the stuff that I worked on related to image models, which was sort of like a research demo. More recently, I've been working on our scientific discovery team with some of our life sciences partners, but then also building out our core platform for more of like flexing some of the kind of MLE and developer skills as well.Shawn Wang [00:03:53]: Very generalist. And you also had like a very like a founding engineer type role.Myra Deng [00:03:58]: Yeah, yeah.Shawn Wang [00:03:59]: So I also started as I still am a member of technical staff, did a wide range of things from the very beginning, including like finding our office space and all of this, which is we both we both visited when you had that open house thing. It was really nice.Myra Deng [00:04:13]: Thank you. Thank you. Yeah. Plug to come visit our office.Shawn Wang [00:04:15]: It looked like it was like 200 people. It has room for 200 people. But you guys are like 10.Myra Deng [00:04:22]: For a while, it was very empty. But yeah, like like Mark, I spend. A lot of my time as as head of product, I think product is a bit of a weird role these days, but a lot of it is thinking about how do we take our frontier research and really apply it to the most important real world problems and how does that then translate into a platform that's repeatable or a product and working across, you know, the engineering and research teams to make that happen and also communicating to the world? Like, what is interpretability? What is it used for? What is it good for? Why is it so important? All of these things are part of my day-to-day as well.Shawn Wang [00:05:01]: I love like what is things because that's a very crisp like starting point for people like coming to a field. They all do a fun thing. Vibhu, why don't you want to try tackling what is interpretability and then they can correct us.Vibhu Sapra [00:05:13]: Okay, great. So I think like one, just to kick off, it's a very interesting role to be head of product, right? Because you guys, at least as a lab, you're more of an applied interp lab, right? Which is pretty different than just normal interp, like a lot of background research. But yeah. You guys actually ship an API to try these things. You have Ember, you have products around it, which not many do. Okay. What is interp? So basically you're trying to have an understanding of what's going on in model, like in the model, in the internal. So different approaches to do that. You can do probing, SAEs, transcoders, all this stuff. But basically you have an, you have a hypothesis. You have something that you want to learn about what's happening in a model internals. And then you're trying to solve that from there. You can do stuff like you can, you know, you can do activation mapping. You can try to do steering. There's a lot of stuff that you can do, but the key question is, you know, from input to output, we want to have a better understanding of what's happening and, you know, how can we, how can we adjust what's happening on the model internals? How'd I do?Mark Bissell [00:06:12]: That was really good. I think that was great. 
I think it's also a, it's kind of a minefield of a, if you ask 50 people who quote unquote work in interp, like what is interpretability, you'll probably get 50 different answers. And. Yeah. To some extent also like where, where good fire sits in the space. I think that we're an AI research company above all else. And interpretability is a, is a set of methods that we think are really useful and worth kind of specializing in, in order to accomplish the goals we want to accomplish. But I think we also sort of see some of the goals as even more broader as, as almost like the science of deep learning and just taking a not black box approach to kind of any part of the like AI development life cycle, whether that. That means using interp for like data curation while you're training your model or for understanding what happened during post-training or for the, you know, understanding activations and sort of internal representations, what is in there semantically. And then a lot of sort of exciting updates that were, you know, are sort of also part of the, the fundraise around bringing interpretability to training, which I don't think has been done all that much before. A lot of this stuff is sort of post-talk poking at models as opposed to. To actually using this to intentionally design them.Shawn Wang [00:07:29]: Is this post-training or pre-training or is that not a useful.Myra Deng [00:07:33]: Currently focused on post-training, but there's no reason the techniques wouldn't also work in pre-training.Shawn Wang [00:07:38]: Yeah. It seems like it would be more active, applicable post-training because basically I'm thinking like rollouts or like, you know, having different variations of a model that you can tweak with the, with your steering. Yeah.Myra Deng [00:07:50]: And I think in a lot of the news that you've seen in, in, on like Twitter or whatever, you've seen a lot of unintended. Side effects come out of post-training processes, you know, overly sycophantic models or models that exhibit strange reward hacking behavior. I think these are like extreme examples. There's also, you know, very, uh, mundane, more mundane, like enterprise use cases where, you know, they try to customize or post-train a model to do something and it learns some noise or it doesn't appropriately learn the target task. And a big question that we've always had is like, how do you use your understanding of what the model knows and what it's doing to actually guide the learning process?Shawn Wang [00:08:26]: Yeah, I mean, uh, you know, just to anchor this for people, uh, one of the biggest controversies of last year was 4.0 GlazeGate. I've never heard of GlazeGate. I didn't know that was what it was called. The other one, they called it that on the blog post and I was like, well, how did OpenAI call it? Like officially use that term. And I'm like, that's funny, but like, yeah, I guess it's the pitch that if they had worked a good fire, they wouldn't have avoided it. Like, you know what I'm saying?Myra Deng [00:08:51]: I think so. Yeah. Yeah.Mark Bissell [00:08:53]: I think that's certainly one of the use cases. I think. Yeah. Yeah. I think the reason why post-training is a place where this makes a lot of sense is a lot of what we're talking about is surgical edits. You know, you want to be able to have expert feedback, very surgically change how your model is doing, whether that is, you know, removing a certain behavior that it has. 
So, you know, one of the things that we've been looking at or is, is another like common area where you would want to make a somewhat surgical edit is some of the models that have say political bias. Like you look at Quen or, um, R1 and they have sort of like this CCP bias.Shawn Wang [00:09:27]: Is there a CCP vector?Mark Bissell [00:09:29]: Well, there's, there are certainly internal, yeah. Parts of the representation space where you can sort of see where that lives. Yeah. Um, and you want to kind of, you know, extract that piece out.Shawn Wang [00:09:40]: Well, I always say, you know, whenever you find a vector, a fun exercise is just like, make it very negative to see what the opposite of CCP is.Mark Bissell [00:09:47]: The super America, bald eagles flying everywhere. But yeah. So in general, like lots of post-training tasks where you'd want to be able to, to do that. Whether it's unlearning a certain behavior or, you know, some of the other kind of cases where this comes up is, are you familiar with like the, the grokking behavior? I mean, I know the machine learning term of grokking.Shawn Wang [00:10:09]: Yeah.Mark Bissell [00:10:09]: Sort of this like double descent idea of, of having a model that is able to learn a generalizing, a generalizing solution, as opposed to even if memorization of some task would suffice, you want it to learn the more general way of doing a thing. And so, you know, another. A way that you can think about having surgical access to a model's internals would be learn from this data, but learn in the right way. If there are many possible, you know, ways to, to do that. Can make interp solve the double descent problem?Shawn Wang [00:10:41]: Depends, I guess, on how you. Okay. So I, I, I viewed that double descent as a problem because then you're like, well, if the loss curves level out, then you're done, but maybe you're not done. Right. Right. But like, if you actually can interpret what is a generalizing or what you're doing. What is, what is still changing, even though the loss is not changing, then maybe you, you can actually not view it as a double descent problem. And actually you're just sort of translating the space in which you view loss and like, and then you have a smooth curve. Yeah.Mark Bissell [00:11:11]: I think that's certainly like the domain of, of problems that we're, that we're looking to get.Shawn Wang [00:11:15]: Yeah. To me, like double descent is like the biggest thing to like ML research where like, if you believe in scaling, then you don't need, you need to know where to scale. And. But if you believe in double descent, then you don't, you don't believe in anything where like anything levels off, like.Vibhu Sapra [00:11:30]: I mean, also tendentially there's like, okay, when you talk about the China vector, right. There's the subliminal learning work. It was from the anthropic fellows program where basically you can have hidden biases in a model. And as you distill down or, you know, as you train on distilled data, those biases always show up, even if like you explicitly try to not train on them. So, you know, it's just like another use case of. Okay. If we can interpret what's happening in post-training, you know, can we clear some of this? Can we even determine what's there? Because yeah, it's just like some worrying research that's out there that shows, you know, we really don't know what's going on.Mark Bissell [00:12:06]: That is. Yeah. I think that's the biggest sentiment that we're sort of hoping to tackle. 
Nobody knows what's going on. Right. Like subliminal learning is just an insane concept when you think about it. Right. Train a model on not even the logits, literally the output text of a bunch of random numbers. And now your model loves owls. And you see behaviors like that, that are just, they defy, they defy intuition. And, and there are mathematical explanations that you can get into, but. I mean.Shawn Wang [00:12:34]: It feels so early days. Objectively, there are a sequence of numbers that are more owl-like than others. There, there should be.Mark Bissell [00:12:40]: According to, according to certain models. Right. It's interesting. I think it only applies to models that were initialized from the same starting Z. Usually, yes.Shawn Wang [00:12:49]: But I mean, I think that's a, that's a cheat code because there's not enough compute. But like if you believe in like platonic representation, like probably it will transfer across different models as well. Oh, you think so?Mark Bissell [00:13:00]: I think of it more as a statistical artifact of models initialized from the same seed sort of. There's something that is like path dependent from that seed that might cause certain overlaps in the latent space and then sort of doing this distillation. Yeah. Like it pushes it towards having certain other tendencies.Vibhu Sapra [00:13:24]: Got it. I think there's like a bunch of these open-ended questions, right? Like you can't train in new stuff during the RL phase, right? RL only reorganizes weights and you can only do stuff that's somewhat there in your base model. You're not learning new stuff. You're just reordering chains and stuff. But okay. My broader question is when you guys work at an interp lab, how do you decide what to work on and what's kind of the thought process? Right. Because we can ramble for hours. Okay. I want to know this. I want to know that. But like, how do you concretely like, you know, what's the workflow? Okay. There's like approaches towards solving a problem, right? I can try prompting. I can look at chain of thought. I can train probes, SAEs. But how do you determine, you know, like, okay, is this going anywhere? Like, do we have set stuff? Just, you know, if you can help me with all that. Yeah.Myra Deng [00:14:07]: It's a really good question. I feel like we've always at the very beginning of the company thought about like, let's go and try to learn what isn't working in machine learning today. Whether that's talking to customers or talking to researchers at other labs, trying to understand both where the frontier is going and where things are really not falling apart today. And then developing a perspective on how we can push the frontier using interpretability methods. And so, you know, even our chief scientist, Tom, spends a lot of time talking to customers and trying to understand what real world problems are and then taking that back and trying to apply the current state of the art to those problems and then seeing where they fall down basically. And then using those failures or those shortcomings to understand what hills to climb when it comes to interpretability research. So like on the fundamental side, for instance, when we have done some work applying SAEs and probes, we've encountered, you know, some shortcomings in SAEs that we found a little bit surprising. And so have gone back to the drawing board and done work on that. And then, you know, we've done some work on better foundational interpreter models. 
And a lot of our team's research is focused on what is the next evolution beyond SAEs, for instance. And then when it comes to like control and design of models, you know, we tried steering with our first API and realized that it still fell short of black box techniques like prompting or fine tuning. And so went back to the drawing board and we're like, how do we make that not the case and how do we improve it beyond that? And one of our researchers, Ekdeep, who just joined is actually Ekdeep and Atticus are like steering experts and have spent a lot of time trying to figure out like, what is the research that enables us to actually do this in a much more powerful, robust way? So yeah, the answer is like, look at real world problems, try to translate that into a research agenda and then like hill climb on both of those at the same time.Shawn Wang [00:16:04]: Yeah. Mark has the steering CLI demo queued up, which we're going to go into in a sec. But I always want to double click on when you drop hints, like we found some problems with SAEs. Okay. What are they? You know, and then we can go into the demo. Yeah.Myra Deng [00:16:19]: I mean, I'm curious if you have more thoughts here as well, because you've done it in the healthcare domain. But I think like, for instance, when we do things like trying to detect behaviors within models that are harmful or like behaviors that a user might not want to have in their model. So hallucinations, for instance, harmful intent, PII, all of these things. We first tried using SAE probes for a lot of these tasks. So taking the feature activation space from SAEs and then training classifiers on top of that, and then seeing how well we can detect the properties that we might want to detect in model behavior. And we've seen in many cases that probes just trained on raw activations seem to perform better than SAE probes, which is a bit surprising if you think that SAEs are actually also capturing the concepts that you would want to capture cleanly and more surgically. And so that is an interesting observation. I don't think that is like, I'm not down on SAEs at all. I think there are many, many things they're useful for, but we have definitely run into cases where I think the concept space described by SAEs is not as clean and accurate as we would expect it to be for actual like real world downstream performance metrics.Mark Bissell [00:17:34]: Fair enough. Yeah. It's the blessing and the curse of unsupervised methods where you get to peek into the AI's mind. But sometimes you wish that you saw other things when you walked inside there. Although in the PII instance, I think weren't an SAE based approach actually did prove to be the most generalizable?Myra Deng [00:17:53]: It did work well in the case that we published with Rakuten. And I think a lot of the reasons it worked well was because we had a noisier data set. And so actually the blessing of unsupervised learning is that we actually got to get more meaningful, generalizable signal from SAEs when the data was noisy. But in other cases where we've had like good data sets, it hasn't been the case.Shawn Wang [00:18:14]: And just because you named Rakuten and I don't know if we'll get it another chance, like what is the overall, like what is Rakuten's usage or production usage? 
Yeah.Myra Deng [00:18:25]: So they are using us to essentially guardrail and inference time monitor their language model usage and their agent usage to detect things like PII so that they don't route private user information.Myra Deng [00:18:41]: And so that's, you know, going through all of their user queries every day. And that's something that we deployed with them a few months ago. And now we are actually exploring very early partnerships, not just with Rakuten, but with other people around how we can help with potentially training and customization use cases as well. Yeah.Shawn Wang [00:19:03]: And for those who don't know, like it's Rakuten is like, I think number one or number two e-commerce store in Japan. Yes. Yeah.Mark Bissell [00:19:10]: And I think that use case actually highlights a lot of like what it looks like to deploy things in practice that you don't always think about when you're doing sort of research tasks. So when you think about some of the stuff that came up there that's more complex than your idealized version of a problem, they were encountering things like synthetic to real transfer of methods. So they couldn't train probes, classifiers, things like that on actual customer data of PII. So what they had to do is use synthetic data sets. And then hope that that transfer is out of domain to real data sets. And so we can evaluate performance on the real data sets, but not train on customer PII. So that right off the bat is like a big challenge. You have multilingual requirements. So this needed to work for both English and Japanese text. Japanese text has all sorts of quirks, including tokenization behaviors that caused lots of bugs that caused us to be pulling our hair out. And then also a lot of tasks you'll see. You might make simplifying assumptions if you're sort of treating it as like the easiest version of the problem to just sort of get like general results where maybe you say you're classifying a sentence to say, does this contain PII? But the need that Rakuten had was token level classification so that you could precisely scrub out the PII. So as we learned more about the problem, you're sort of speaking about what that looks like in practice. Yeah. A lot of assumptions end up breaking. And that was just one instance where you. A problem that seems simple right off the bat ends up being more complex as you keep diving into it.Vibhu Sapra [00:20:41]: Excellent. One of the things that's also interesting with Interp is a lot of these methods are very efficient, right? So where you're just looking at a model's internals itself compared to a separate like guardrail, LLM as a judge, a separate model. One, you have to host it. Two, there's like a whole latency. So if you use like a big model, you have a second call. Some of the work around like self detection of hallucination, it's also deployed for efficiency, right? So if you have someone like Rakuten doing it in production live, you know, that's just another thing people should consider.Mark Bissell [00:21:12]: Yeah. And something like a probe is super lightweight. Yeah. It's no extra latency really. Excellent.Shawn Wang [00:21:17]: You have the steering demos lined up. So we were just kind of see what you got. I don't, I don't actually know if this is like the latest, latest or like alpha thing.Mark Bissell [00:21:26]: No, this is a pretty hacky demo from from a presentation that someone else on the team recently gave. So this will give a sense for, for technology. So you can see the steering and action. 
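For readers who want to see what the probe-versus-SAE comparison above looks like in code, here is a minimal sketch of a token-level detection probe trained on raw residual-stream activations, in the spirit of the PII guardrail described here. The base model (GPT-2 small), the read-out layer, and the toy synthetic labels are illustrative assumptions, not Goodfire's or Rakuten's actual setup.

```python
# Minimal sketch: a token-level detection probe on raw activations,
# versus the SAE-feature probes discussed above. Everything marked
# "hypothetical" is an assumption for illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tok = AutoTokenizer.from_pretrained("gpt2")           # hypothetical small base model
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

LAYER = 6  # hypothetical read-out layer in the residual stream

def token_activations(text: str) -> torch.Tensor:
    """Return (seq_len, hidden_dim) hidden states at LAYER, one row per token."""
    ids = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    return out.hidden_states[LAYER][0]

# Toy synthetic training data (the episode's point: train on synthetic PII,
# evaluate on real). 1 = token belongs to a fake phone number, 0 = not.
texts = ["My phone number is 555-0100", "The weather is nice today"]
labels = [[0, 0, 0, 0, 1, 1, 1, 1], [0, 0, 0, 0, 0]]  # roughly token-aligned

X, y = [], []
for text, labs in zip(texts, labels):
    acts = token_activations(text)
    n = min(len(labs), acts.shape[0])   # guard against tokenizer/label mismatch
    X.append(acts[:n])
    y.extend(labs[:n])
X = torch.cat(X).numpy()

probe = LogisticRegression(max_iter=1000).fit(X, y)
print("P(token is PII):", probe.predict_proba(X)[:, 1].round(2))

# The SAE-probe variant would train the same classifier on sae.encode(acts),
# i.e. on sparse feature activations rather than raw hidden states.
```

Because the probe is just a linear read-out on activations the model already computes, it adds essentially no latency at inference time, which is the efficiency point made a little later in the conversation.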
Honestly, I think the biggest thing that this highlights is that as we've been growing as a company and taking on kind of more and more ambitious versions of interpretability related problems, a lot of that comes to scaling up in various different forms. And so here you're going to see steering on a 1 trillion parameter model. This is Kimi K2. And so it's sort of fun that in addition to the research challenges, there are engineering challenges that we're now tackling. Cause for any of this to be sort of useful in production, you need to be thinking about what it looks like when you're using these methods on frontier models as opposed to sort of like toy kind of model organisms. So yeah, this was thrown together hastily, pretty fragile behind the scenes, but I think it's quite a fun demo. So screen sharing is on. So I've got two terminal sessions pulled up here. On the left is a forked version that we have of the Kimi CLI that we've got running to point at our custom hosted Kimi model. And then on the right is a set up that will allow us to steer on certain concepts. So I should be able to chat with Kimi over here. Tell it hello. This is running locally. So the CLI is running locally, but the Kimi server is running back to the office. Well, hopefully should be, um, that's too much to run on that Mac. Yeah. I think it's, uh, it takes a full, like each 100 node. I think it's like, you can. You can run it on eight GPUs, eight 100. So, so yeah, Kimi's running. We can ask it a prompt. It's got a forked version of our, uh, of the SG line code base that we've been working on. So I'm going to tell it, Hey, this SG line code base is slow. I think there's a bug. Can you try to figure it out? There's a big code base, so it'll, it'll spend some time doing this. And then on the right here, I'm going to initialize in real time. Some steering. Let's see here.Mark Bissell [00:23:33]: searching for any. Bugs. Feature ID 43205.Shawn Wang [00:23:38]: Yeah.Mark Bissell [00:23:38]: 20, 30, 40. So let me, uh, this is basically a feature that we found that inside Kimi seems to cause it to speak in Gen Z slang. And so on the left, it's still sort of thinking normally it might take, I don't know, 15 seconds for this to kick in, but then we're going to start hopefully seeing him do this code base is massive for real. So we're going to start. We're going to start seeing Kimi transition as the steering kicks in from normal Kimi to Gen Z Kimi and both in its chain of thought and its actual outputs.Mark Bissell [00:24:19]: And interestingly, you can see, you know, it's still able to call tools, uh, and stuff. It's um, it's purely sort of it's it's demeanor. And there are other features that we found for interesting things like concision. So that's more of a practical one. You can make it more concise. Um, the types of programs, uh, programming languages that uses, but yeah, as we're seeing it come in. Pretty good. Outputs.Shawn Wang [00:24:43]: Scheduler code is actually wild.Vibhu Sapra [00:24:46]: Yo, this code is actually insane, bro.Vibhu Sapra [00:24:53]: What's the process of training in SAE on this, or, you know, how do you label features? I know you guys put out a pretty cool blog post about, um, finding this like autonomous interp. Um, something. Something about how agents for interp is different than like coding agents. I don't know while this is spewing up, but how, how do we find feature 43, two Oh five. 
Yeah.Mark Bissell [00:25:15]: So in this case, um, we, our platform that we've been building out for a long time now supports all the sort of classic out of the box interp techniques that you might want to have like SAE training, probing things of that kind, I'd say the techniques for like vanilla SAEs are pretty well established now where. You take your model that you're interpreting, run a whole bunch of data through it, gather activations, and then yeah, pretty straightforward pipeline to train an SAE. There are a lot of different varieties. There's top KSAEs, batch top KSAEs, um, normal ReLU SAEs. And then once you have your sparse features to your point, assigning labels to them to actually understand that this is a gen Z feature, that's actually where a lot of the kind of magic happens. Yeah. And the most basic standard technique is look at all of your d input data set examples that cause this feature to fire most highly. And then you can usually pick out a pattern. So for this feature, If I've run a diverse enough data set through my model feature 43, two Oh five. Probably tends to fire on all the tokens that sounds like gen Z slang. You know, that's the, that's the time of year to be like, Oh, I'm in this, I'm in this Um, and, um, so, you know, you could have a human go through all 43,000 concepts andVibhu Sapra [00:26:34]: And I've got to ask the basic question, you know, can we get examples where it hallucinates, pass it through, see what feature activates for hallucinations? Can I just, you know, turn hallucination down?Myra Deng [00:26:51]: Oh, wow. You really predicted a project we're already working on right now, which is detecting hallucinations using interpretability techniques. And this is interesting because hallucinations is something that's very hard to detect. And it's like a kind of a hairy problem and something that black box methods really struggle with. Whereas like Gen Z, you could always train a simple classifier to detect that hallucinations is harder. But we've seen that models internally have some... Awareness of like uncertainty or some sort of like user pleasing behavior that leads to hallucinatory behavior. And so, yeah, we have a project that's trying to detect that accurately. And then also working on mitigating the hallucinatory behavior in the model itself as well.Shawn Wang [00:27:39]: Yeah, I would say most people are still at the level of like, oh, I would just turn temperature to zero and that turns off hallucination. And I'm like, well, that's a fundamental misunderstanding of how this works. Yeah.Mark Bissell [00:27:51]: Although, so part of what I like about that question is you, there are SAE based approaches that might like help you get at that. But oftentimes the beauty of SAEs and like we said, the curse is that they're unsupervised. So when you have a behavior that you deliberately would like to remove, and that's more of like a supervised task, often it is better to use something like probes and specifically target the thing that you're interested in reducing as opposed to sort of like hoping that when you fragment the latent space, one of the vectors that pops out.Vibhu Sapra [00:28:20]: And as much as we're training an autoencoder to be sparse, we're not like for sure certain that, you know, we will get something that just correlates to hallucination. You'll probably split that up into 20 other things and who knows what they'll be.Mark Bissell [00:28:36]: Of course. Right. Yeah. 
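As a rough illustration of the pipeline Mark just described (gather activations, train a sparse autoencoder, then label a feature by its top-activating examples), here is a minimal top-K SAE sketch. The dimensions, hyperparameters, and the random stand-in for cached activations are assumptions for illustration; this is not Goodfire's implementation.

```python
# Minimal top-K sparse autoencoder over cached activations -- a sketch of
# "run data through the model, gather activations, train an SAE", plus the
# basic max-activating-examples labeling step. All sizes are hypothetical.
import torch
import torch.nn as nn

D_MODEL, D_FEATURES, TOP_K = 768, 8192, 32   # assumed dimensions

class TopKSAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(D_MODEL, D_FEATURES)
        self.decoder = nn.Linear(D_FEATURES, D_MODEL)

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        pre = torch.relu(self.encoder(x))
        _, idx = pre.topk(TOP_K, dim=-1)            # K most active features per row
        mask = torch.zeros_like(pre).scatter_(-1, idx, 1.0)
        return pre * mask                           # zero out everything else

    def forward(self, x: torch.Tensor):
        feats = self.encode(x)
        return self.decoder(feats), feats

# Stand-in for residual-stream activations cached from a real model.
activations = torch.randn(2048, D_MODEL)

sae = TopKSAE()
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
for step in range(200):
    batch = activations[torch.randint(0, len(activations), (256,))]
    recon, _ = sae(batch)
    loss = ((recon - batch) ** 2).mean()  # reconstruction loss; sparsity comes from top-K
    opt.zero_grad(); loss.backward(); opt.step()

# "Auto-labeling": for a chosen feature, pull the inputs on which it fires
# most strongly and look for a pattern (by hand or with an LLM labeler).
FEATURE_ID = 123   # placeholder, analogous to feature 43205 in the demo
with torch.no_grad():
    scores = sae.encode(activations)[:, FEATURE_ID]
    top_rows = scores.topk(10).indices
print("activation rows that most excite the feature:", top_rows.tolist())
```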
So there's no sort of problems with like feature splitting and feature absorption. And then there's the off target effects, right? Ideally, you would want to be very precise where if you reduce the hallucination feature, suddenly maybe your model can't write. Creatively anymore. And maybe you don't like that, but you want to still stop it from hallucinating facts and figures.Shawn Wang [00:28:55]: Good. So Vibhu has a paper to recommend there that we'll put in the show notes. But yeah, I mean, I guess just because your demo is done, any any other things that you want to highlight or any other interesting features you want to show?Mark Bissell [00:29:07]: I don't think so. Yeah. Like I said, this is a pretty small snippet. I think the main sort of point here that I think is exciting is that there's not a whole lot of inter being applied to models quite at this scale. You know, Anthropic certainly has some some. Research and yeah, other other teams as well. But it's it's nice to see these techniques, you know, being put into practice. I think not that long ago, the idea of real time steering of a trillion parameter model would have sounded.Shawn Wang [00:29:33]: Yeah. The fact that it's real time, like you started the thing and then you edited the steering vector.Vibhu Sapra [00:29:38]: I think it's it's an interesting one TBD of what the actual like production use case would be on that, like the real time editing. It's like that's the fun part of the demo, right? You can kind of see how this could be served behind an API, right? Like, yes, you're you only have so many knobs and you can just tweak it a bit more. And I don't know how it plays in. Like people haven't done that much with like, how does this work with or without prompting? Right. How does this work with fine tuning? Like, there's a whole hype of continual learning, right? So there's just so much to see. Like, is this another parameter? Like, is it like parameter? We just kind of leave it as a default. We don't use it. So I don't know. Maybe someone here wants to put out a guide on like how to use this with prompting when to do what?Mark Bissell [00:30:18]: Oh, well, I have a paper recommendation. I think you would love from Act Deep on our team, who is an amazing researcher, just can't say enough amazing things about Act Deep. But he actually has a paper that as well as some others from the team and elsewhere that go into the essentially equivalence of activation steering and in context learning and how those are from a he thinks of everything in a cognitive neuroscience Bayesian framework, but basically how you can precisely show how. Prompting in context, learning and steering exhibit similar behaviors and even like get quantitative about the like magnitude of steering you would need to do to induce a certain amount of behavior similar to certain prompting, even for things like jailbreaks and stuff. It's a really cool paper. Are you saying steering is less powerful than prompting? More like you can almost write a formula that tells you how to convert between the two of them.Myra Deng [00:31:20]: And so like formally equivalent actually in the in the limit. Right.Mark Bissell [00:31:24]: So like one case study of this is for jailbreaks there. I don't know. Have you seen the stuff where you can do like many shot jailbreaking? You like flood the context with examples of the behavior. 
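To make the mechanics of real-time steering concrete (adding a scaled concept direction to the residual stream at one or a few layers, as in the Gen-Z demo), here is a hedged sketch on a small open model. The contrastive-prompt direction, layer choice, and scale are illustrative assumptions; in the demo above the direction would come from a labeled SAE feature, and the model is far larger.

```python
# Minimal activation-steering sketch: inject a concept direction into the
# residual stream of a small open model at a few layers during generation.
# The model, layers, scale, and prompts are assumptions for illustration,
# not the trillion-parameter Kimi setup from the demo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

LAYERS = [5, 6, 7]   # steer on a few middle layers, as mentioned above
SCALE = 8.0          # steering strength; a negative sign flips toward the "opposite"

def mean_hidden(text: str, layer: int) -> torch.Tensor:
    ids = tok(text, return_tensors="pt")
    with torch.no_grad():
        hs = model(**ids, output_hidden_states=True).hidden_states
    return hs[layer][0].mean(dim=0)

# Crude concept direction: difference of means between contrastive prompts.
# (The decoder column of a labeled SAE feature would be the closer analogue.)
direction = {
    layer: mean_hidden("ngl this is lowkey fire fr fr", layer)
           - mean_hidden("This is a formal technical report.", layer)
    for layer in LAYERS
}

def add_steering(layer_idx):
    vec = direction[layer_idx]
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + SCALE * vec.to(hidden.dtype)
        return ((hidden,) + output[1:]) if isinstance(output, tuple) else hidden
    return model.transformer.h[layer_idx].register_forward_hook(hook)

hooks = [add_steering(layer) for layer in LAYERS]

prompt = "Here is a quick summary of the bug report:"
ids = tok(prompt, return_tensors="pt")
out = model.generate(**ids, max_new_tokens=40, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))

for h in hooks:
    h.remove()   # removing the hooks restores the unsteered model instantly
```

Because the hooks only add a vector to activations at inference time, they can be toggled mid-conversation, which is what makes the "real-time" part of the demo possible.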
And the topic put out that paper.Shawn Wang [00:31:38]: A lot of people were like, yeah, we've been doing this, guys.Mark Bissell [00:31:40]: Like, yeah, what's in this in context learning and activation steering equivalence paper is you can like predict the number. Number of examples that you will need to put in there in order to jailbreak the model. That's cool. By doing steering experiments and using this sort of like equivalence mapping. That's cool. That's really cool. It's very neat. Yeah.Shawn Wang [00:32:02]: I was going to say, like, you know, I can like back rationalize that this makes sense because, you know, what context is, is basically just, you know, it updates the KV cache kind of and like and then every next token inference is still like, you know, the sheer sum of everything all the way. It's plus all the context. It's up to date. And you could, I guess, theoretically steer that with you probably replace that with your steering. The only problem is steering typically is on one layer, maybe three layers like like you did. So it's like not exactly equivalent.Mark Bissell [00:32:33]: Right, right. There's sort of you need to get precise about, yeah, like how you sort of define steering and like what how you're modeling the setup. But yeah, I've got the paper pulled up here. Belief dynamics reveal the dual nature. Yeah. The title is Belief Dynamics Reveal the Dual Nature of Incompetence. And it's an exhibition of the practical context learning and activation steering. So Eric Bigelow, Dan Urgraft on the who are doing fellowships at Goodfire, Ekt Deep's the final author there.Myra Deng [00:32:59]: I think actually to your question of like, what is the production use case of steering? I think maybe if you just think like one level beyond steering as it is today. Like imagine if you could adapt your model to be, you know, an expert legal reasoner. Like in almost real time, like very quickly. efficiently using human feedback or using like your semantic understanding of what the model knows and where it knows that behavior. I think that while it's not clear what the product is at the end of the day, it's clearly very valuable. Thinking about like what's the next interface for model customization and adaptation is a really interesting problem for us. Like we have heard a lot of people actually interested in fine-tuning an RL for open weight models in production. And so people are using things like Tinker or kind of like open source libraries to do that, but it's still very difficult to get models fine-tuned and RL'd for exactly what you want them to do unless you're an expert at model training. And so that's like something we'reShawn Wang [00:34:06]: looking into. Yeah. I never thought so. Tinker from Thinking Machines famously uses rank one LoRa. Is that basically the same as steering? Like, you know, what's the comparison there?Mark Bissell [00:34:19]: Well, so in that case, you are still applying updates to the parameters, right?Shawn Wang [00:34:25]: Yeah. You're not touching a base model. You're touching an adapter. It's kind of, yeah.Mark Bissell [00:34:30]: Right. But I guess it still is like more in parameter space then. I guess it's maybe like, are you modifying the pipes or are you modifying the water flowing through the pipes to get what you're after? Yeah. Just maybe one way.Mark Bissell [00:34:44]: I like that analogy. 
That's my mental map of it at least, but it gets at this idea of model design and intentional design, which is something that we're, that we're very focused on. And just the fact that like, I hope that we look back at how we're currently training models and post-training models and just think what a primitive way of doing that right now. Like there's no intentionalityShawn Wang [00:35:06]: really in... It's just data, right? The only thing in control is what data we feed in.Mark Bissell [00:35:11]: So, so Dan from Goodfire likes to use this analogy of, you know, he has a couple of young kids and he talks about like, what if I could only teach my kids how to be good people by giving them cookies or like, you know, giving them a slap on the wrist if they do something wrong, like not telling them why it was wrong or like what they should have done differently or something like that. Just figure it out. Right. Exactly. So that's RL. Yeah. Right. And, and, you know, it's sample inefficient. There's, you know, what do they say? It's like slurping feedback. It's like, slurping supervision. Right. And so you'd like to get to the point where you can have experts giving feedback to their models that are, uh, internalized and, and, you know, steering is an inference time way of sort of getting that idea. But ideally you're moving to a world whereVibhu Sapra [00:36:04]: it is much more intentional design in perpetuity for these models. Okay. This is one of the questions we asked Emmanuel from Anthropic on the podcast a few months ago. Basically the question, was you're at a research lab that does model training, foundation models, and you're on an interp team. How does it tie back? Right? Like, does this, do ideas come from the pre-training team? Do they go back? Um, you know, so for those interested, you can, you can watch that. There wasn't too much of a connect there, but it's still something, you know, it's something they want toMark Bissell [00:36:33]: push for down the line. It can be useful for all of the above. Like there are certainly post-hocVibhu Sapra [00:36:39]: use cases where it doesn't need to touch that. I think the other thing a lot of people forget is this stuff isn't too computationally expensive, right? Like I would say, if you're interested in getting into research, MechInterp is one of the most approachable fields, right? A lot of this train an essay, train a probe, this stuff, like the budget for this one, there's already a lot done. There's a lot of open source work. You guys have done some too. Um, you know,Shawn Wang [00:37:04]: There's like notebooks from the Gemini team for Neil Nanda or like, this is how you do it. Just step through the notebook.Vibhu Sapra [00:37:09]: Even if you're like, not even technical with any of this, you can still make like progress. There, you can look at different activations, but, uh, if you do want to get into training, you know, training this stuff, correct me if I'm wrong is like in the thousands of dollars, not even like, it's not that high scale. And then same with like, you know, applying it, doing it for post-training or all this stuff is fairly cheap in scale of, okay. I want to get into like model training. I don't have compute for like, you know, pre-training stuff. So it's, it's a very nice field to get into. And also there's a lot of like open questions, right? Um, some of them have to go with, okay, I want a product. I want to solve this. Like there's also just a lot of open-ended stuff that people could work on. 
That's interesting. Right. I don't know if you guys have any calls for open questions or open work, things you'd either collaborate on openly or just like to see solved, for people listening who want to get into mech interp, because people always talk about it. What are the things they should check out? And of course, join you guys as well; I'm sure you're hiring.
Myra Deng [00:38:09]: There's a paper, I think from, was it Lee Sharkey? It's Open Problems in Mechanistic Interpretability, which I recommend everyone who's interested in the field read. It's just a really comprehensive overview of the things that experts in the field think are the most important problems to be solved. I also think, to your point, it's been really inspiring to see a lot of young people getting interested in interpretability, and actually not just young people, also scientists who have been experts in physics or biology for many years transitioning into interp, because the barrier to entry is in some ways low and there's a lot of information out there and ways to get started. There's this anecdote of professors at universities saying that all of a sudden every incoming PhD student wants to study interpretability, which was not the case a few years ago. It just goes to show how exciting the field is, how fast it's moving, and how quick it is to get started.
Mark Bissell [00:39:10]: And it's also just a very welcoming community. There's an open-source mech interp Slack channel, people are always posting questions, and folks in the space are always responsive if you ask things on various forums. But yeah, the open problems paper is a really good one.
Myra Deng [00:39:28]: For other people who want to get started, MATS is a great program. What's the acronym for? Machine Learning and Alignment Theory Scholars? It's like the...
Vibhu Sapra [00:39:40]: Normally summer-internship style.
Myra Deng [00:39:42]: Yeah, but they've been doing it year-round now. Actually, a lot of our full-time staff have come through that program. It's great for anyone who is transitioning into interpretability. There are a couple of other fellowship programs too; we do one, as does Anthropic. Those are great places to get started if anyone is interested.
Mark Bissell [00:40:03]: Also, I think interp has been seen as a research field for a very long time, but engineers are sorely wanted for interpretability as well, especially at Goodfire, but elsewhere too as it scales up.
Shawn Wang [00:40:18]: I should mention that Lee actually works with you guys, right? In the London office. And I'm adding our first-ever mech interp track at AI Europe, because I see these industry applications now emerging, and I'm pretty excited to help push that along. Yeah, I was looking forward to that. It'll effectively be the first industry mech interp conference. Yeah. I'm so glad you added that. It's still a little bit of a bet, it's not that widespread, but I can definitely see this is the time to really get into it. We want to be early on things.
Mark Bissell [00:40:51]: For sure. And I think the field understands this, right?
So at ICML, I think the title of the mech interp workshop this year was Actionable Interpretability. And there was a lot of discussion around bringing it to various domains. Everyone's adding pragmatic, actionable, whatever.
Shawn Wang [00:41:10]: It's like, okay, well, we weren't actionable before, I guess. I don't know.
Vibhu Sapra [00:41:13]: And just being in Europe, you see the interp room at the old-school conferences: I think they had a very tiny room until they got lucky and it got doubled. But there's definitely a lot of interest and a lot of niche research, so you see a lot of research coming out of universities and students. We covered a paper last week, two unknown authors, not many citations, but you can do a lot of meaningful work there.
Shawn Wang [00:41:39]: Yeah. One thing people haven't really mentioned yet is interp for code. I think it's an abnormally important field. The conspiracy theory two years ago, when the first SAE work came out of Anthropic, was that they would just use SAEs to turn the bad-code vector down and then turn up the good code. And isn't that the dream? But, I guess, why is it funny? If it were realistic, it would not be funny; it would be, no, actually, we should do this. It's funny because we feel there are some limitations to what steering can do. And a lot of the public image of steering is the Gen Z stuff: oh, you can make it really love the Golden Gate Bridge, or you can make it speak like Gen Z. To be a legal reasoner seems like a huge stretch. And I don't know if it will get there this way.
Myra Deng [00:42:36]: I will say we are announcing something very soon that I will not speak too much about. But this is what we've run into again and again: we don't want to be in a world where steering is only useful for stylistic things. That's definitely not what we're aiming for. But the types of interventions you need to get to things like legal reasoning are much more sophisticated and require breakthroughs in learning algorithms. And that's, um...
Shawn Wang [00:43:07]: And is this an emergent property of scale as well?
Myra Deng [00:43:10]: I think so. Yeah. Scale definitely helps; it allows you to learn a lot of information and reduce noise across large amounts of data. But we also think there are ways to do things much more effectively, even at scale: actually learning exactly what you want from the data and not learning the things you don't want exhibited in the data. So we're not anti-scale, but we're also realizing that scale alone is not going to get us to the type of AI development we want as these models get more powerful and get deployed in all these mission-critical contexts. The current life cycle of training, deploying, and evaluating models is, to us, deeply broken and has opportunities to improve. So, more to come on that very soon.
Mark Bissell [00:44:02]: And I think that's basically a proof point that these concepts do exist.
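To make the "turn the bad-code feature down" idea concrete, here is a rough sketch of clamping a single SAE feature at inference time, reusing the toy SAE and hook pattern from the earlier snippets. The layer, feature index, and clamp value are invented for illustration; this is not Anthropic's or Goodfire's actual pipeline, and production setups usually also add back the SAE's reconstruction error rather than overwriting the residual stream wholesale.

```python
# A rough sketch of clamping one SAE feature: encode the residual stream
# with a trained SAE, clamp a feature, decode, and write the result back
# via a forward hook. Layer, feature index, and clamp value are invented
# for illustration; real pipelines typically add back the SAE's
# reconstruction error instead of overwriting the stream wholesale.
import torch

LAYER, FEATURE_IDX, CLAMP_VALUE = 6, 1234, 0.0   # 0.0 switches the feature off

def make_feature_clamp_hook(sae):
    def hook(_module, _inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        feats = torch.relu(sae.encoder(hidden))    # [batch, seq, d_features]
        feats[..., FEATURE_IDX] = CLAMP_VALUE      # clamp the chosen feature
        patched = sae.decoder(feats)               # map back to model space
        return (patched, *output[1:]) if isinstance(output, tuple) else patched
    return hook

# Usage sketch, assuming `model` and `sae` from the earlier snippets:
# handle = model.transformer.h[LAYER].register_forward_hook(make_feature_clamp_hook(sae))
# ...generate as usual, then handle.remove()
```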
Like, if you can manipulate them in precisely the right way, you can get the ideal combination of them that you desire. Steering is maybe the most coarse-grained peek at what that looks like, but I think it's evocative of what you could do if you had total surgical control over every concept, every parameter. Yeah, exactly.
Myra Deng [00:44:30]: There were bad-code features. I've got it pulled up.
Vibhu Sapra [00:44:33]: Yeah. Just coincidentally, as you guys are talking.
Shawn Wang [00:44:35]: This is exactly it.
Vibhu Sapra [00:44:38]: There's specifically a code-error feature that activates, and they show it's not typo detection: it's typos in code, not typical typos. You can see it clearly activates where there's something wrong in code. And they have malicious code, code error, a whole bunch of finer-grained sub-features broken out.
Shawn Wang [00:45:02]: Yeah. So the rough intuition for me, why I talked about post-training, was that you just have a few different rollouts with all these things turned off and on, and then that's synthetic data you can kind of post-train on.
Vibhu Sapra [00:45:13]: And I think we make it sound easier than it is; they do the real hard work.
Myra Deng [00:45:19]: I mean, you guys have the right idea. Exactly. We replicated a lot of these features in our Llama models as well. I remember there was like...
Vibhu Sapra [00:45:26]: And a lot of this stuff is open, right? You guys opened yours. DeepMind has open-sourced a lot of SAEs on Gemma. Even Anthropic has opened a lot of this. There are a lot of resources we can probably share for people who want to get involved.
Shawn Wang [00:45:41]: Yeah. And a special shout-out to Neuronpedia as well. An amazing piece of work for visualizing these things.
Myra Deng [00:45:49]: Yeah, exactly.
Shawn Wang [00:45:50]: I wanted to pivot a little bit onto the healthcare side, because I think that's a big use case for you guys, and we haven't really talked about it yet. This is a bit of a crossover for me, because we do have a separate science pod that we're starting up for AI for science: it's such a huge investment category, and I'm less qualified to cover it, but we actually have bio PhDs to cover that, which is great. But I want to recap your work, maybe on the Evo 2 stuff, and then build forward from there.
Mark Bissell [00:46:17]: Yeah, for sure. And maybe to frame up the conversation: another interesting lens on interpretability in general is that a lot of the techniques we've described are ways to solve the AI-human interface problem, and bidirectional communication is the goal there. What we've been talking about with intentional design of models and steering, but also more advanced techniques, is having humans impart our desires and control into and over models. The reverse is also very interesting, especially as you get to superhuman models, whether that's narrow superintelligence, like these scientific models that work on genomics data, medical imaging, things like that, or, down the line, superintelligence of other forms as well.
What knowledge can the AIs teach us? That's the other direction of it. And some of our life-science work to date has been getting at exactly that question. Some of it does look like debugging these various life-sciences models: understanding whether they're actually performing well on tasks or whether they're picking up on spurious correlations. For instance, with genomics models you would like to know whether they're focusing on the biologically relevant things you care about, or whether they're using some simpler correlate, like the ancestry of the person they're looking at. But then also, in the instances where they are superhuman, maybe they understand elements of the human genome that we don't have names for, specific discoveries they've made that we don't know about; surfacing that is a big goal. And we're already seeing that: we're partnered with organizations like Mayo Clinic, a leading research health system in the United States, Arc Institute, as well as a startup called Prima Mente, which focuses on neurodegenerative disease. In our partnership with them, we've used foundation models they've been training and applied our interpretability techniques to find novel biomarkers for Alzheimer's disease. I think this is just the tip of the iceberg, but that's a flavor of some of the things we're working on.
Shawn Wang [00:48:36]: Yeah, I think that's really fantastic. Obviously, we did the Chan Zuckerberg pod last year as well, and there's a plethora of these models coming out because there's so much potential in research. And it's very interesting how it's basically the same as language models, just with a different underlying data set. It's the same exact techniques; there's no change, basically.
Mark Bissell [00:48:59]: Yeah. Well, and even in other domains, right? Like robotics: I know a lot of the companies just use Gemma as the backbone and then make it into a VLA that takes these actions. It's transformers all the way down. So yeah.
Vibhu Sapra [00:49:15]: Like we have MedGemma now, right? Even this week there was MedGemma 1.5. And they're training it on this stuff: 3D scans, medical domain knowledge, and all of that too. So there's a push from both sides. But one of the things about mech interp is that you're a little bit more cautious in some domains, right? Healthcare mainly being one. Guardrails, understanding: we're more risk-averse to something going wrong there. So even just from a basic-understanding standpoint, if we're trusting these systems to make claims, we want to know why and what's going on.
Myra Deng [00:49:51]: Yeah, I think there's totally a kind of deployment bottleneck to actually using foundation models for real patient usage and things like that. Say you're using a model for rare-disease prediction: you probably want some explanation as to why your model predicted a certain outcome, and an interpretable explanation at that. So that's definitely a use case. But I also think that being able to extract scientific information that no human knows, to accelerate drug discovery and disease treatment and things like that, actually is a really, really big unlock for scientific discovery. And you've seen a lot of startups say that they're going to accelerate scientific discovery.
And I feel like we actually are doing that through our interp techniques, and kind of almost by accident. We got reached out to very, very early on by these healthcare institutions, and none of us had healthcare backgrounds.
Shawn Wang [00:50:49]: How did they even hear of you? A podcast.
Myra Deng [00:50:51]: Oh, okay. Yeah, a podcast.
Vibhu Sapra [00:50:53]: Okay, well, now's that time, you know.
Myra Deng [00:50:55]: Everyone can call us.
Shawn Wang [00:50:56]: Podcasts are the most important thing. Everyone should listen to podcasts.
Myra Deng [00:50:59]: Yeah, they reached out. They were like, we have these really smart models that we've trained, and we want to know what they're doing. And we were really early at that time, like three months old, and it was a few of us. We were like, oh my God, we've never used these models, let's figure it out. But it's also great proof that interp techniques scale pretty well across domains. We didn't really have to learn too much domain detail.
Shawn Wang [00:51:21]: Interp is a machine learning technique, and machine learning is everywhere, right? It's just a general insight. It probably applies to finance too, I think, which would be fun. I don't know if you have anything to say there.
Mark Bissell [00:51:34]: Yeah, well, just across the sciences: we've also done work on materials science. It really runs the gamut.
Vibhu Sapra [00:51:40]: Yeah. Awesome. And for those who should reach out: you're obviously experts in this, but is there a call-out for people you're looking to partner with? Design partners, people to use your stuff beyond the general developer who wants to plug and play steering stuff, more on the research side? Are there ideal design partners, customers, stuff like that?
Myra Deng [00:52:03]: Yeah, I can talk about maybe non-life-sciences, and then I'm curious to hear from you on the life-sciences side. We're looking for design partners across many domains. Language: anyone who's customizing language models or trying to push the frontier of code or reasoning models is really interesting to us. And then also the frontier of modeling: there are a lot of models that work in pixel space, as we call it. So if you're doing world models, video models, even robotics, where there's not a very clean natural-language interface to interact with, we think interp can really help, and we're looking for a few partners in that space.
Shawn Wang [00:52:43]: Just because you mentioned the keyword

Azeem Azhar's Exponential View
Mustafa Suleyman — AI is hacking our empathy circuits

Azeem Azhar's Exponential View

Play Episode Listen Later Feb 5, 2026 50:16


Welcome to Exponential View, the show where I explore how exponential technologies such as AI are reshaping our future. I've been studying AI and exponential technologies at the frontier for over ten years. Each week, I share some of my analysis or speak with an expert guest to shed light on a particular topic. To keep up with the Exponential transition, subscribe to this channel or to my newsletter: https://www.exponentialview.co/
-----
A week before OpenClaw exploded, I recorded a prescient conversation with Mustafa Suleyman, CEO of Microsoft AI and co-founder of DeepMind. We talked about what happens when AI starts to seem conscious – even if it isn't. Today, you get to hear our conversation.
Mustafa has been sounding the alarm about what he calls "seemingly conscious AI" and the risk of collective AI psychosis for a long time. We discussed this idea of the "fourth class of being" – neither human, tool, nor nature – that AI is becoming and all it brings with it.
Skip to the best bits:
(03:38) Why consciousness means the ability to suffer
(06:52) "Your empathy circuits are being hacked"
(07:23) Consciousness as the basis of rights
(10:47) A fourth class of being
(13:41) Why market forces push toward seemingly conscious AI
(20:56) What AI should never be allowed to say
(25:06) The proliferation problem with open-source chatbots
(29:09) Why we need well-paid civil servants
(30:17) Where should we draw the line with AI?
(37:48) The counterintuitive case for going faster
(42:00) The vibe coding dopamine hit
(47:09) Social intelligence as the next AI frontier
(48:50) The case for humanist super intelligence
-----
Where to find Mustafa:
- X (Twitter): https://x.com/mustafasuleyman
- LinkedIn: https://www.linkedin.com/in/mustafa-suleyman/
- Personal Website: https://mustafa-suleyman.ai/
Where to find me:
- Substack: https://www.exponentialview.co/
- Website: https://www.azeemazhar.com/
- LinkedIn: https://www.linkedin.com/in/azhar
- Twitter/X: https://x.com/azeem
Produced by supermix.io and EPIIPLUS1 Ltd. Production and research: Chantal Smith and Marija Gavrilov. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

IT Privacy and Security Weekly update.
Dark Matter and the IT Privacy and Security Weekly Update for the week ending February 3rd, 2026

IT Privacy and Security Weekly update.

Play Episode Listen Later Feb 4, 2026 19:57


EP 277
In this week's dark matter:
Privacy-first users send a clear message to DuckDuckGo. AI-free search is here to stay for most of its community.
A cutting-edge AI from AISLE exposed deep-seated vulnerabilities in OpenSSL, exponentially speeding the pace of cybersecurity discovery.
A security breach at eScan transformed trusted antivirus software into an unexpected cyber weapon.
An internal probe suggests a cyber intrusion may have prematurely exposed last year's Nobel Peace Prize laureate.
A U.S. jury found former Google engineer Linwei Ding guilty of funneling AI trade secrets to Chinese tech companies.
Newly surfaced records reveal U.S. investigators examined claims that WhatsApp's encryption might not be as airtight as advertised.
Apple's new location "fuzzing" feature gives users the power to stay connected, without being precisely tracked.
A privacy lapse in a talking AI toy exposed thousands of private conversations between children and their plush companions.
Google unleashes new AI to investigate DNA's 'dark matter'. DeepMind's latest creation, AlphaGenome, is shining light on the 98% of DNA that science once found inscrutable.
Come on, let's go unravel some genomes.
Find the full transcript to this podcast here.

AI For Pharma Growth
E203: Building Programmable Biologics from Scratch: How DenovAI's AI is Revolutionizing Drug Discovery

AI For Pharma Growth

Play Episode Listen Later Feb 4, 2026 34:35


Designing proteins that have never existed in nature is no longer sci-fi — it's becoming a real drug discovery strategy. In this episode, Kashif Sadiq, Founder & CEO of DenovAI Biotech, explains how AI is powering a shift from searching for biologic binders to intentionally designing new proteins from scratch.
Kashif shares his journey from studying physics at the University of Cambridge into computational biophysics, and how breakthroughs like AlphaFold from DeepMind helped unlock the next frontier: de novo protein design. Instead of hoping evolution has already produced a usable molecule, Kashif describes how modern AI can engineer bespoke proteins for specific functions, including challenging targets where traditional approaches come up short.
The conversation dives into the sheer scale of “protein space” and why evolution has only explored a tiny fraction of what's possible. Kashif outlines how this opens the door to targeting diseases and biological mechanisms that have historically been considered undruggable, especially where flat protein interfaces or complex signalling pathways have made small molecules ineffective.
Finally, Kashif explains why combining generative AI with physics-based methods is essential to reduce false positives, improve real-world binding performance, and enable “one-shot design” — where discovery and optimisation become a single integrated process. He also shares what keeps him up at night: clinical trial attrition — and why designing better earlier may be the key to improving success later.
Topics Covered
De novo protein design vs traditional biologics discovery
Why evolution explored only a tiny fraction of protein space
“Programmable biologics” and intentional molecular design
Alpha Design and designing proteins from the inverse problem
Antibodies, nanobodies, and therapeutic protein engineering
Combining generative AI with physics-based validation
Reducing false positives in protein binding predictions
“One-shot design” and compressing discovery timelines
Undruggable targets, flat interfaces, and intracellular signalling
Clinical trial attrition and what's missing at the preclinical stage
When the first de novo-designed therapeutic could enter trials
About the Podcast
AI for Pharma Growth is the podcast from pioneering Pharma Artificial Intelligence entrepreneur Dr Andree Bates, created to help pharma, biotech and healthcare organisations understand how AI-based technologies can save time, grow brands, and improve company results.
This show blends deep sector experience with practical, no-fluff conversations that demystify AI for biopharma execs — from start-up biotech right through to Big Pharma. Each episode features experts building AI-powered tools that are driving real-world results across discovery, R&D, clinical trials, market access, medical affairs, regulatory, insights, sales, marketing, and more.
Dr. Andree Bates LinkedIn | Facebook | X

Elon Musk Pod
AI's Big Lie. 50,000 People Were Told AI Took Their Jobs

Elon Musk Pod

Play Episode Listen Later Feb 2, 2026 7:38


Companies blamed AI for over 50,000 layoffs last year, but a new report suggests many of them don't have the AI to replace those workers. Meanwhile, Google launches a model that actually tanks gaming stocks, and DeepMind's CEO tells students to skip internships and learn AI tools instead. What's real and what's hype?

AIA Podcast
OpenClaw (ClawdBot) IS BLOWING UP THE INTERNET, the first constitution for AI, and a 1 GW data center / ПНВ #402

AIA Podcast

Play Episode Listen Later Feb 1, 2026 175:24


Today we break down OpenAI's investment in ultrasonic neural interfaces and Anthropic's new "Constitution" for Claude, look at Elon Musk's mega construction project Colossus 2, with the power draw of two San Franciscos, and dig into the crisis in higher education through the example of the Sora team hiring 23-year-old self-taught engineers. We explore the Prism scientific editor, the spying potential of Open Claw, and Pebble's single-use voice-recorder rings. At the end we discuss a Waymo robotaxi hitting a child, the integration of Grok into Teslas, and the ethics of swearing at AI. And we also say goodbye to Viktor and welcome Viktoria!

Liberty's Highlights
Trillion Dollar Club with Mostly Borrowed Ideas (MBI): Nvidia, Apple, Google, Microsoft, Amazon, TSMC, Meta, Broadcom, and Tesla

Liberty's Highlights

Play Episode Listen Later Jan 30, 2026 115:37


Beyond The Valley
Can We Control AI? DeepMind's Plan for Responsible AI

Beyond The Valley

Play Episode Listen Later Jan 29, 2026 44:25


Google DeepMind's Dawn Bloxwich and Tom Lue join "The Tech Download" to explore one of the biggest questions in technology today: Can we control AI? They break down how DeepMind is building safeguards, stress‑testing its models and working with global regulators to ensure advanced AI develops responsibly.See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Teaser For AI Daily News Rundown January 29 2026: DeepMind's AlphaGenome, The Amazon Layoffs, & China's Moonshot K2.5

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store

Play Episode Listen Later Jan 29, 2026 1:28


DeepMind's AlphaGenome, The Amazon Layoffs, & China's Moonshot K2.5
Full Audio including Detailed Analysis at https://podcasts.apple.com/us/podcast/ai-business-and-development-daily-news-rundown/id1684415169?i=1000747119039

Podcast Notes Playlist: Latest Episodes
Story Of The Most Important Founder You've Never Heard Of

Podcast Notes Playlist: Latest Episodes

Play Episode Listen Later Jan 25, 2026


My First Million: Read the notes at podcastnotes.org. Don't forget to subscribe for free to our newsletter, the top 10 ideas of the week, every Monday --------- Get our Resource Vault - a curated collection of pro-level business resources (tools, guides, databases): https://clickhubspot.com/jbg Episode 786:  Sam Parr ( https://x.com/theSamParr ) and Shaan Puri ( https://x.com/ShaanVP ) tell the story of Demis Hassabis ( https://x.com/demishassabis ) and the creation of DeepMind.  Show Notes: (0:00) Demis the Menace (22:05) The only resource you need is resourcefulness (24:57) Move 37 (29:38) The olympics of protein folding (46:39) We are the gorillas — Links: • The Thinking Game - https://www.youtube.com/watch?v=d95J8yzvjbQ  • Why We Do What We Do - https://www.youtube.com/watch?v=BwFOwyoH-3g  • Fierce Nerds - https://paulgraham.com/fn.html  • Isomorphic Labs - https://www.isomorphiclabs.com/  • If Anyone Builds It, Everyone Dies - https://ifanyonebuildsit.com/  — Check Out Shaan's Stuff: • Shaan's weekly email - https://www.shaanpuri.com  • Visit https://www.somewhere.com/mfm to hire worldwide talent like Shaan and get $500 off for being an MFM listener. Hire developers, assistants, marketing pros, sales teams and more for 80% less than US equivalents. • Mercury - Need a bank for your company? Go check out Mercury (mercury.com). Shaan uses it for all of his companies! Mercury is a financial technology company, not an FDIC-insured bank. Banking services provided by Choice Financial Group, Column, N.A., and Evolve Bank & Trust, Members FDIC • I run all my newsletters on Beehiiv and you should too + we're giving away $10k to our favorite newsletter, check it out: beehiiv.com/mfm-challenge — Check Out Sam's Stuff: • Hampton - https://www.joinhampton.com/ • Ideation Bootcamp - https://www.ideationbootcamp.co/ • Copy That - https://copythat.com • Hampton Wealth Survey - https://joinhampton.com/wealth • Sam's List - http://samslist.co/ My First Million is a HubSpot Original Podcast // Brought to you by HubSpot Media // Production by Arie Desormeaux // Editing by Ezra Bakker Trupiano /

Beyond The Valley
Will AI Make or Break Education?

Beyond The Valley

Play Episode Listen Later Jan 22, 2026 32:41


AI is changing the way we learn and work. Google DeepMind COO Lila Ibrahim joins “The Tech Download” to explain the opportunities, risks and why teaching responsible AI use starts now.See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Moneycontrol Podcast
5003: TCS woos OpenAI, DeepMind's Hassabis on Indian AI & make way for Apple Pay | MC Editor's Picks

Moneycontrol Podcast

Play Episode Listen Later Jan 21, 2026 4:47


Google DeepMind chief executive Demis Hassabis speaks to Moneycontrol exclusively about his thoughts on India's foundational AI models. US President Trump's Davos address keeps markets in the red; Indian markets show prolonged weakness. In other news we track the India-EU trade deal, Deepinder Goyal's resignation and Apple's next move in India. Also find an exclusive interview with Jahangir Aziz, Head of Emerging Markets at JPMorgan, as he weighs in on tariffs and trade policy. Tune in!

My First Million
Story Of The Most Important Founder You've Never Heard Of

My First Million

Play Episode Listen Later Jan 19, 2026 59:36


Get our Resource Vault - a curated collection of pro-level business resources (tools, guides, databases): https://clickhubspot.com/jbg Episode 786:  Sam Parr ( https://x.com/theSamParr ) and Shaan Puri ( https://x.com/ShaanVP ) tell the story of Demis Hassabis ( https://x.com/demishassabis ) and the creation of DeepMind.  Show Notes: (0:00) Demis the Menace (22:05) The only resource you need is resourcefulness (24:57) Move 37 (29:38) The olympics of protein folding (46:39) We are the gorillas — Links: • The Thinking Game - https://www.youtube.com/watch?v=d95J8yzvjbQ  • Why We Do What We Do - https://www.youtube.com/watch?v=BwFOwyoH-3g  • Fierce Nerds - https://paulgraham.com/fn.html  • Isomorphic Labs - https://www.isomorphiclabs.com/  • If Anyone Builds It, Everyone Dies - https://ifanyonebuildsit.com/  — Check Out Shaan's Stuff: • Shaan's weekly email - https://www.shaanpuri.com  • Visit https://www.somewhere.com/mfm to hire worldwide talent like Shaan and get $500 off for being an MFM listener. Hire developers, assistants, marketing pros, sales teams and more for 80% less than US equivalents. • Mercury - Need a bank for your company? Go check out Mercury (mercury.com). Shaan uses it for all of his companies! Mercury is a financial technology company, not an FDIC-insured bank. Banking services provided by Choice Financial Group, Column, N.A., and Evolve Bank & Trust, Members FDIC • I run all my newsletters on Beehiiv and you should too + we're giving away $10k to our favorite newsletter, check it out: beehiiv.com/mfm-challenge — Check Out Sam's Stuff: • Hampton - https://www.joinhampton.com/ • Ideation Bootcamp - https://www.ideationbootcamp.co/ • Copy That - https://copythat.com • Hampton Wealth Survey - https://joinhampton.com/wealth • Sam's List - http://samslist.co/ My First Million is a HubSpot Original Podcast // Brought to you by HubSpot Media // Production by Arie Desormeaux // Editing by Ezra Bakker Trupiano /

Cloud Security Podcast by Google
EP259 Why DeepMind Built a Security LLM Sec-Gemini and How It Beats the Generalists

Cloud Security Podcast by Google

Play Episode Listen Later Jan 19, 2026 33:36


Guest: Elie Bursztein, Distinguished Scientist, Google DeepMind
Topics:
What is Sec-Gemini, and why are we building it?
How does DeepMind decide when to create something like Sec-Gemini? What motivates a decision to focus on something like this vs anything else we might build as a dedicated set of regular Gemini capabilities?
What is Sec-Gemini good at? How do we know it's good at those things? Where and how is it better than a general LLM?
Are we using Sec-Gemini internally?
Resources:
Video version
EP238 Google Lessons for Using AI Agents for Securing Our Enterprise
EP255 Separating Hype from Hazard: The Truth About Autonomous AI Hacking
EP168 Beyond Regular LLMs: How SecLM Enhances Security and What Teams Can Do With It
EP171 GenAI in the Wrong Hands: Unmasking the Threat of Malicious AI and Defending Against the Dark Side
Big Sleep, CodeMender blogs

矽谷輕鬆談 Just Kidding Tech
S2E41 From Chess Prodigy to DeepMind: Demis's 20-Year Long March in Pursuit of AGI

矽谷輕鬆談 Just Kidding Tech

Play Episode Listen Later Jan 18, 2026 26:59


Become a member of this channel to get perks: https://www.youtube.com/channel/UCJIPFjZSCWR15_jxBaK2fQQ/join
A while ago, while traveling, I watched the newly released documentary The Thinking Game, and "jaw-dropping" is the only way I can describe it. The film follows DeepMind founder Demis Hassabis's pursuit of artificial general intelligence (AGI), and as soon as I finished it I decided I had to make an episode about this person and the world-changing company he built. It's hard to imagine that AlphaGo, AlphaFold, and even Gemini all trace back to the epiphany of a 13-year-old chess prodigy. After a match that dragged on for 10 hours, Demis realized that using the human brain only to play zero-sum games was too great a waste. So he moved from game development into neuroscience, eventually founded DeepMind, and pitched Peter Thiel and Elon Musk a crazy plan: "We are going to build an Apollo program for AI. Step one, solve intelligence; step two, use it to solve everything else." This episode is more than a companion to the documentary: I've pieced together the story of Demis's 20-year long march, including the inside story of the Google vs. Facebook talent war, how AlphaFold cracked a problem that had stumped science for 50 years, and how Google DeepMind is now fighting back from adversity. This isn't just a story about building software or games; it's a journey of humanity trying to unravel the mystery of intelligence and crack the code of life. I hope this episode helps you make sense of one of the grandest scientific experiments in human history.
Highlights of this episode:
♟️ The chess prodigy's epiphany: Why did a 10-hour draw make him decide to give up chess and turn to AI?

The AI Breakdown: Daily Artificial Intelligence News and Discussions

Today on the AI Daily Brief, why AI leadership is shifting decisively to the CEO—and why that shift is happening now as AI moves from experimentation to core enterprise strategy. Drawing on new survey data, the episode explores what happens when AI becomes recession-proof, ROI timelines pull forward, and agentic systems start reshaping organizations at scale. Before that, in the headlines: Replit pushes vibe coding all the way to mobile app stores, Higgsfield rockets to unicorn status on explosive growth, Thinking Machines Labs faces a wave of high-profile departures, and DeepMind's Demis Hassabis warns that Chinese AI models are now only months behind the frontier.
Brought to you by:
KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts
Zencoder - From vibe coding to AI-first engineering - http://zencoder.ai/zenflow
Optimizely Opal - The agent orchestration platform built for marketers - https://www.optimizely.com/theaidailybrief
AssemblyAI - The best way to build Voice AI apps - https://www.assemblyai.com/brief
LandfallIP - AI to Navigate the Patent Process - https://landfallip.com/
Robots & Pencils - Cloud-native AI solutions that power results - https://robotsandpencils.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.
The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Interested in sponsoring the show? sponsors@aidailybrief.ai

Beyond The Valley
Demis Hassabis: The Man Behind Google's AI machine

Beyond The Valley

Play Episode Listen Later Jan 15, 2026 52:40


Hosted by Arjun Kharpal and Steve Kovach, CNBC's “The Tech Download” cuts through the noise to unpack the tech stories that matter most for your money. In the debut episode, Google DeepMind CEO Demis Hassabis reveals how the leading AI research lab is driving breakthroughs, as well as what the race to artificial general intelligence means for science, business and society.See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

ACM ByteCast
Andrew Barto and Richard Sutton - Episode 80

ACM ByteCast

Play Episode Listen Later Jan 14, 2026 42:39


In this episode of ACM ByteCast, Rashmi Mohan hosts 2024 ACM A.M. Turing Award laureates Andrew Barto and Richard Sutton. They received the Turing Award for developing the conceptual and algorithmic foundations of reinforcement learning, a computational framework that underpins modern AI systems such as AlphaGo and ChatGPT. Barto is Professor Emeritus in the Department of Information and Computer Sciences at the University of Massachusetts, Amherst. His honors include the UMass Neurosciences Lifetime Achievement Award, the IJCAI Award for Research Excellence, and the IEEE Neural Network Society Pioneer Award. He is a Fellow of IEEE and AAAS. Sutton is a Professor in Computing Science at the University of Alberta, a Research Scientist at Keen Technologies (an artificial general intelligence company) and Chief Scientific Advisor of the Alberta Machine Intelligence Institute (Amii). In the past he was a Distinguished Research Scientist at DeepMind and served as a Principal Technical Staff Member in the AI Department at the AT&T Shannon Laboratory. His honors include the IJCAI Research Excellence Award, a Lifetime Achievement Award from the Canadian Artificial Intelligence Association, and an Outstanding Achievement in Research Award from the University of Massachusetts at Amherst. Sutton is a Fellow of the Royal Society of London, AAAI, and the Royal Society of Canada. In the interview, Andrew and Richard reflect on their long collaboration together and the personal and intellectual paths that led both researchers into CS and reinforcement learning (RL), a field that was once largely neglected. They touch on interdisciplinary explorations across psychology (animal learning), control theory, operations research, cybernetics, and how these inspired their computational models. They also explain some of their key contributions to RL, such as temporal difference (TD) learning and how their ideas were validated biologically with observations of dopamine neurons. Barto and Sutton trace their early research to later systems such as TD-Gammon, Q-learning, and AlphaGo and consider the broader relationship between humans and reinforcement learning-based AI, and how theoretical explorations have evolved into impactful applications in games, robotics, and beyond.

Unchained
DEX in the City: Why Prediction Market 'Insider Trading' Isn't Illegal — Yet

Unchained

Play Episode Listen Later Jan 10, 2026 44:29


Thank you to our sponsor, Mantle! Canton's in bed with Nasdaq, a Google DeepMind paper talks up the role of blockchain in an agentic economy and an alleged insider cashes in on Maduro's capture. In this DEX in the City episode, hosts Katherine Kirkpatrick Bos, Jessi Brooks and Vy Le dive into the implications of Canton's Nasdaq deal, why DeepMind's study matters for crypto and the legality of insider trading on prediction markets. Vy highlights what Canton's Nasdaq deal signals about the priorities of institutions adopting blockchain technology. Katherine and Jessi dig into what happens when the machines take over. Plus, should federal officials be banned from using prediction markets? Hosts: Jessi Brooks Katherine Kirkpatrick Bos TuongVy Le Links: Bitcoin Rallies to $93,000 After U.S. Attack on Venezuela How the x402 Standard Is Enabling AI Agents to Pay Each Other Why the Black Friday Whale's $192 Million Crypto Trade Was Legal DEX in the City: Insider Trading and Crypto: What the Law Actually Says Google DeepMind's agentic economy paper Pawthereum's website A copy of Rep. Ritchie's bill Learn more about your ad choices. Visit megaphone.fm/adchoices

The John Batchelor Show
S8 Ep291: DEEPMIND AND THE GOOGLE ACQUISITION Colleague Gary Rivlin. Mustafa Suleyman and Demis Hassabis founding DeepMind to master games, their sale to Google for $650 million, and the culture clash that followed. NUMBER 12

The John Batchelor Show

Play Episode Listen Later Jan 9, 2026 8:48


DEEPMIND AND THE GOOGLE ACQUISITION Colleague Gary Rivlin. Mustafa Suleyman and Demis Hassabis founding DeepMind to master games, their sale to Google for $650 million, and the culture clash that followed. NUMBER 12 1952

HARDtalk
Mustafa Suleyman, Artificial Intelligence pioneer: people should be healthily afraid of AI

HARDtalk

Play Episode Listen Later Jan 9, 2026 22:59


'As somebody who's deeply techno-optimistic, I invite people to be also healthily afraid and sceptical'
BBC presenter Amol Rajan speaks to the British artificial intelligence entrepreneur Mustafa Suleyman, Chief Executive of Microsoft AI.
He believes in the enormous potential of AI to be a force for good in the world, changing how we live and work for the better. He's committed to developing a humanist superintelligence, one that always works to serve people and never vice versa. But he remains clear about what he sees as the risks, issuing a warning that without the right ethical safeguards, AI could grow powerful enough to overwhelm humanity.
The son of a London taxi-driver and a nurse, he dropped out of Oxford University and by his mid-twenties had co-founded DeepMind, the pioneering artificial intelligence research lab. By the time it was sold to Google four years later in 2014, it was worth a reported $400 million.
Thank you to the Today team for its help in making this programme. The Interview brings you conversations with people shaping our world, from all over the world. The best interviews from the BBC. You can listen on the BBC World Service on Mondays, Wednesdays and Fridays at 0800 GMT. Or you can listen to The Interview as a podcast, out three times a week on BBC Sounds or wherever you get your podcasts.
Presenter: Amol Rajan
Producers: Kate Collins, Ollie Stone-Lee and Lucy Sheppard
Editor: Justine Lang
Get in touch with us on email TheInterview@bbc.co.uk and use the hashtag #TheInterviewBBC on social media.

Class Disrupted
DeepMind's Learnings in Developing an AI Tutor

Class Disrupted

Play Episode Listen Later Jan 9, 2026


Irina Jurenka, the research lead for AI in education at Google DeepMind, joined Michael and Diane to discuss the development and impact of AI tutors in learning. The conversation delved into how generative AI, specifically the Gemini model, is being shaped to support pedagogical principles and foster more effective learning experiences. Irina shares insights from…

Startup Inside Stories
AI, Product, and Business: The Decisions That Shape the Future | First Roundtable of the Year

Startup Inside Stories

Play Episode Listen Later Jan 8, 2026 84:24


The roundtable opens with a light "back to the new year" conversation about the cold in Barcelona and recent trips (Switzerland and the French Riviera), with an anecdote or two about Monaco/Monte Carlo and its aura of luxury and oddities. From there we move on to more current internet topics, criticizing the massive and rather unpleasant use of AI on X/Twitter to request photo edits, and commenting on how the platform's format and algorithm keep pushing increasingly toxic content and debates. Then we land on "enterprise" AI: the idea that the value lies not only in the model but in the data, the distribution, and knowing how to actually deploy agents and automations, to the point that internal roles dedicated to building agent workflows are starting to appear. In parallel, we open the broader industry debate about whether the current LLM approach is hitting a ceiling, mention the clash of visions in the field (LeCun vs Alexander Wang), and discuss the bet on "world models" as the next leap. We close by mixing investment and hardware (the race for inference, the money around xAI/X), with a clear takeaway: the big limit on all of this may be energy, which is why any serious advance in solid-state batteries seems key to them.

Conectando Puntos
Episode 249: The Algorithmic Iron Cage

Conectando Puntos

Play Episode Listen Later Jan 8, 2026 40:38


After a long silence that seems to have suspended time itself, we return to find that, although we stopped, the inertia of the world and its automatisms did not. Is it possible that we are already living inside an invisible structure that prioritizes efficiency over freedom? Have we already crossed the point of no return, where algorithms not only assist us but govern us without owing us an explanation? Impossible connections and a bit of philosoph-AI for this return to the stage we were so excited about. Remember that everything ends and everything begins in Episode 248: The algorithmic point of no return, the direct antecedent where we raised the threshold at which we lose control over essential systems. Here are the contents for continuing to connect the dots: Bulletin of the Atomic Scientists – Doomsday Clock: The Doomsday Clock is not a mere symbolic tool; it is a reminder we have overlooked for far too long. Since 1947, top scientists have assessed each year how close we are to midnight, the catastrophic destruction that initially represented only nuclear threats. What fascinates us in this episode is how the clock has evolved to include threats these scientists' grandparents never contemplated: artificial intelligence, climate change, disruptive biology. In 2025, for the first time in 78 years, the clock was set at 89 seconds to midnight. Only one second's difference from 2024, but a gesture that says it all: AI is not a future threat; it is here, now, accelerating risks that already seemed insurmountable. AESIA – the Spanish Agency for the Supervision of Artificial Intelligence: Spain has launched a body dedicated exclusively to supervising AI. AESIA is an institution with real power to demand explainability, to inspect high-risk systems, and to establish that algorithms cannot be perpetual black boxes. It began operating in 2025, as Europe approved its AI directive. What the episode underlines is crucial: regulation arrives late. While AESIA inspects new systems, more than a thousand older medical algorithms keep operating without meeting those transparency requirements. Civio – the BOSCO ruling and algorithmic transparency: A citizen watchdog organization took to the Spanish Supreme Court a case that would change something fundamental: access to the source code of BOSCO, the algorithm that decides who receives electricity assistance and who does not. For years the Government argued national security, intellectual property, trade secrets. The Supreme Court has said no. The 2025 ruling set precedent: algorithmic transparency is a democratic right. Algorithms that condition social rights cannot be opaque. For the first time, a high court recognizes that we live in a "digital democracy" in which citizens have the right to scrutinize, to know, to understand how the machine that decides on their lives works. BOSCO was just one example. The ruling opens the door to transparency demands on any system the public administration uses for automated decisions. It is small, incredibly important, and probably insufficient. Reshuffle: Who Wins When AI Restacks the Knowledge Economy – Sangeet Paul Choudary: This book is exactly what we needed to read before recording this episode.
Choudary does not talk about how AI automates tasks; he talks about how AI reshapes the entire order of how we work, how we coordinate, how we create value. Reshuffle is not a catalogue of fears; it is an analysis of how new forms of coordination without centralized control are emerging. The book connects with what we discussed about opacity: it is not just that algorithms are opaque, it is that they are reorganizing entire organizational structures. Choudary describes companies that no longer know who is responsible for what, because the machines coordinate without the need for human consensus. It is Max Weber accelerated to neural-network speed. The Thinking Game – documentary about Demis Hassabis and DeepMind: A documentary that films the pursuit of an obsession: Demis Hassabis spent his whole life trying to solve intelligence. The Thinking Game, produced by the team that made the AlphaGo documentary, shows five years inside DeepMind and the crucial moments when AI jumped from games to solving real biological problems with AlphaFold. What hurts to watch here is that Hassabis solved a 50-year-old problem in biology and open-sourced it. The uncomfortable question is: how many other Hassabises are inside corporate labs with the opposite incentives, keeping secrets? The Thinking Game is a portrait of what could be if the scientific impulse won out over the extractive one. We recommend watching it before any conversation about where the real progress in AI lies. Las horas del caos: La DANA. Crónica de una tragedia: Sergi Pitarch reconstructs, hour by hour, October 29, 2024, the day the DANA storm devastated Valencia. What makes this book different is that it does not only recount what happened; it documents what was not done, who was responsible for silencing warnings, and what decisions were made in dark rooms while thousands were left trapped. It is a long piece of journalistic reporting in the American deep-investigation style. We connect it to the episode because the Valencia tragedy is a mirror: systems with algorithms that were supposed to predict, emergency teams that were supposed to communicate, protocols that were supposed to activate. But there were silences, opacities, diluted responsibility. Exactly what happens when algorithms fail and nobody knows who pays the price. Pitarch writes so that the victims are not forgotten and so that the next tragedy is not repeated with the same negligence. Anatomía de un instante: A series based on Javier Cercas's book, which examines Spain's 23-F, the 1981 attempted military coup, but does so as a psychologist of history: what is it that turns a man into a hero at a crucial instant? We bring it up here because the book is about how our systems, our institutions, our power structures are held up by unpredictable moments, by individual actions that algorithms cannot model. AI promises predictability, certainty, order. Cercas reminds us that history is a discipline of the unpredictable, that the instants that define us do not come out of an equation. A final note: thank you for being here. A year later, with no DeLorean and no time travel, but with the certainty that while we were trying to go back, the world kept moving forward. That was the real experiment: checking whether we could connect the dots again after twelve months in which the algorithms kept writing the script. The answer is yes. But the more uncomfortable question remains: do we really know where we are inside that iron cage?
Or have we only just realized that there are walls? To contact us, you can use our Twitter account (@conectantes), Instagram (conectandopuntos) or the contact form on our website conectandopuntos.es. You can listen to us on iVoox, iTunes or Spotify (search for our name, it's easy). Program credits. Intro: Stefan Kanterberg 'By by baby' (CC Attribution license). Outro: Stefan Kanterberg 'Guitalele's Happy Place' (CC Attribution license). Photo: created with AI. Want to sponsor this podcast? You can do so through this link. The post Episode 249: The Algorithmic Iron Cage was first published on Conectando Puntos.

The Cloud Pod
337: AWS Discovers Prices Can Go Both Ways, Raises GPU Costs 15 Percent

The Cloud Pod

Play Episode Listen Later Jan 6, 2026 52:01


Welcome to episode 337 of The Cloud Pod, where the forecast is always cloudy! Justin, Matt, and Ryan have hit the recording studio to bring you all the latest in cloud and AI news, from acquisitions and price hikes to new tools that Ryan somehow loves but also hates? We don't understand either… but let's get started!
Titles we almost went with this week
Prompt Engineering Our Way Into Trouble
The Demo Worked Yesterday, We Swear
It Scales Horizontally, Trust Us
Responsible AI But Terrible Copy (Marketing Edition)
General News
00:58 Watch ‘The Thinking Game' documentary for free on YouTube
Google DeepMind is releasing “The Thinking Game” documentary for free on YouTube starting November 25, marking the fifth anniversary of AlphaFold. The feature-length film provides behind-the-scenes access to the AI lab and documents the team's work toward artificial general intelligence over five years. The documentary captures the moment when the AlphaFold team learned they had solved the 50-year protein folding problem in biology, a scientific achievement that recently earned Demis Hassabis and John Jumper the Nobel Prize in Chemistry. This represents one of the most significant practical applications of deep learning to fundamental scientific research.
The film was produced by the same award-winning team that created the AlphaGo documentary, which chronicled DeepMind's earlier achievement in mastering the game of Go. For cloud and AI practitioners, this offers insight into how Google DeepMind approaches complex AI research problems and the development process behind their models. While this is primarily a documentary release rather than a technical product announcement, it provides context for understanding Google's broader AI strategy and the research foundation underlying its cloud AI services. The AlphaFold model itself is available through Google Cloud for protein structure prediction workloads.
01:54 Justin – “If you're not into technology, don't care about any of that, and don't care about AI and how they built all the AI models that are now powering the world of LLMs we have, you will not like this documentary.”
04:22 ServiceNow to buy Armis in $7.7 billion security deal • The Register
ServiceNow is acquiring Armis for $7.75 billion to integrate real-time security intelligence with its Configuration Management Database, allowing customers to identify vulnerabilities across IT, OT, and medical devices and remediate them through automated workflows.

TechCheck
Google's Boston Dynamics partnership, and Tesla's AV, Robotics challengers 1/6/26

TechCheck

Play Episode Listen Later Jan 6, 2026 6:18


Google's DeepMind unit announcing a partnership with Boston Dynamics at CES, where robotics stole the show. We dig into the biggest announcements and what they mean in the race for physical AI. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

The John Batchelor Show
S8 Ep270: FOUNDING OPENAI Colleague Keach Hagey, The Optimist. In 2015, Sam Altman, Greg Brockman, and Ilya Sutskever founded OpenAI as a nonprofit research lab to develop safe artificial general intelligence (AGI). Backed by investors like Elon Musk and

The John Batchelor Show

Play Episode Listen Later Jan 3, 2026 10:30


FOUNDING OPENAI Colleague Keach Hagey, The Optimist. In 2015, Sam Altman, Greg Brockman, and Ilya Sutskever founded OpenAI as a nonprofit research lab to develop safe artificial general intelligence (AGI). Backed by investors like Elon Musk and Peter Thiel, the organization aimed to be a counterweight to Google's DeepMind, which was driven by profit. The team relied on massive computing power provided by GPUs—originally designed for video games—to train neural networks, recruiting top talent like Sutskever to lead their scientific efforts. NUMBER 13 1955

The John Batchelor Show
S8 Ep271: SHOW 12-2-2026 THE SHOW BEGINS WITH DOUBTS ABOUT AI -- a useful invention that can match the excitement of the first decades of Photography. November 1955 NADAR'S BALLOON AND THE BIRTH OF PHOTOGRAPHY Colleague Anika Burgess, Flashes of Brilli

The John Batchelor Show

Play Episode Listen Later Jan 3, 2026 6:22


SHOW 12-2-2026 THE SHOW BEGINS WITH DOUBTS ABOUT AI -- a useful invention that can match the excitement of the first decades of Photography.
November 1955 NADAR'S BALLOON AND THE BIRTH OF PHOTOGRAPHY Colleague Anika Burgess, Flashes of Brilliance. In 1863, the photographer Nadar undertook a perilous ascent in a giant balloon to fund experiments for heavier-than-air flight, illustrating the adventurous spirit required of early photographers. This era began with Daguerre's 1839 introduction of the daguerreotype, a process involving highly dangerous chemicals like mercury and iodine to create unique, mirror-like images on copper plates. Pioneers risked their lives using explosive materials to capture reality with unprecedented clarity and permanence.
NUMBER 1 PHOTOGRAPHING THE MOON AND SEA Colleague Anika Burgess, Flashes of Brilliance. Early photography expanded scientific understanding, allowing humanity to visualize the inaccessible. James Nasmyth produced realistic images of the moon by photographing plaster models based on telescope observations, aiming to prove its volcanic nature. Simultaneously, Louis Boutan spent a decade perfecting underwater photography, capturing divers in hard-hat helmets. These efforts demonstrated that photography could be a tool for scientific analysis and discovery, revealing details of the natural world previously hidden from the human eye.
NUMBER 2 SOCIAL JUSTICE AND NATURE CONSERVATION Colleague Anika Burgess, Flashes of Brilliance. Photography became a powerful agent for social and environmental change. Jacob Riis utilized dangerous flash powder to document the squalid conditions of Manhattan tenements, exposing poverty to the public in How the Other Half Lives. While his methods raised consent issues, they illuminated grim realities. Conversely, Carleton Watkins hauled massive equipment into the wilderness to photograph Yosemite; his majestic images influenced legislation signed by Lincoln to protect the land, proving photography's political impact.
NUMBER 3 X-RAYS, SURVEILLANCE, AND MOTION Colleague Anika Burgess, Flashes of Brilliance. The discovery of X-rays in 1895 sparked a "new photography" craze, though the radiation caused severe injuries to early practitioners and subjects. Photography also entered the realm of surveillance; British authorities used hidden cameras to photograph suffragettes, while doctors documented asylum patients without consent. Finally, Eadweard Muybridge's experiments captured horses in motion, settling debates about locomotion and laying the technical groundwork for the future development of motion pictures.
NUMBER 4 THE AWAKENING OF CHINA'S ECONOMY Colleague Anne Stevenson-Yang, Wild Ride. Returning to China in 1994, the author witnessed a transformation from the destitute, Maoist uniformity of 1985 to a budding export economy. In the earlier era, workers slept on desks and lacked basic goods, but Deng Xiaoping's realization that the state needed hard currency prompted reforms. Deng established Special Economic Zones like Shenzhen to generate foreign capital while attempting to isolate the population from foreign influence, marking the start of China's export boom.
NUMBER 5 RED CAPITALISTS AND SMUGGLERS Colleague Anne Stevenson-Yang, Wild Ride. Following the 1989 Tiananmen crackdown, China reopened to investment in 1992, giving rise to "red capitalists"—often the children of party officials who traded political access for equity. As the central government lost control over local corruption and smuggling rings, it launched "Golden Projects" to digitize and centralize authority over customs and taxes. To avert a banking collapse in 1998, the state created asset management companies to absorb bad loans, effectively rolling over massive debt.
NUMBER 6 GHOST CITIES AND THE STIMULUS TRAP Colleague Anne Stevenson-Yang, Wild Ride. China's growth model shifted toward massive infrastructure spending, resulting in "ghost cities" and replica Western towns built to inflate GDP rather than house people. This "Potemkin culture" peaked during the 2008 Olympics, where facades were painted to impress foreigners. To counter the global financial crisis, Beijing flooded the economy with loans, fueling a real estate bubble that consumed more cement in three years than the US did in a century, creating unsustainable debt.
NUMBER 7 STAGNATION UNDER SURVEILLANCE Colleague Anne Stevenson-Yang, Wild Ride. The severe lockdowns of the COVID-19 pandemic shattered consumer confidence, leaving citizens insecure and unwilling to spend, which stalled economic recovery. Local governments, cut off from credit and burdened by debt, struggle to provide basic services. Faced with economic stagnation, Xi Jinping has rejected market liberalization in favor of increased surveillance and control, prioritizing regime security over resolving the structural debt crisis or restoring the dynamism of previous decades.
NUMBER 8 FAMINE AND FLIGHT TO FREEDOM Colleague Mark Clifford, The Troublemaker. Jimmy Lai was born into a wealthy family that lost everything to the Communist revolution, forcing his father to flee to Hong Kong while his mother endured labor camps. Left behind, Lai survived as a child laborer during a devastating famine where he was perpetually hungry. A chance encounter with a traveler who gave him a chocolate bar inspired him to escape to Hong Kong, the "land of chocolate," stowing away on a boat at age twelve.
NUMBER 9 THE FACTORY GUY Colleague Mark Clifford, The Troublemaker. By 1975, Jimmy Lai had risen from a child laborer to a factory owner, purchasing a bankrupt garment facility using stock market profits. Despite being a primary school dropout who learned English from a dictionary, Lai succeeded through relentless work and charm. He capitalized on the boom in American retail sourcing, winning orders from Kmart by producing samples overnight and eventually building Comitex into a leading sweater manufacturer, embodying the Hong Kong dream.
NUMBER 10 CONSCIENCE AND CONVERSION Colleague Mark Clifford, The Troublemaker. The 1989 Tiananmen Square massacre radicalized Lai, who transitioned from textiles to media, founding Next magazine and Apple Daily to champion democracy. Realizing the brutality of the Chinese Communist Party, he used his wealth to support the student movement and expose regime corruption. As the 1997 handover approached, Lai converted to Catholicism, influenced by his wife and pro-democracy peers, seeking spiritual protection and a moral anchor against the coming political storm.
NUMBER 11 PRISON AND LAWFARE Colleague Mark Clifford, The Troublemaker. Following the 2020 National Security Law, authorities raided Apple Daily, froze its assets, and arrested Lai, forcing the newspaper to close. Despite having the means to flee, Lai chose to stay and face imprisonment as a testament to his principles. Now held in solitary confinement, he is subjected to "lawfare"—sham legal proceedings designed to silence him—while he spends his time sketching religious images, remaining a symbol of resistance against Beijing's tyranny.
NUMBER 12 FOUNDING OPENAI Colleague Keach Hagey, The Optimist. In 2016, Sam Altman, Greg Brockman, and Ilya Sutskever founded OpenAI as a nonprofit research lab to develop safe artificial general intelligence (AGI). Backed by investors like Elon Musk and Peter Thiel, the organization aimed to be a counterweight to Google's DeepMind, which was driven by profit. The team relied on massive computing power provided by GPUs—originally designed for video games—to train neural networks, recruiting top talent like Sutskever to lead their scientific efforts.
NUMBER 13 THE ROOTS OF AMBITION Colleague Keach Hagey, The Optimist. Sam Altman grew up in St. Louis, the son of an idealistic developer and a driven dermatologist mother who instilled ambition and resilience in her children. Altman attended the progressive John Burroughs School, where his intellect and charisma flourished, allowing him to connect with people on any topic. Though he was a tech enthusiast, his ability to charm others defined him early on, foreshadowing his future as a master persuader in Silicon Valley.
NUMBER 14 SILICON VALLEY KINGMAKER Colleague Keach Hagey, The Optimist. At Stanford, Altman co-founded Loopt, a location-sharing app that won him a meeting with Steve Jobs and a spot in the App Store launch. While Loopt was not a commercial success, the experience taught Altman that his true talent lay in investing and spotting future trends rather than coding. He eventually succeeded Paul Graham as president of Y Combinator, becoming a powerful figure in Silicon Valley who could convince skeptics like Peter Thiel to back his visions.
NUMBER 15 THE BLIP AND THE FUTURE Colleague Keach Hagey, The Optimist. The viral success of ChatGPT shifted OpenAI's focus from safety to commercialization, despite early internal warnings about the existential risks of AGI. Tensions over safety and Altman's management style led to a "blip" where the nonprofit board fired him, only for him to be quickly reinstated due to employee loyalty. Elon Musk, having lost a power struggle for control of the organization, severed ties, leaving Altman to lead the race toward AGI.
NUMBER 16

Danny In The Valley
Holiday Special! Part 2: Has AI already taken your job?

Danny In The Valley

Play Episode Listen Later Jan 1, 2026 47:57


Continuing our big name-dropping look-back with Sam Altman, Lisa Su, Sebastian Siemiatkowski, Satya Nadella, Matthew Prince, Arthur Mensch, Sir Demis Hassabis, Marc Benioff, and Dario Amodei, this is the second special Christmas edition of the pod – and this time we're looking at what we've learned about the impact of AI on the real world since the Tech Pod started in October 2024 with DeepMind's Sir Demis Hassabis. From robotaxis to listening pendants: what does AI look like in real life? How is it being used in business? And will there be any jobs left? Hosted on Acast. See acast.com/privacy for more information.

Moviewallas
Episode 584 – Marty Supreme / The Testament of Ann Lee

Moviewallas

Play Episode Listen Later Dec 20, 2025 32:17


Moviewallas Episode 584
Moviewallas is on YouTube! Welcome back to Moviewallas, your weekly home for thoughtful film reviews, movie news, and lively banter. In Episode 584, Joe, Rashmi, and Yazdi cover two feature films — one a hyper-stylized period sports drama, the other a spiritual documentary — along with three standout shorts and a few offhand streaming picks.
Streaming Picks
Quick takes on what's worth checking out when you're short on time but craving something new.
Left-Handed Girl
A haunting, understated movie about isolation and subtle power shifts. Quiet but potent.
Bad Shabbos
A funny, chaotic, culturally specific film that mines family tension for full cringe and full laughs.
The Thinking Game
An elegant, intimate documentary from DeepMind about human-AI collaboration and the nature of thought itself. Watch it here:
Marty Supreme
Directed by Josh Safdie, this highly stylized table tennis drama stars Timothée Chalamet as a rising sports star in 1950s New York. Between its dazzling craft and period detail, it raises great conversation about competition, image, and identity.
The Testament of Ann Lee
This meditative documentary explores the history and philosophy of Ann Lee, the founder of the Shakers. With a calm pace and introspective tone, the film looks at belief systems, community, and control.
PLUS…
Movie-watching marathons in hotel rooms
Rashmi's full immersion strategy
Why Joe taps out after movie #3
Yazdi channels Roger Ebert
Like, comment, and subscribe if you love smart, spoiler-free film conversations with genuine banter and discovery. Seen The Thinking Game? Let us know what you thought!
Hosted by: Joe, Rashmi & Yazdi
Watch on YouTube or wherever you get your podcasts
Follow us on Instagram and Twitter: @moviewallas
www.moviewallas.com
Timestamps
00:00 – Start
02:32 – Streaming Picks
03:14 – Left-Handed Girl
04:37 – Bad Shabbos
06:17 – The Thinking Game https://youtu.be/d95J8yzvjbQ
09:59 – Marty Supreme (dir. Josh Safdie)
22:16 – The Testament of Ann Lee
#Moviewallas #MoviePodcast #MartySupreme #JoshSafdie #TimotheeChalamet #TestamentOfAnnLee #ShortFilms #TheThinkingGame #DeepMind #DocumentaryFilm #AIandCreativity #FilmReview #TooManyMoviesTooLittleTime

Drive With Andy
TFS#246 - Chris Szegedy on Co-Founding xAI with Elon Musk & the Future of Truth-Seeking AI

Drive With Andy

Play Episode Listen Later Dec 19, 2025 86:41


Christian Szegedy is a renowned AI researcher and entrepreneur known for his contributions to deep learning. He spent over 12 years at Google, advancing large-scale AI systems in computer vision, deep learning, and formal reasoning, and became recognized for his pioneering contributions to adversarial machine learning. Later, he co-founded xAI with Elon Musk and is now Chief Scientist at Morph Labs and founder of Math Incorporated. He focuses on verified superintelligence, aiming to bring mathematical rigor and formal verification to AI for greater reliability, safety, and trust.
https://www.linkedin.com/in/christian-szegedy-bb284816
https://x.com/ChrSzegedy
https://www.math.inc/
CHAPTERS:
0:00 – Introduction
1:20 – Meet Christian Szegedy
2:32 – Why Chris left xAI and the idea of "verified superintelligence"
3:07 – Using AI to formally verify mathematical proofs
5:41 – Why Chris spun out of Morph Labs to start Math Incorporated
6:48 – How new Math Incorporated is and what stage the company is at
8:09 – How verified AI could impact everyday AI use
10:07 – Multi-step AI verification workflow
11:56 – How ChatGPT decides what a "good" response is
12:42 – Was Chris always this technical and math-focused?
13:40 – Chris' family background and his mathematically gifted brothers
14:16 – Chris talks about his son, who loves mathematics at a young age
15:00 – How he teaches his son about math and coding without overusing AI
15:57 – What happens when humans cognitively offload everything to AI
18:03 – Will AI eliminate jobs and lead to universal basic income?
20:20 – Career advice for people in their 30s in an AI-driven economy
22:55 – Are people becoming allergic to AI-generated content?
24:22 – Adversarial AI: How to verify whether content is real or AI-generated
25:39 – How the Pope used AI to generate a tweet about AI
26:45 – Why Chris is deeply passionate about formal verification
27:29 – Formal verification in simple terms
32:21 – Is AI smart enough today to reason from axioms?
34:38 – Why formal verification exists but isn't widely adopted
36:43 – The biggest bottlenecks slowing automated verification
38:48 – What's limiting AI from verifying math papers instantly?
40:40 – Current team at Math Incorporated
41:31 – Chris' hiring philosophy and working with young talent
43:24 – Co-founding xAI with Elon Musk
45:50 – Why meetings with Elon Musk were so long
48:06 – How technically deep Elon Musk really is
49:22 – Key lessons Chris learned working closely with Elon
50:59 – Elon Musk's goal for xAI
51:32 – Should people still learn to code in the age of AI?
52:45 – The best programming languages to start with today
53:55 – Learning just enough code to avoid being "blind"
55:35 – Using AI to automate podcast clip distribution
56:18 – How Chris personally uses AI on a day-to-day basis
58:11 – Where Grok AI scrapes their data
59:09 – What most people misunderstand about the next 5 yrs of AI
1:00:35 – AI integration into the real world and robotics
1:03:06 – Formal vs. informal AI and truth-seeking systems
1:04:08 – Can formal AI help prevent unsafe or deceptive AI?
1:05:58 – Chris' thoughts on Elon Musk's goal for the most truth-seeking AI
1:06:33 – Chris shares some specifications he is pushing right now
1:07:18 – The limits of formalizing concepts like "cat detection"
1:11:17 – What is chip and chip verification?
1:13:39 – What was the first chip ever made?
1:16:02 – How logic gates shrank from calculators to iPhones
1:17:23 – Chris shares where the world's most advanced chips are made
1:19:38 – Chris talks about living in America vs. Europe and returning to Hungary
1:20:20 – Why Chris hasn't started a company with his brothers
1:21:22 – Will AI be winner-takes-all or stay competitive?
1:22:23 – How formal AI competes with informal reasoning models
1:22:58 – Chris talks about DeepMind by Google
1:24:30 – Chris' recent life discoveries
1:25:10 – Chris' personal goals for the next 6 months
1:25:35 – Connect with Chris
1:26:18 – Outro

The MAD Podcast with Matt Turck
DeepMind Gemini 3 Lead: What Comes After "Infinite Data"

The MAD Podcast with Matt Turck

Play Episode Listen Later Dec 18, 2025 54:56


Gemini 3 was a landmark frontier model launch in AI this year — but the story behind its performance isn't just about adding more compute. In this episode, I sit down with Sebastian Borgeaud, a pre-training lead for Gemini 3 at Google DeepMind and co-author of the seminal RETRO paper. In his first-ever podcast interview, Sebastian takes us inside the lab mindset behind Google's most powerful model — what actually changed, and why the real work today is no longer "training a model," but building a full system. We unpack the "secret recipe" idea — the notion that big leaps come from better pre-training and better post-training — and use it to explore a deeper shift in the industry: moving from an "infinite data" era to a data-limited regime, where curation, proxies, and measurement matter as much as web-scale volume. Sebastian explains why scaling laws aren't dead, but evolving, why evals have become one of the hardest and most underrated problems (including benchmark contamination), and why frontier research is increasingly a full-stack discipline that spans data, infrastructure, and engineering as much as algorithms. From the intuition behind Deep Think, to the rise (and risks) of synthetic data loops, to the future of long-context and retrieval, this is a technical deep dive into the physics of frontier AI. We also get into continual learning — what it would take for models to keep updating with new knowledge over time, whether via tools, expanding context, or new training paradigms — and what that implies for where foundation models are headed next. If you want a grounded view of pre-training in late 2025 beyond the marketing layer, this conversation is a blueprint.
Google DeepMind
Website - https://deepmind.google
X/Twitter - https://x.com/GoogleDeepMind
Sebastian Borgeaud
LinkedIn - https://www.linkedin.com/in/sebastian-borgeaud-8648a5aa/
X/Twitter - https://x.com/borgeaud_s
FIRSTMARK
Website - https://firstmark.com
X/Twitter - https://twitter.com/FirstMarkCap
Matt Turck (Managing Director)
Blog - https://mattturck.com
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://twitter.com/mattturck
(00:00) – Cold intro: "We're ahead of schedule" + AI is now a system
(00:58) – Oriol's "secret recipe": better pre- + post-training
(02:09) – Why AI progress still isn't slowing down
(03:04) – Are models actually getting smarter?
(04:36) – Two–three years out: what changes first?
(06:34) – AI doing AI research: faster, not automated
(07:45) – Frontier labs: same playbook or different bets?
(10:19) – Post-transformers: will a disruption happen?
(10:51) – DeepMind's advantage: research × engineering × infra
(12:26) – What a Gemini 3 pre-training lead actually does
(13:59) – From Europe to Cambridge to DeepMind
(18:06) – Why he left RL for real-world data
(20:05) – From Gopher to Chinchilla to RETRO (and why it matters)
(20:28) – "Research taste": integrate or slow everyone down
(23:00) – Fixes vs moonshots: how they balance the pipeline
(24:37) – Research vs product pressure (and org structure)
(26:24) – Gemini 3 under the hood: MoE in plain English
(28:30) – Native multimodality: the hidden costs
(30:03) – Scaling laws aren't dead (but scale isn't everything)
(33:07) – Synthetic data: powerful, dangerous
(35:00) – Reasoning traces: what he can't say (and why)
(37:18) – Long context + attention: what's next
(38:40) – Retrieval vs RAG vs long context
(41:49) – The real boss fight: evals (and contamination)
(42:28) – Alignment: pre-training vs post-training
(43:32) – Deep Think + agents + "vibe coding"
(46:34) – Continual learning: updating models over time
(49:35) – Advice for researchers + founders
(53:35) – "No end in sight" for progress + closing
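The chapter list above flags "MoE in plain English." For readers who want the plainest possible version, here is a minimal, self-contained sketch of top-k expert routing, the core idea behind mixture-of-experts layers. This is a generic illustration in NumPy, not DeepMind's or Gemini's implementation; the expert count, top-k value, and toy expert functions are invented for the example.

```python
# Illustrative sketch only: a generic top-k mixture-of-experts (MoE) routing layer.
# NOT Gemini's architecture; sizes (4 experts, top-2, d_model=8) are made up for the demo.
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def moe_layer(tokens, w_gate, experts, top_k=2):
    """Route each token to its top_k experts and mix their outputs by gate weight."""
    logits = tokens @ w_gate                      # (n_tokens, n_experts) routing scores
    probs = softmax(logits)
    top = np.argsort(-probs, axis=-1)[:, :top_k]  # indices of the k best experts per token
    out = np.zeros_like(tokens)
    for t in range(tokens.shape[0]):
        weights = probs[t, top[t]]
        weights = weights / weights.sum()         # renormalize over the selected experts
        for w, e_idx in zip(weights, top[t]):
            out[t] += w * experts[e_idx](tokens[t])  # only k experts run per token
    return out

d_model, n_experts = 8, 4
experts = [(lambda W: (lambda x: np.tanh(W @ x)))(rng.normal(size=(d_model, d_model)))
           for _ in range(n_experts)]
w_gate = rng.normal(size=(d_model, n_experts))
tokens = rng.normal(size=(5, d_model))
print(moe_layer(tokens, w_gate, experts).shape)   # (5, 8): same output shape, sparse compute
```

The appeal of the design is that each token only pays for k experts' worth of compute per layer, while the model's total parameter count grows with the number of experts.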

Let's Talk AI
#228 - GPT 5.2, Scaling Agents, Weird Generalization

Let's Talk AI

Play Episode Listen Later Dec 17, 2025 86:42


Our 228th episode with a summary and discussion of last week's big AI news!
Recorded on 12/12/2025
Hosted by Andrey Kurenkov and Jeremie Harris
Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai
Read out our text newsletter and comment on the podcast at https://lastweekin.ai/
In this episode:
OpenAI's latest model GPT-5.2 demonstrates improved performance and enhanced multi-modal capabilities but comes with increased costs and a different knowledge cutoff date.
Disney invests $1 billion in OpenAI to generate Disney character content, creating unique licensing agreements across characters from Marvel, Pixar, and Star Wars franchises.
The U.S. government imposes new AI chip export rules involving security reviews, while simultaneously moving to prevent states from independently regulating AI.
DeepMind releases a paper outlining the challenges and findings in scaling multi-agent systems, highlighting the complexities of tool coordination and task performance.
Timestamps:
(00:00:00) Intro / Banter
(00:01:19) News Preview
Tools & Apps
(00:01:58) GPT-5.2 is OpenAI's latest move in the agentic AI battle | The Verge
(00:08:48) Runway releases its first world model, adds native audio to latest video model | TechCrunch
(00:11:51) Google says it will link to more sources in AI Mode | The Verge
(00:12:24) ChatGPT can now use Adobe apps to edit your photos and PDFs for free | The Verge
(00:13:05) Tencent releases Hunyuan 2.0 with 406B parameters
Applications & Business
(00:16:15) China set to limit access to Nvidia's H200 chips despite Trump export approval
(00:21:02) Disney investing $1 billion in OpenAI, will allow characters on Sora
(00:24:48) Unconventional AI confirms its massive $475M seed round
(00:29:06) Slack CEO Denise Dresser to join OpenAI as chief revenue officer | TechCrunch
(00:31:18) The state of enterprise AI
Projects & Open Source
(00:33:49) [2512.10791] The FACTS Leaderboard: A Comprehensive Benchmark for Large Language Model Factuality
(00:36:27) Claude 4.5 Opus' Soul Document
Research & Advancements
(00:43:49) [2512.08296] Towards a Science of Scaling Agent Systems
(00:48:43) Evaluating Gemini Robotics Policies in a Veo World Simulator
(00:52:10) Guided Self-Evolving LLMs with Minimal Human Supervision
(00:56:08) Martingale Score: An Unsupervised Metric for Bayesian Rationality in LLM Reasoning
(01:00:39) [2512.07783] On the Interplay of Pre-Training, Mid-Training, and RL on Reasoning Language Models
(01:04:42) Stabilizing Reinforcement Learning with LLMs: Formulation and Practices
(01:09:42) Google's AI unit DeepMind announces UK 'automated research lab'
Policy & Safety
(01:10:28) Trump Moves to Stop States From Regulating AI With a New Executive Order - The New York Times
(01:13:54) [2512.09742] Weird Generalization and Inductive Backdoors: New Ways to Corrupt LLMs
(01:17:57) Forecasting AI Time Horizon Under Compute Slowdowns
(01:20:46) AI Security Institute focuses on AI measurements and evaluations
(01:21:16) Nvidia AI Chips to Undergo Unusual U.S. Security Review Before Export to China
(01:22:01) U.S. Authorities Shut Down Major China-Linked AI Tech Smuggling Network
Synthetic Media & Art
(01:24:01) RSL 1.0 has arrived, allowing publishers to ask AI companies pay to scrape content | The Verge
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

The Road to Autonomy
Episode 355 | Self-Driving on 33 Watts: How HYPR Labs Trained a Model for Just $850

The Road to Autonomy

Play Episode Listen Later Dec 16, 2025 36:46


Tim Kentley Klay, CEO & co-founder of HYPRLABS, joined Grayson Brulte on The Road to Autonomy podcast to discuss how the company is achieving autonomous driving in downtown San Francisco using just 33 watts of compute and zero simulation or HD maps. By prioritizing "learning velocity," HYPR utilizes an end-to-end neural network that learns continuously from real-world driving data, avoiding the structural noise injected by classical simulation and hand-coded heuristics. While the industry often relies on massive engineering teams and brute-force compute, HYPRLABS is executing a high-efficiency strategy with a team of just four engineers and a foundational model trained for only $850. Drawing inspiration from DeepMind's AlphaZero, the company allows the AI to model the environment without predefined rules, using their autonomous vehicle fleet as a validation platform for a new category of robots launching next year.
Episode Chapters
0:00 Introduction to HYPRDRIVE
1:30 HYPRDRIVE
5:40 Learning Velocity
8:10 Building HYPR
12:23 Training the System
18:55 The Origins of the HYPR Approach
21:36 Building Trust
23:35 Simulation
27:07 $850 to Train the Model
30:44 HYPR Robots
33:22 Cameras
35:16 What's Next
Recorded on Sunday, December 14, 2025
--------
About The Road to Autonomy
The Road to Autonomy provides market intelligence and strategic advisory services to institutional investors and companies, delivering insights needed to stay ahead of emerging trends in the autonomy economy™. To learn more, say hello (at) roadtoautonomy.com.
Sign up for This Week in The Autonomy Economy newsletter: https://www.roadtoautonomy.com/ae/
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Let's Talk AI
#227 - Jeremie is back! DeepSeek 3.2, TPUs, Nested Learning

Let's Talk AI

Play Episode Listen Later Dec 9, 2025 94:40


Our 227th episode with a summary and discussion of last week's big AI news!
Recorded on 12/05/2025
Hosted by Andrey Kurenkov and Jeremie Harris
Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai
Read out our text newsletter and comment on the podcast at https://lastweekin.ai/
In this episode:
Deep Seek 3.2 and Flux 2 release, showcasing advancements in open-source AI models for natural language processing and image generation respectively.
Amazon's new AI chips and Google's TPUs signal potential shifts in AI hardware dominance, with growing competition against Nvidia.
Anthropic's potential IPO and OpenAI's declared 'Code Red' indicate significant moves in the AI business landscape, including high venture funding rounds for startups.
Key research papers from DeepMind and Google explore advanced memory architectures and multi-agent systems, indicating ongoing efforts to enhance AI reasoning and efficiency.
Timestamps:
(00:00:10) Intro / Banter
(00:02:42) News Preview
Tools & Apps
(00:03:30) Deepseek 3.2: New AI Model is Faster, Cheaper and Smarter
(00:23:22) Black Forest Labs launches Flux.2 AI image models to challenge Nano Banana Pro and Midjourney
(00:28:00) Sora and Nano Banana Pro throttled amid soaring demand | The Verge
(00:29:34) Mistral closes in on Big AI rivals with new open-weight frontier and small models | TechCrunch
(00:31:41) Kling's Video O1 launches as the first all-in-one video model for generation and editing
(00:34:07) Runway rolls out Gen 4.5 AI video model that beats Google, OpenAI
Applications & Business
(00:35:18) NVIDIA's Partners Are Beginning to Tilt Toward Google's TPU Ecosystem, with Foxconn Reportedly Securing TPU Rack Orders
(00:40:37) Amazon releases an impressive new AI chip and teases an Nvidia-friendly roadmap | TechCrunch
(00:43:03) OpenAI declares 'code red' as Google catches up in AI race | The Verge
(00:46:20) Anthropic reportedly preparing for massive IPO in race with OpenAI: FT
(00:48:41) Black Forest Labs raises $300M at $3.25B valuation | TechCrunch
(00:49:20) Paris-based AI voice startup Gradium nabs $70M seed | TechCrunch
(00:50:10) OpenAI announced a 1 GW Stargate cluster in Abu Dhabi
(00:53:22) OpenAI's investment into Thrive Holdings is its latest circular deal
(00:55:11) OpenAI to acquire Neptune, an AI model training assistance startup
(00:56:11) Anthropic acquires developer tool startup Bun to scale AI coding
(00:56:55) Microsoft drops AI sales targets in half after salespeople miss their quotas - Ars Technica
Projects & Open Source
(00:57:51) [2511.22570] DeepSeekMath-V2: Towards Self-Verifiable Mathematical Reasoning
(01:01:52) Evo-Memory: Benchmarking LLM Agent Test-time Learning with Self-Evolving Memory
Research & Advancements
(01:05:44) Nested Learning: The Illusion of Deep Learning Architecture
(01:13:30) Multi-Agent Deep Research: Training Multi-Agent Systems with M-GRPO
(01:15:50) State of AI: An Empirical 100 Trillion Token Study with OpenRouter
Policy & Safety
(01:21:52) Trump signs executive order launching Genesis Mission AI project
(01:24:42) OpenAI has trained its LLM to confess to bad behavior | MIT Technology Review
(01:29:34) US senators seek to block Nvidia sales of advanced chips to China
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Science Friday
How Alphafold Has Changed Biology Research, 5 Years On

Science Friday

Play Episode Listen Later Nov 18, 2025 18:08


Proteins are crucial for life. They're made of amino acids that “fold” into millions of different shapes. And depending on their structure, they do radically different things in our cells. For a long time, predicting those shapes for research was considered a grand biological challenge.But in 2020, Google's AI lab DeepMind released Alphafold, a tool that was able to accurately predict many of the structures necessary for understanding biological mechanisms in a matter of minutes. In 2024, the Alphafold team was awarded a Nobel Prize in chemistry for the advance.Five years later after its release, Host Ira Flatow checks in on the state of that tech and how it's being used in health research with John Jumper, one of the lead scientists responsible for developing Alphafold.Guest: John Jumper, scientist at Google Deepmind and co-recipient of the 2024 Nobel Prize in chemistry.Transcripts for each episode are available within 1-3 days at sciencefriday.com. Subscribe to this podcast. Plus, to stay updated on all things science, sign up for Science Friday's newsletters.
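For listeners who want to poke at the predictions discussed in the episode, AlphaFold's precomputed structures are served through the public AlphaFold Protein Structure Database (alphafold.ebi.ac.uk). Below is a minimal sketch of fetching one prediction by UniProt accession; the exact endpoint path and the "pdbUrl" JSON field are assumptions based on that database's documented REST API, so check them against the current API docs before relying on this.

```python
# Illustrative sketch: download an AlphaFold-predicted structure from the public
# AlphaFold Protein Structure Database. The endpoint path and JSON key ("pdbUrl")
# are assumptions to verify against the database's API documentation.
import requests

def fetch_alphafold_prediction(uniprot_accession: str) -> str:
    """Return the predicted-structure PDB text for a UniProt accession."""
    meta_url = f"https://alphafold.ebi.ac.uk/api/prediction/{uniprot_accession}"
    meta = requests.get(meta_url, timeout=30)
    meta.raise_for_status()
    entry = meta.json()[0]                # the API returns a list of model entries
    pdb = requests.get(entry["pdbUrl"], timeout=30)
    pdb.raise_for_status()
    return pdb.text

if __name__ == "__main__":
    # P69905 (human hemoglobin subunit alpha) is used purely as an example accession.
    structure = fetch_alphafold_prediction("P69905")
    print(structure.splitlines()[0])      # first record of the predicted model file
```

The downloaded model file also carries AlphaFold's per-residue confidence (pLDDT) in the B-factor column, which is how most downstream visualization tools read it.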

Daily Tech News Show
Everything Is Changing at Apple - DTNS 5147

Daily Tech News Show

Play Episode Listen Later Nov 17, 2025 24:12


Our thoughts on the Tim Cook succession and the possible new iPhone release schedule. Plus, DeepMind gets better at weather and the Tilly Norwood people are back at it.Starring Tom Merritt and Robb Dunewood.Show notes can be found here. Hosted on Acast. See acast.com/privacy for more information.