Podcasts about Yann LeCun

  • 223 podcasts
  • 421 episodes
  • 50m average duration
  • 5 weekly new episodes
  • Latest: Feb 25, 2026



Best podcasts about Yann LeCun

Latest podcast episodes about Yann LeCun

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Editor's note: CuspAI raised a $100m Series A in September and is rumored to have reached a unicorn valuation. They have all-star advisors, from Geoff Hinton to Yann LeCun, and a team of deep domain experts to tackle this next frontier in AI applications.

In this episode, Max Welling traces the thread connecting quantum gravity, equivariant neural networks, diffusion models, and climate-focused materials discovery (yes, there is one!).

We begin with a provocative framing: experiments as computation. Welling describes the idea of a "physics processing unit": a world in which digital models and physical experiments work together, with nature itself acting as a kind of processor. It's a grounded but ambitious vision of AI for science: not replacing chemists, but accelerating them.

Along the way, we discuss:
* Why symmetry and equivariance matter in deep learning
* The tradeoff between scale and inductive bias
* The deep mathematical links between diffusion models and stochastic thermodynamics
* Why materials, not software, may be the real bottleneck for AI and the energy transition
* What it actually takes to build an AI-driven materials platform

Max reflects on moving from curiosity-driven theoretical physics (including work with Gerard 't Hooft) toward impact-driven research in climate and energy.
The result is a conversation about convergence: physics and machine learning, digital models and laboratory experiments, long-term ambition and incremental progress.

Full Video Episode

Timestamps

* 00:00:00 – The Physics Processing Unit (PPU): Nature as the Ultimate Computer. Max introduces the idea of a Physics Processing Unit: using real-world experiments as computation.
* 00:00:44 – From Quantum Gravity to AI for Materials. Brandon frames Max's career arc: VAE pioneer → equivariant GNNs → materials startup founder.
* 00:01:34 – Curiosity vs Impact: How His Motivation Evolved. Max explains the shift from pure theoretical curiosity to climate-driven impact.
* 00:02:43 – Why CuspAI Exists: Technology as Climate Strategy. Politics struggles; technology scales. Why materials innovation became the focus.
* 00:03:39 – The Thread: Physics → Symmetry → Machine Learning. How gauge symmetry, group theory, and relativity informed equivariant neural networks.
* 00:06:52 – AI for Science Is Exploding (Not Emerging). The funding surge and why AI for science feels like a new industrial era.
* 00:07:53 – Why Now? The Two Catalysts Behind AI for Science. Protein folding, ML force fields, and the tipping-point moment.
* 00:10:12 – How Engineers Can Enter AI for Science. Practical pathways: curricula, workshops, cross-disciplinary training.
* 00:11:28 – Why Materials Matter More Than Software. The argument that everything, LLMs included, rests on materials innovation.
* 00:13:02 – Materials as a Search Engine. The vision: automated exploration of chemical space, like querying Google.
* 00:14:48 – Inside CuspAI: The Platform Architecture. Generative models + multi-scale digital twin + experiment loop.
* 00:21:17 – Automating Chemistry: Human-in-the-Loop First. Start manual → modular tools → agents → increasing autonomy.
* 00:25:04 – Moonshots vs Incremental Wins. Balancing lighthouse materials with paid partnerships.
* 00:26:22 – Why Breakthroughs Will Still Require Humans. Automation is vertical-specific and iterative.
* 00:29:01 – What Is Equivariance (In Plain English)? Symmetry in neural networks, explained with the bottle example.
* 00:30:01 – Why Not Just Use Data Augmentation? The optimization trade-off between inductive bias and data scale.
* 00:31:55 – Generative AI Meets Stochastic Thermodynamics. His upcoming book and the unification of diffusion models and physics.
* 00:33:44 – When the Book Drops (ICLR?)

Transcript

Max: I want to think of it as what I would call a physics processing unit, like a PPU, right? You have digital processing units, and then you have physics processing units. So it's basically nature doing computations for you. It's the fastest computer known, the fastest possible even. It's a bit hard to program, because you have to do all these experiments; those are quite bulky, it's a very large thing you have to do. But in a way it is a computation, and that's the way I want to see it. You can do computations in a data center, and then you can ask nature to do some computations. Your interface with nature is a bit more complicated.
But then these things will have to seamlessly work together to get to a new material that you're interested in.

[01:00:44:14 - 01:01:34:08]
Brandon: Yeah, it's a pleasure to have Max Welling as a guest today. Max has done so much over his career that I've been so excited about. If you're in the deep learning community, you probably know Max for his work on variational autoencoders, which has literally stood the test of time. If you're a scientist, you probably know him for his pioneering work on graph neural networks and equivariance. And if you're in materials science, you probably know him for his new startup, CuspAI. Max has a long history of working on lots of cool problems. You started in quantum gravity, which I think is very different from all of these other things you've worked on. The first question, for AI engineers and for scientists: what is the thread in how you think about problems? What is the thread in the type of things which excite you? And how do you decide what is the next big thing you want to work on?

[01:01:34:08 - 01:02:41:13]
Max: So it has actually evolved a lot. In my younger days, let's say, I would just follow what I would find super interesting. I have this kind of sensor, which I think many people have but maybe don't really use very much, where you get this feeling of being very excited about some problem. It could be: what's inside of a black hole, or what's at the boundary of the universe, or what is quantum mechanics actually all about? And so I followed that basically throughout my career. But I have to say that as you get older, this changes a little bit, in the sense that a new dimension comes into it, which is impact. Working in two-dimensional quantum gravity, you're pretty much guaranteed there's going to be no impact from what you do, maybe a few papers, but not in this world, at this energy scale.
As I get closer to retirement, which is fortunately still 10 years away or so, I do want to make a positive impact in the world. And I got pretty worried about climate change.

[01:02:43:15 - 01:03:19:11]
Max: I think politics seems to have a hard time solving it, especially these days. And so I thought: better work on it from the technology side. And that's why we started CuspAI. But there are also a lot of really interesting science problems in materials science. So it's combining the impact you can make with the interesting science. It's these two dimensions: working on things where you feel there's something very deep going on, and on the other hand trying to build tools that can actually make a real impact in the world.

[01:03:19:11 - 01:03:39:23]
RJ: So the thread: when I look back at the different things that you've worked on, some of them seem pretty connected, like the physics to equivariance and graph neural networks, maybe. And that seems somewhat related to CuspAI. Do you have a thread through there?

[01:03:39:23 - 01:06:52:16]
Max: Yeah. So physics is the thread. Having spent a lot of time in theoretical physics, I think there are, first, very fundamental and exciting questions, things that haven't actually been figured out in quantum gravity. That is really the frontier. There are also a lot of mathematical tools you can use. In particle physics, for instance, but also in general relativity, symmetries play an enormously important role. And this goes all the way to gauge symmetries as well. And so applying these kinds of symmetries to machine learning was something I thought of as a very deep and interesting mathematical problem.
I did this with Taco Cohen, and Taco was the main driver behind it; it went all the way from simple rotational symmetries to gauge symmetries on spheres and things like that. And Maurice Weiler, who's also here, was a very good PhD student with me; he wrote an entire book, which I can really recommend, about the role of symmetries in AI and machine learning. So I find this a very deep and interesting problem. More recently I've taken a somewhat different path, which is the relationship between diffusion models and the field called stochastic thermodynamics. This is basically thermodynamics, which is a theory of equilibrium, but formulated for out-of-equilibrium systems. And it turns out that the mathematics we use for diffusion models, but also for reinforcement learning, for Schrödinger bridges, for MCMC sampling, is the same mathematics as this physical theory of non-equilibrium systems. And that got me very excited. When I taught a course in Muizenberg, in South Africa close to Cape Town, at the African Institute for Mathematical Sciences (AIMS), I turned it into a book. Two years later the book was finished; I've sent it to the publisher. It's about the deep relationship between free energy, diffusion models, basically generative AI, and stochastic thermodynamics. So it's always some kind of, I don't know, I find physics very deep. I also think a lot about quantum mechanics, and it's a completely weird theory that actually nobody really understands. And there's a very interesting story, which is maybe good to tell, to connect my PhD back to where I am now. I did my PhD with a Nobel laureate, Gerard 't Hooft. He's the most brilliant man I've ever met. He was never wrong about anything, as long as I've known him.
And now he says quantum mechanics is wrong and he has a new theory of quantum mechanics. Nobody understands what he's saying, even though what he's writing down is not mathematically very complex, but he's trying to address this understandability, let's say, of quantum mechanics head on. And I find it very courageous and I'm completely fascinated by it. So I'm also trying to think about: okay, can I actually understand quantum mechanics in a more mundane way? Without all the weird multiverses and collapses and things like that. So physics has always been the thread, and I'm trying to apply the physics to the machine learning to build better algorithms.

[01:06:52:16 - 01:07:05:15]
Brandon: You are still very involved in understanding physics and the world, and not just applications to machine learning or introducing new formalisms. That's really cool.

[01:07:05:15 - 01:07:18:02]
Max: Yes, I would say I'm not contributing much to physics, but I'm contributing to the interface between physics and science. And that's called AI for science, or science for AI; it's actually a new discipline that's emerging.

[01:07:18:02 - 01:07:18:19]
Speaker 5: Yeah.

[01:07:18:19 - 01:07:45:14]
Max: And it's not just emerging, it's exploding, I would say. That's the better term, because investments have gone from the hundreds of millions into the billions now. There's now actually a startup backed by Jeff Bezos with a $6.2 billion seed round. Insane. I guess it's the largest seed round ever, I think. And that's in this field, AI for science. It tells you something, that we are creating a new bubble here.

[01:07:46:15 - 01:07:53:28]
Brandon: So why do you think that is? What has changed that has motivated people to start working on AI-for-science-type problems?

[01:07:53:28 - 01:08:49:17]
Max: So there are two reasons, actually.
One is that people have been applying the new tools from AI to the sciences, which is quite natural. There are, I think, two big examples: protein folding is a big one, and the other is machine learning force fields, also called machine-learned interatomic potentials. Both of them have been very successful. Both also had something to do with symmetries, which is a little cool. And people in the AI sciences saw an opportunity to apply the tools they had developed beyond advertisement placement or multimedia applications, to something that could actually make a very positive impact on society: health, drug development, materials for the energy transition, carbon capture. These are all really cool, impactful applications.

[01:08:50:19 - 01:09:42:14]
Max: Beyond that, the science itself is also very interesting. The fact that these two fields are coming together, and that we're now at the point where we can actually model these things effectively and move the needle on some of these scientific methodologies, is also a very unique moment, I would say. People recognize that, okay, we're at the cusp of something new, which is what the company is named after. We're at the cusp of something new. And of course that always creates a lot of energy. It's a sort of virgin field, a green field; nobody's been there. I can rush in and start harvesting, right? And I think that's also what's causing a lot of enthusiasm in the field.

[01:09:42:14 - 01:10:12:18]
RJ: If you're an AI engineer, basically the people that listen to this podcast, you may be in the field but not have a strong science background, though you are excited.
I would say most AI practitioners, ML engineers or scientists, would consider themselves scientists of a sort, and they have some background: a little bit of physics, a little bit in college, maybe even graduate school, and they're working or just starting out. How does somebody who is not a scientist on a day-to-day basis get involved?

[01:10:12:18 - 01:10:14:28]
Max: Well, they can read my book once it's out.

[01:10:16:07 - 01:11:05:24]
Max: That is basically to say: we should create curricula that sit on this interface. Some universities already have actual courses you can take, and maybe online courses too. These workshops where we are now are actually very good as well. And we should probably have more tutorials before the workshops start; I've actually proposed this at some point: maybe first have an hour of tutorial so that people new to the field can get in. There's a lot out there. Most of it is of course inaccessible, but I would say we will create many more books and other content that is more accessible, including this podcast, I would say. So I think it will come. And these days you can watch videos and things; there's a huge amount of content you can go and see.

[01:11:05:24 - 01:11:28:28]
Brandon: So maybe a follow-up to that. That's how people learn and get involved, but why should they get involved? A lot of our audience will be interested in AI engineering, but they may be looking for bigger impacts on the world. What opportunities does AI for science provide them to make an impact, to change the world, that working in the world of pure bits would not?

[01:11:28:28 - 01:11:40:06]
Max: So my view is that underlying almost everything is a material. We are focusing a lot on LLMs now, which is kind of the software layer.

[01:11:41:06 - 01:11:56:05]
Max: I would say if you think very hard, underlying everything is a material.
So underlying an LLM is a GPU, and underlying a GPU is a wafer on which we have to deposit materials. Do we want to wait a little bit?

[01:12:02:25 - 01:12:11:06]
Max: Underlying everything is a material. So I was saying, there's the LLM; underlying the LLM is a GPU on which it runs. In order to make that GPU,

[01:12:12:08 - 01:12:43:20]
Max: you have to put materials down on a wafer and shine EUV light on it in order to etch the structures in. But that's now an actual materials problem, because we've more or less reached the limits of scaling things down, and now we're trying to improve further with new materials. So that's a fundamental materials problem. We need to get through the energy transition fast if we don't want to mess up this world. So there are, for instance, batteries; that's a complete materials problem. There are fuel cells.

[01:12:44:23 - 01:13:01:16]
Max: There are solar panels. They can now make solar panels with new perovskite layers on top of the silicon layers that can capture, theoretically, up to 50% of the light, where now we're at, I don't know, maybe 22% or something. So these are huge changes, all from materials innovation.

[01:13:02:21 - 01:13:47:15]
Max: And wherever you go, I can probably dig deep enough and then tell you that the very foundation of what you're doing is a materials problem. So I think it's just very nice to work at this very foundation. And also because, and maybe this is something that's happening now, we can start to search through this materials space. This has never been the case, right? The normal way scientists work is: you read papers, you come up with a new hypothesis, you do an experiment and you learn, et cetera. That's a very slow process. Now we can treat this as a search engine.
Like we search the internet, we now search the space of all possible molecules, not just the ones that people have made or that exist in the universe, but all of them.

[01:13:48:21 - 01:14:42:01]
Max: And we can make this fully automated. That's the hope, right? It becomes a tool where you type what you want, something starts spinning, some experiments get going, and then out comes a list of materials. You look at it and say, maybe not, and then you refine your query a little bit. And you do research with this search engine, where a huge amount of computation and experimentation is happening somewhere far away, in some lab or some data center. I find this a very promising view of how we can build a much better materials layer underneath almost everything. And also more sustainable materials. Our plastics are polluting the planet. What if you come up with a plastic that destroys itself after, I don't know, a few weeks, and actually becomes a fertilizer? These things are not impossible at all. They can be done, and we should do it.

[01:14:42:01 - 01:14:47:23]
RJ: Can you tell us a little bit, just generally, about CuspAI? And then I have a ton of questions.

[01:14:47:23 - 01:14:48:15]
Speaker 5: Yeah.

[01:14:48:15 - 01:17:49:10]
Max: So CuspAI started about 20 months ago, because I was worried, and I'm still worried, about climate change. I realized that in order to stay within two degrees, let's say, we would not only have to reduce our emissions to zero by 2050, but then spend another half century or even a century removing carbon dioxide from the atmosphere, not by reducing emissions, but actually removing it at a rate that's about half the rate at which we now emit it. And that is an unsolved problem. But if we don't solve it, two degrees is not going to happen, right?
It's going to be much more. And I don't think people quite understand how bad that can be; four degrees is very bad. So this technology needs to be developed. And that was my and my co-founder Chad Edwards's motivation to start this startup. And also because we saw the technology was ready, which is also very good; the time was right to do it. In the meanwhile, we've grown to about 40 people. We've collected about 130 million in investment into the company, which for a European company is quite a lot. It's interesting that right after that, other startups got even more; that tells you how fast this is growing. We've built the platform, of course, but it's for a series of material classes, and it needs to be constantly expanded to new material classes. And it can be made more automated: as we put LLMs in, the whole thing gets more and more automated. And now we're moving to high-throughput experimentation, connecting the actual platform, which is computational, to the experiments, so that you can also get fast feedback from experiments. I don't want to think of experiments as something you do at the end, although that's what we've been doing so far. I want to think of it as what I would call a physics processing unit, a PPU, right? You have digital processing units and then you have physics processing units. It's basically nature doing computations for you. It's the fastest computer known, the fastest possible even. It's a bit hard to program, because you have to do all these experiments; those are quite bulky, it's a very large thing you have to do. But in a way it is a computation, and that's the way I want to see it. You can do computations in a data center, and then you can ask nature to do some computations.
Your interface with nature is a bit more complicated, but then these things will have to seamlessly work together to get to a new material that you're interested in. And that's the vision we have. We don't say superintelligence, because I don't quite know what it means and I don't want to oversell it. But I do want to automate this process and put a very powerful tool in the hands of the chemists and the materials scientists.

[01:17:49:10 - 01:18:01:02]
Brandon: That actually brings up a question I wanted to ask you. Can you talk about your platform, to whatever degree you can: explain how it works and what your thought process was in developing it?

[01:18:01:02 - 01:20:47:22]
Max: It's not rocket science, I would say, in the sense of the design: the design I wrote down at the very beginning is still more or less the design, although you add things. I wasn't thinking very much about multi-scale models, but it became clear that multi-scale is actually very important. In the beginning I wasn't thinking very much about self-driving labs, but now I think we're at the stage where we should be adding that. So there are bits and details that we're adding, but more or less it's what you see in the slide decks here as well: there is a generative component that you have to train to generate candidates, and then there is a multi-scale, multi-fidelity digital twin, where you walk up the rungs of a ladder. You do the cheap things first and weed out everything that's obviously not useful, then you go to more and more expensive things later. And so you narrow things down to a small number; those go into an experiment, you do the experiment, get feedback, et cetera. Now, what has also been added more recently are more agentic parts.
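The multi-fidelity funnel Max describes, cheap filters first and expensive models only on the survivors, can be sketched roughly as follows. This is an editor's toy illustration, not CuspAI's actual platform: all function names and scoring rules here are invented placeholders.

```python
import random

def generate_candidates(n):
    """Stand-in for a generative model proposing candidate materials."""
    random.seed(0)  # deterministic toy data
    return [{"id": i, "feature": random.random()} for i in range(n)]

def cheap_score(candidate):
    """Fast, low-fidelity surrogate (think: an ML force-field estimate)."""
    return candidate["feature"]

def expensive_score(candidate):
    """Slow, high-fidelity model (think: a DFT-level calculation)."""
    return candidate["feature"] ** 2

def screening_funnel(n_candidates=1000, keep_cheap=100, keep_expensive=10):
    # Rung 1: generate a large pool of candidates.
    pool = generate_candidates(n_candidates)
    # Rung 2: the cheap filter weeds out everything obviously not useful.
    pool = sorted(pool, key=cheap_score, reverse=True)[:keep_cheap]
    # Rung 3: the expensive model ranks only the survivors.
    pool = sorted(pool, key=expensive_score, reverse=True)[:keep_expensive]
    # The short list would then go to a physical experiment for feedback.
    return pool

shortlist = screening_funnel()
print(len(shortlist))  # 10
```

The design choice the funnel encodes is simply that fidelity is bought per candidate: each rung spends more compute on fewer candidates, so the expensive model's budget is never wasted on obvious failures.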
We have agents that search the chemical literature and come up with chemical suggestions for doing experiments. We have agents which autonomously orchestrate all of the computations and the experiments that need to be done. They're in various stages of maturity and can be continuously improved, I would say. So that part isn't rocket science; the design of that thing is not surprising. What is surprisingly hard is to actually build it. That's where the moat is: in the data that you can get your hands on, and in actually building the platform. And I would say there are two people in particular I want to call out: Felix Hunker, who is building the scientific part of the platform, and Sandra de Maria, who is building the MLOps part of the platform. And recently we also added Aron Walsh to our team, a very accomplished scientist from Imperial College; we're very happy about that. He's going to be chief science officer. And we also have a partnerships team that seeks out customers, because this is one thing I find very important: in practice, it's so complex to actually bring a material to the real world that you must do this in collaboration with the domain experts, which are typically the companies. So we only start to invest in a direction if we find a good industrial partner to go on that journey with us.

[01:20:47:22 - 01:20:55:12]
Brandon: Makes a lot of sense.
Over the evolution of the platform, what did you find about human intervention?

[01:20:56:18 - 01:21:17:01]
Brandon: I guess you could imagine two directions when you start. You could start out making everything purely automated, agentic, and so on, and then later find that you need more human input and feedback at different steps. Or maybe you start out with human feedback at lots of steps and then figure out ways to remove it.

[01:21:17:01 - 01:22:39:18]
Max: It's the second one. You build tools, so it's much more modular than you might think. It's like: we need these tools for this application, so you build all these tools, and then in the beginning you just go through a workflow manually. First run this tool, then that one, then another. You put them in a workflow, and then you figure out: oh, actually, this porous material that we're trying to make collapses if you shake it a bit. Okay, then you add a new tool that tests for stability. And so there are more and more tools. And then you build the agent, which could be a Bayesian optimizer, or it could be an actual LLM, maybe trained to be a good chemist, that will then start to use all these tools in the right way and in the right order. But in the beginning it's you, as a chemist, putting the workflow together. And then you think about: okay, how am I going to automate this? One very easy question you can ask yourself: every time somebody who is not a super expert in DFT (density functional theory) wants to do a calculation, they have to go to somebody who knows DFT.
So could you start to automate that away? Make it so user-friendly that you actually do the right DFT for the right problem, for the right length of time, and can assess whether it's a good outcome, et cetera. So you start to automate small pieces and bigger pieces, and in the end the whole thing is automated.

[01:22:39:18 - 01:22:53:25]
Brandon: So your philosophy is that you want to provide a set of specific tools so that the scientists making decisions are better informed, rather than trying to create a fully automated process.

[01:22:53:25 - 01:23:22:01]
Max: It's sort of the same as what you're saying, because yes, we want to automate, but we don't see something very soon where the chemist, the domain expert, is out of the loop. But it is a retreat, right? First you need an expert to tell you precisely how to set the parameters of the DFT calculation; okay, maybe we can automate that away. And so increasingly more of these things are going to be removed.

[01:23:22:01 - 01:23:22:19]
Speaker 5: Yeah.

[01:23:22:19 - 01:24:33:25]
Max: In the end, the vision is that it will be a search engine where a chemist will type things and get candidates, but the chemist will still decide what is a good material and what is not out of that list. The vision of a completely dark lab, where you close the door and just say "find something interesting," and it figures out what's interesting and comes back with "oh, I found this new material that does such and such" — that's not the vision I have. Not for a long time, anyway. For me, it's really about empowering the domain experts sitting in the companies and in universities to be much faster in developing their materials.
And I should say, it's also good to be a little humble at times, because it is very complicated to make a material and bring it into the real world. There are people who have been doing this their entire lives. I wonder if they scratch their heads and say: well, how are you going to completely automate that away in the next five years? I don't think that's going to happen at all.

[01:24:35:01 - 01:24:39:24]
Max: So to me, it's an increasingly powerful tool in the hands of the chemists.

[01:24:39:24 - 01:25:04:02]
RJ: I have a question. You've talked before about getting people interested based on having a big breakthrough in materials versus incremental change. I'm curious what you think about the platform you have now and what you're stepping towards. Are you chasing the big change, or is this incremental? They're not mutually exclusive, obviously, but what do you think about that?

[01:25:04:02 - 01:26:04:27]
Max: We follow a mixed strategy. We are definitely going after a big material. Again, we do this with a partner; I'm not going to disclose precisely what it is, but we have our own long-term goal. You could call it a lighthouse, or a moonshot, or whatever, but it is going to be a really impactful material that we want to develop as a proof point: that it can be done, that it will make it into the real world, and that AI was essential in actually making it happen. At the same time, we are also quite happy to work with companies that have more modest goals. One kind is a very deep partnership, where you go on a journey with a company and that's a long-term commitment together. The other is somebody who says: I need a force field; can you help me train this force field and then maybe analyze this particular problem for me? And I'll pay you a bunch of money for that, and then maybe after that we'll see.
And that's fine too. But we prefer the deep partnerships, where we can really change something for the good.

[01:26:04:27 - 01:26:22:02]
RJ: Yeah. And do you feel like, from a platform standpoint, you're ready for that? Again, not asking you to disclose proprietary secret sauce, but generally speaking, what needs to happen between where we are now and getting those big breakthroughs?

[01:26:22:02 - 01:28:40:01]
Max: What I find interesting about this field is that every time you build something, it's immediately useful. Unlike quantum computing or nuclear fusion, where you work for 20, 30, 40 years and nothing, nothing, nothing, and then it has to happen, and when it happens it's huge. It's quite different here, because every time you introduce something, you go to a customer and say: so what do you need? So we work, let's say, on a problem like water filtration: we want to remove PFAS from water. We do this with a company, Kemira; they are a deep partner for us, and we're on a journey together. I think the breakthrough will happen with a lot of human in the loop, because the chemists have a whole lot more knowledge of their field, and it's us who will help them with training and with new methods. And at that interface, in those interactions, something beautiful will happen. That will have to happen first before this field really takes off, I think, in the sense that it's not a bubble: people will see that what's happening is actually real. So in the beginning it will be with a lot of humans in the loop, I would say, and I would hope we'll have this new breakthrough material before everything is completely automated, because that will take a while. And it is also very vertical-specific.
So it's like completely automating something for problem A, you know, you can probably achieve it, but then you'll sort of have to start over again for problem B because, you know, your experimental setup looks very different, the machines that you use to characterize your materials look very different. Even the models in your platform will have to be retrained and fine tuned to the new class. So every time, you know, you have a lot of learnings to transfer, but also, you know, the problems are actually different. And so, yes, I would want that breakthrough material before it's completely automated, which I think is kind of a long term vision. And I would say every time you move to something new, you'll have to start retraining and humans will have to come in again and say, okay, so what does this problem look like? And now sort of, you know, point the machine again, you know, in the new direction and then use it again.[01:28:40:01 - 01:28:47:17]RJ: For the non-scientists among us, me included, though I'm a bit of a scientist: there's a lot of terminology. You mentioned DFT,[01:28:49:00 - 01:29:01:11]RJ: equivariance we've talked about. Can you explain, in engineering terms, at whatever level of sophistication: what is equivariance?[01:29:01:11 - 01:29:55:01]Max: So equivariance is the infusion of symmetry into neural networks. So if I build a neural network, let's say, that needs to recognize this bottle, right, and then I rotate the bottle, it will actually have to start all over again, because it has no idea that the input representing a rotated bottle is actually a rotated bottle. It just doesn't understand that. Right. If you build equivariance in, then basically once you've trained it in one orientation, it will understand it in any other orientation. So that means you need a lot less data to train these models. And these are constraints on the weights of the model.
So basically you have to constrain the weights so that it understands this. And you can build it in, you can hard code it in. And yeah, the symmetry groups can be, you know, translations, rotations, but also permutations. In a graph neural network it's permutations, and then physics, of course, has many more of these groups.[01:29:55:01 - 01:30:01:08]RJ: To play devil's advocate, why not just use data augmentation, where your bottle appears in all the different orientations?[01:30:01:08 - 01:30:58:23]Max: That's an option, it's just not exact. It's like, why would you go through the work of doing all that? You would really need an infinite number of augmentations to get it completely right, where you can also just hard code it in. Now, I have to say, sometimes data augmentation actually works even better than hard coding the equivariance in. And this has something to do with the fact that if you constrain the weights before the optimization starts, the optimization surface or objective becomes more complicated, and so it's harder to find good minima. So there is also a complicated interplay, I think, between the optimization process and these constraints you put in your network. And so, yeah, you'll hear kind of contradictory claims in this field. Some people say that for certain applications it works just better than not doing it. And sometimes you hear other people say that if you have a lot of data and you can do data augmentation, then it's actually easier to optimize and it actually works better than putting the equivariance in.[01:30:58:23 - 01:31:07:16]Brandon: Do you think there's kind of a bitter lesson for mathematically founded models and strategies for doing deep learning?[01:31:07:16 - 01:31:46:06]Max: Yeah, ultimately it's a trade-off between data and inductive bias. So if your inductive bias is not perfectly correct, you have to be careful, because you put a ceiling on what you can do.
But if you know the symmetry is there, it's hard to imagine there isn't a way to actually leverage it. But yeah, so there is a bitter lesson. And one of the bitter lessons is you should always make sure your architecture scales, unless you have a tiny data set, in which case it doesn't matter. But, you know, the same bitter lessons, or lessons that you can draw in LLM space, are eventually going to be true in this space as well, I think.[01:31:47:10 - 01:31:55:01]RJ: Can you talk a little bit about your upcoming book and tell the listeners what's exciting about it? Yeah, I should read it.[01:31:55:01 - 01:33:42:20]Max: So this book is called Generative AI and Stochastic Thermodynamics. It basically lays bare the fact that the mathematics that goes into generative AI, which is the technology to generate images and videos, and the mathematics of this field of non-equilibrium statistical mechanics, which studies systems of molecules that are just moving around and relaxing to the ground state, or that you can control to be in a certain state, is actually identical. And so that's fascinating. And in fact, what's interesting is that Geoff Hinton and Radford Neal already wrote down the variational free energy for machine learning a long time ago. And there's also Carl Friston's work on the free energy principle and active inference. But now we've related it to this very new field in physics, which is called stochastic thermodynamics or non-equilibrium thermodynamics, which has its own very interesting theorems, like fluctuation theorems, which we don't typically talk about but can learn a lot from. And I think it can now start to cross-fertilize.
When we see that these things are actually the same, we can, like we did for symmetries, look at this new theory that's out there, developed by these very smart physicists, and say, okay, what can we take from here that will make our algorithms better? At the same time, we can use our models to now help the scientists do better science. And so it becomes a beautiful cross-fertilization between these two fields. The book is rather technical, I would say. It takes all sorts of things that have been done in stochastic thermodynamics, and all sorts of models that have been developed in the machine learning literature, and it basically equates them to each other. And I think, hopefully, that sense of unification will be revealing to people.[01:33:42:20 - 01:33:44:05]RJ: Wait, and when is it out?[01:33:44:05 - 01:33:56:09]Max: Well, it depends on the publisher now. But I hope in April; I'm going to give a keynote at ICLR, and it would be very nice if I have this book in my hand. But you know, it's hard to control these kinds of timelines.[01:33:56:09 - 01:33:58:19]RJ: Yeah, I'm looking forward to it. Great.[01:33:58:19 - 01:33:59:25]Max: Thank you very much. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.latent.space/subscribe
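Max's description of equivariance, constraining the weights so the network commutes with a symmetry group, can be made concrete with a toy example. This is an illustrative sketch, not code from the episode: a set/graph-style layer that shares weights across nodes commutes with any permutation of the nodes, while a layer with separate per-node weights does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 3
W_self = rng.normal(size=(d, d))   # weights applied to each node individually
W_agg = rng.normal(size=(d, d))    # weights applied to a permutation-invariant aggregate

def equivariant_layer(X):
    """Shared weights + mean aggregation: y_i = W_self x_i + W_agg mean(X)."""
    agg = X.mean(axis=0) @ W_agg.T                 # invariant to node ordering
    return X @ W_self.T + np.tile(agg, (X.shape[0], 1))

W_per_node = rng.normal(size=(n, d, d))            # one weight matrix per position

def generic_layer(X):
    """Per-node weights, no sharing: breaks permutation equivariance."""
    return np.stack([W_per_node[i] @ X[i] for i in range(X.shape[0])])

X = rng.normal(size=(n, d))
perm = np.array([1, 2, 3, 4, 0])                   # a fixed non-identity reordering

# Equivariance check: permuting inputs then applying the layer
# equals applying the layer then permuting outputs, f(PX) == P f(X).
assert np.allclose(equivariant_layer(X[perm]), equivariant_layer(X)[perm])
assert not np.allclose(generic_layer(X[perm]), generic_layer(X)[perm])
```

The same check, with rotations in place of permutations, is what "training in one orientation, understanding any orientation" means for the bottle example: the constraint is on the weights, so it holds exactly rather than approximately as with data augmentation.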

Crazy Wisdom
Episode #533: The Universe Doing Its Thing: AI Evolution Is Already Here


Feb 20, 2026 · 73:51


In this episode of the Crazy Wisdom podcast, host Stewart Alsop sits down with Markus Buehler, the McAfee Professor of Engineering at MIT, to explore how seemingly different systems—from proteins and music to knowledge structures and AI reasoning—share underlying patterns through hierarchy, self-organization, and scale-free networks. The conversation ranges from the limits of current AI interpolation versus true discovery (using the fire-to-fusion example), to the emergence of agent swarms and their non-linear effects, to practical questions about ontologies, knowledge graphs, and whether humans will remain necessary in the creative discovery process. Markus discusses his lab's work automating scientific discovery through AI agents that can generate hypotheses, run simulations, and even retrain themselves, while Stewart shares his own experiences building applications with AI coding agents and grapples with questions about intellectual property, material science constraints, and the future of human creativity in an AI-abundant world.

Timestamps
00:00 - Introduction to Markus Buehler's work on knowledge graphs, structural grammar across proteins, music, and AI reasoning
05:00 - Discussion of AI discovery versus interpolation, using fire and fusion as examples of fundamental versus incremental innovation
10:00 - Language models as connective glue between agents, enabling communication despite imperfect outputs and canonical averaging
15:00 - Embodiment and agency in AI systems, creating adversarial agents that challenge theories and expand world models
20:00 - Emergent properties in materials and AI, comparing dislocations in metals to behaviors in agent swarms
25:00 - Human role-playing and phase separation in society, parallels to composite materials and heterogeneity
30:00 - Physical world challenges, atom-by-atom manufacturing at MIT.nano, limitations of lithography machines
35:00 - Synthetic biology as alternative to nanotechnology, programming microorganisms for materials discovery
40:00 - Intellectual property debates, commodification of AI models, control layers more valuable than model architecture
45:00 - Automation of ontologies, agent self-testing, daughter's coding success at age 11
50:00 - Graph theory for knowledge compression, neurosymbolic approaches combining symbolic and neural methods
55:00 - Nonlinear acceleration in AI, emergence from accumulated innovations, restaurant owner embracing AI
01:00:00 - Future generations possibly rejecting AI, democratization of knowledge, social media as real-time scientific discourse

Key Insights
1. Universal Patterns Across Disciplines: Seemingly different systems in nature—proteins, music, social networks, and knowledge itself—share fundamental structural patterns including hierarchy, self-organization, and scale-free networks. This commonality allows creative thinkers to draw insights across disciplines, applying principles from one domain to solve problems in another. As an engineer and materials scientist, Buehler has leveraged these isomorphisms to advance scientific understanding by mapping the "plumbing" of different systems onto each other, revealing hidden relationships that enable extrapolation beyond what's observable in any single domain.
2. The Discovery Versus Interpolation Problem: Current AI systems, particularly large language models, excel at interpolation—recombining existing knowledge in new ways—but struggle with genuine discovery that requires fundamental rewiring of world models. Using the example of fire versus fusion, Buehler explains that an AI trained on combustion chemistry would propose bigger fires or new fuels, but couldn't conceive of fusion because that requires stepping back to more fundamental physics. True discovery demands the ability to recognize when existing theories have boundaries and to develop entirely new frameworks, something current AI architectures aren't designed to achieve due to their training objective of predicting the most likely outcome.
3. The Role of Ontologies and Knowledge Graphs: While some AI researchers argue that ontologies are unnecessary because models form internal representations, Buehler advocates for explicit knowledge graphs as essential discovery tools. External ontologies provide sharp, analytical, symbolic representations that complement the fuzzy internal representations of neural networks. They enable verification of rare connections—like obscure papers that might hold key insights—which would be averaged away in standard AI training. This neurosymbolic approach combines the generalization capabilities of neural networks with the precision of formal knowledge structures, creating more powerful discovery systems.
4. Emergent Properties and Agent Swarms: Just as materials science shows that collections of atoms exhibit properties impossible to predict from individual components, AI agent swarms demonstrate emergent behaviors beyond single models. When agents are incentivized not just to answer questions but to challenge each other adversarially, propose theories, and test hypotheses, they can spawn new copies of themselves and evolve understanding beyond their initial programming. This emergence isn't surprising from a materials science perspective—dislocations, grain boundaries, and other collective phenomena only appear at scale, fundamentally determining material behavior in ways unpredictable from studying just a few atoms.
5. The Commoditization of Intelligence: The fundamental AI models themselves are becoming commodities, as evidenced by events like the Moldbug phenomenon where people built agents using various providers interchangeably. The real value is shifting from who has the smartest model to how models are orchestrated, integrated, and deployed. This parallels historical technology adoption patterns—just as we moved past debating who makes the best electricity to focusing on applications, AI is transitioning from a horse race over model capabilities to questions of infrastructure, energy, access speed, and agent coordination at the systems level.
6. Human-AI Collaboration and Creative Control: Rather than wholesale replacement, AI enables humans to operate in an intensely creative space as orchestrators sampling from vast possibility spaces. Similar to how Buehler's 11-year-old daughter now builds sophisticated applications that would have required professional developers years ago, AI democratizes access to capabilities while humans retain the creative judgment about direction and meaning. The human role becomes curating emergence, finding rare connections, playing at the edges of knowledge, and exercising the kind of curiosity-driven exploration that AI systems lack without embodied stakes in their own survival and continuation.
7. Technology as Evolutionary Inevitability: The development of AI represents not an unnatural threat but the next stage of human evolution—an extension of our innate drive to build models of ourselves and our world. From cave paintings to partial differential equations to artificial intelligence, humans continuously create increasingly sophisticated representations and tools. Attempting to stop this technological evolution is futile; instead, the focus should be on steering it ...
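The knowledge-graph idea in insight 3, an explicit symbolic structure alongside a neural model, can be sketched in a few lines. The graph below is a hypothetical toy, not Buehler's actual ontology: concepts from the episode linked by directed edges, with a breadth-first search surfacing the kind of rare multi-hop connection that would be averaged away inside a neural model.

```python
from collections import deque

# Hypothetical toy knowledge graph: directed edges between concepts
# mentioned in the episode (illustrative only).
edges = {
    "spider silk": ["protein structure", "hierarchy"],
    "protein structure": ["self-organization"],
    "music": ["hierarchy"],
    "hierarchy": ["scale-free networks"],
    "self-organization": ["scale-free networks"],
    "scale-free networks": ["AI reasoning"],
}

def find_path(start, goal):
    """Breadth-first search: shortest chain of concepts linking start to goal."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in edges.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(find_path("spider silk", "AI reasoning"))
# ['spider silk', 'hierarchy', 'scale-free networks', 'AI reasoning']
```

The symbolic path is exact and inspectable, which is the point of the neurosymbolic pairing: the neural side proposes fuzzy candidate links, the graph side verifies them.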

Scrum Master Toolbox Podcast
BONUS The Future of Seeing—Why AI Vision Will Transform Medicine and Human Perception With Daniel Sodickson


Feb 19, 2026 · 37:18


BONUS: The Future of Seeing—Why AI Vision Will Transform Medicine and Human Perception

What if the next leap in AI isn't about thinking, but about seeing? In this episode, Daniel Sodickson—physicist, medical imaging pioneer, and author of "The Future of Seeing"—argues we're on the edge of a vision revolution that will change medicine, technology, and even human perception itself.

From Napkin Sketch to Parallel Imaging
"I was doodling literally on a napkin in a piano bar in Boston and came up with a way to get multiple lines at once. I ran to my mentor and said, 'Hey, I have this idea, never mind my paper.' And he said, 'Who are you again? Sure, why not.' And it worked."
Daniel's journey into imaging began with a happy accident. While studying why MRI couldn't capture the beating heart fast enough, he realized the fundamental bottleneck: MRI machines scan one line at a time, like old CRT screens. His insight—imaging in parallel to capture multiple lines simultaneously—revolutionized the field. This connection between natural vision (our eyes capture entire scenes at once) and artificial imaging systems set him on a 29-year journey exploring how we can see what was once invisible.

Upstream AI: Changing What We Measure
"Most often when we envision AI, we think of it as this downstream process. We generate our data, make our image, then let AI loose instead of our brains. To me, that's limited. Why aren't we thinking of tasks that AI can do that no human could ever do?"
Daniel introduces a crucial distinction between "downstream" and "upstream" AI. Downstream AI takes existing images and interprets them—essentially competing with human experts. Upstream AI changes the game entirely by redesigning what data we gather in the first place. If we know a machine learning system will process the output, we can build cheaper, more accessible sensors. Imagine monitoring devices built into beds or chairs that don't produce perfect images but can detect whether you've changed since your last comprehensive scan. AI fills in the gaps using learned context about how bodies and signals behave.

The Power of Context and Memory
"The world we see is a lie. Two eyes are not nearly enough to figure out exactly where everything is in space. What the brain is doing is using everything it's learned about the world—how light falls on surfaces, how big people are compared to objects—and filling in what's missing."
Our brains don't passively receive images; they actively construct reality using massive amounts of learned context. Daniel argues we can give imaging machines the same superpower. By training AI on temporal patterns—how healthy bodies change over time, what signals precede disease—we create systems with "memory" that can make sophisticated judgments from incomplete data. Today's signal, combined with your history and learned patterns from millions of others, becomes far more informative than any single pristine image could be.

From Reactive to Proactive Health
"I've started to wonder why we use these amazing MRI machines only once we already know you're sick. Why do we use them reactively rather than proactively?"
This question drove Daniel to leave academia after 29 years and join Function Health, a company focused on proactive imaging and testing to catch disease before it develops. The vision: a GPS for your health. By combining regular blood panels, MRI scans, and wearable data, AI can monitor whether you look like yourself or have changed in worrisome ways. The goal isn't replacing expert diagnosis but creating an early warning system that surfaces problems while they're still easily treatable.

Seeing How We See
"Sometimes when I'm walking along, everything I'm seeing just fades away. And what I see instead is how I'm seeing. I imagine light bouncing off of things and landing in my eye, this buzz of light zipping around as fast as anything in the universe can go."
After decades studying vision, Daniel experiences the world differently. He finds himself deconstructing his own perception—tracing sight lines, marveling at how we've evolved to turn the chaos of sensation into spatially organized information. This meta-awareness extends to his work: every new imaging modality has driven scientific discovery, from telescopes enabling the Copernican Revolution to MRI revealing the living body. We're now at another inflection point where AI doesn't just interpret images but transforms our relationship with perception itself.

In this episode, we refer to An Immense World: How Animal Senses Reveal the Hidden Realms Around Us by Ed Yong on animal perception, and A Path Towards Autonomous Machine Intelligence by Yann LeCun on building AI more like the brain.

About Daniel Sodickson
Daniel K. Sodickson is a physicist in medicine and chief medical scientist at Function Health. Previously at NYU, and a gold medalist and past president of the International Society for Magnetic Resonance in Medicine, he pioneers AI-driven imaging and is author of The Future of Seeing.

Zebras & Unicorns
Jetzt kommen die World Models - und lassen ChatGPT und Co alt aussehen


Feb 19, 2026 · 22:27


Today we're used to having AI bots churn out texts, documents, and code for us. But while everyone is still talking about ChatGPT, OpenClaw, and co., the next AI revolution is already brewing. So-called World Models are emerging, and they work fundamentally differently from the Transformer models we use today. What's coming our way? Jakob Steinschaden, co-founder of Trending Topics and newsrooms, and Matteo Rosoli, CEO of newsrooms, talk in today's podcast about:

Tech&Co
Yann LeCun, CEO d'AMI Labs – 10/02


Feb 11, 2026 · 3:31


[Guest], [role], was François Sorel's guest on Tech & Co, the daily show, this Thursday, September 24. He/She [returned to / discussed / looked into] [TOPIC] on BFM Business. Catch the show Monday through Thursday and listen again as a podcast.

OVNI's
OVNIs Hors-Série #7 - Zero to One - L'Europe peut-elle gagner la bataille de l'IA ?


Feb 10, 2026 · 26:14


Hosted by Julien Marbouty, this conference offers a dense, personal conversation with Roxanne Varza about the attractiveness of France and Europe for tech entrepreneurs. Through her personal journey, from Silicon Valley to Paris, and her role at the head of Station F, Roxanne Varza reflects on what makes the French ecosystem strong today: the quality of its talent, team loyalty, education, quality of life, and an international openness reinforced by attractive visa policies. The conversation draws on concrete examples of founders and investors who have made the deliberate choice to base their projects in France. The discussion then broadens to the major current issues in innovation, with a strong focus on artificial intelligence, the evolution of entrepreneurial models, and the key role of support structures. Roxanne Varza shares her vision of a rapidly changing ecosystem in which execution, speed, and collective ambition become decisive, while arguing for a more integrated Europe that is easier for entrepreneurs to navigate. A lucid and optimistic conference that invites us to move past defeatist narratives and build a strong European technology dynamic, anchored in its values and resolutely turned toward the future.
[00:00:00] Introduction – Presentation of the episode and the Zero to One Nantes setting
[00:00:41] Welcoming the guest – Julien Marbouty introduces Roxanne Varza
[00:01:56] Personal journey – A childhood between the United States and Iran, and first ties to France
[00:03:32] Identity and naturalization – Why France became a choice of the heart
[00:04:23] Why set up a startup in France? – The Yann LeCun / AI Labs case
[00:05:07] Talent, costs, and loyalty – The real competitive advantages of the French ecosystem
[00:06:12] Education and the grandes écoles – The key role of academic excellence
[00:07:01] Funding and international attractiveness – What has changed in recent years
[00:07:54] Station F in numbers – Nationalities, visas, and diversity of profiles
[00:09:17] European ambition – What is still missing to create more "Yann LeCuns"
[00:10:56] EU Inc. and simplifying business in Europe
[00:12:24] Changing perceptions – How to convince international investors and decision-makers
[00:14:07] Artificial intelligence – Fundamental research vs. applications
[00:14:45] The AI landscape at Station F – Distribution of startups and current trends
[00:16:11] Examples of remarkable AI startups supported by Station F
[00:17:41] Succeeding today – Momentum, speed of execution, and repeat founders
[00:18:45] Adapting support structures to AI
[00:20:03] Corporate partnerships – What makes a program truly useful to founders
[00:22:41] AI tools used internally at Station F
[00:23:50] Closing message – Optimism, collective spirit, and the future of the European ecosystem
[00:25:19] Thanks and closing
Reverse due diligence, transparency, and the maturity of the ecosystem.

SeamlessMD Podcast
218: Nabla's AI Executive Summit Debrief: Yann LeCun's World Models, Where Health Systems are Going with AI after Ambient, and Things CMIOs will Never Say on Stage


Feb 5, 2026 · 47:18


In this special duo episode, Alan Sardana sits down with SeamlessMD Co-founder and CEO Joshua Liu, MD, to unpack the biggest ideas from the Nabla Accelerate AI Executive Summit. Josh shares what he heard behind closed doors from CMIOs, CIOs, and digital health leaders—from why AI scribes unlocked AI adoption, to what comes next in healthcare automation, trust, and ROI. They explore where today's AI may fall short, drawing inspiration from Yann LeCun's keynote ideas around World Models, how transparency matters more than “AI-first” branding, and why humility and execution—not hype—ultimately determine which AI solutions succeed in healthcare.

Monde Numérique - Jérôme Colombain
☕️ GRAND DEBRIEF (jan. 26) – CES, voiture autonome et indépendance numérique


Feb 1, 2026 · 61:37


Robots, artificial intelligence, dependence on the American giants, new Internet laws… January concentrated all of tech's fault lines. In this Grand Débrief, we take the time to analyze what these signals really say about the future of tech. The Grand Débrief is brought to you in partnership with Free Pro. With François Sorel (Tech&Co) and Bruno Guglielminetti (Mon Carnet).

CES 2026: a less spectacular but more revealing show
Has the Consumer Electronics Show in Las Vegas lost its magic? Fewer consumer announcements, fewer "wow" gadgets, but a show that nonetheless confirms several deep trends: automation, robotics, omnipresent artificial intelligence, and the rise of Asian players. In short, a more sober CES 2026 that reflects the real state of the global tech industry better than ever.

- Autonomous cars: the reality behind the fantasy
Autonomous vehicles are advancing fast… but not always where you'd imagine. Waymo, Zoox, and Uber are multiplying Level 4 trials, with cars able to drive without a driver in well-defined areas. Level 5, a car that is autonomous everywhere and in all circumstances, still does not exist. Contrary to Elon Musk's rhetoric, Tesla's FSD remains officially classified as Level 2, far from the criteria for full autonomy.

- Humanoid robots and "physical AI": the real turning point
CES 2026 marked an important step: the shift from software AI to embodied AI. Humanoid robots, intelligent household machines, automation of the real world… robotics is entering a new cycle. While the electromechanics and balance are now mastered, the real bottleneck remains intelligence itself. Are today's AI models capable of understanding the physical world, or will a paradigm shift be needed, as Yann LeCun in particular argues?

- China, a major technological power
Highly visible in Las Vegas this year, China is no longer imitating but executing quickly and at industrial scale. Robot vacuums, humanoid robots, video projectors, consumer electronics: Chinese innovations stand out for their quality and development speed. A major strategic shift that is redrawing global competition and raises questions about Europe's place.

Dependence on American tech: a European awakening?
While tech bosses paraded at the World Economic Forum in Davos, the European Parliament adopted a resolution warning about Europe's digital dependence. Cloud, software, operating systems, AI: what would happen in the event of a major political standoff with the United States? Should we fear a "kill switch" (a total cutoff) or a degradation of services? The question is no longer theoretical, particularly after Donald Trump's trade threats and the debates around the Cloud Act. So, can we really do without American tech, and if so, at what price?

Sovereign cloud: real solution or legal illusion?
AWS, Google, and Microsoft are multiplying announcements of European sovereign clouds, such as the AWS European Sovereign Cloud project. But is a local legal entity enough to guarantee real independence?

Social networks banned for minors: the end of recess?
The last big topic of this Débrief: the French law aiming to ban social networks for those under 15. After the law on protection against pornographic content, the GDPR, the DSA, and the Chat Control proposal, digital regulation is intensifying. Are we witnessing the end of the free Internet as we knew it, or a necessary attempt at protection against addiction, screen time, and the cognitive effects on the youngest?
-----------
♥️ Support: https://mondenumerique.info/don

Business Pants
CEOs on ICE, the SEC kills small investors, the manchild economy, and AI navel gazers


Jan 30, 2026 · 62:36


Story of the Week (DR):
Trump's ICE tactics force CEOs to choose between staying silent and risking White House backlash MM
CEOs of Target and Minnesota's Biggest Companies Call for 'De-Escalation' After Shooting
Minnesota workers pressure employers to take action against ICE operations
CEOs, long silent on Trump's immigration crackdown, seem to hit their breaking point over killing of Alex Pretti in Minnesota
Target's incoming CEO tells staff violence in Minneapolis is 'incredibly painful' – without naming Trump or ICE
Jan 28: Target Unveils Largest Spring Beauty Assortment Ever — Making Trend-Driven, Expert-Backed Beauty More Accessible
Tech's top CEOs mum after ICE killings, while leaders like Reid Hoffman, Yann LeCun speak out
'ICE is going too far': Sam Altman, Jamie Dimon, and more CEOs on the unrest in Minnesota
Reid Hoffman says business leaders are wrong to stay silent about the Trump administration
Apple's Cook says he's 'heartbroken' by Minneapolis events and has spoken with Trump
Companies reap $22bn from Trump's immigration crackdown
Meta blocks links to ICE List across Facebook, Instagram, and Threads
As Big Tech CEOs speak up about violence in Minneapolis, 1 in 3 corporate leaders think ICE tensions are 'not relevant to their business'
How ICE Already Knows Who Minneapolis Protesters Are
Agents use facial recognition, social media monitoring and other tech tools not only to identify undocumented immigrants but also to track protesters, current and former officials said.
Freefloatanalytics data blast:
Palantir Technologies: Continues to be a primary partner. In 2025, they were awarded a $30 million contract to build "ImmigrationOS," a platform designed to provide "near real-time visibility" on individuals for the purpose of streamlining apprehensions and tracking self-deportations. Gender Influence Gap -26%
RELX: LexisNexis Risk Solutions: Provides ICE with investigative databases used to track, vet, and target individuals.
Their current contract is valued at over $22 million. Gender Influence Gap -24%
Thomson Reuters: Supplies ICE with access to massive databases, including over 20 billion license plate scans. This data allows agents to track vehicle movement history and identify where individuals may be living or working. Gender Influence Gap -28%
Clearview AI: Recently signed a $3.75 million contract (September 2025) to provide facial recognition technology. While officially limited to certain types of investigations, procurement records suggest its use is expanding. Gender Influence Infinity% (no women on advisory board; Hal Lambert and Richard Schwartz as co-CEOs)
King "Bumps"
JPMorgan's Dimon sees 10.3% pay bump to $43M
Disney CEO Bob Iger's Pay Increased 11.5% to $45.8 Million in 2025
Goldman Sachs hikes CEO David Solomon's pay 21% to record $47 million
Wells Fargo CEO Charlie Scharf Gets 28% Pay Boost to $40 Million
Why Starbucks is letting Brian Niccol use the company plane for more personal travel
"Following a security review of risks, the Starbucks board of directors made the decision to enhance security measures for Brian," a company spokesperson said. "This included a decision by the board to require Brian to use private aircraft for all travel."
$96M in 2024; $31M in 2024, including temporary housing expenses in the amount of $371,536; and security expenses in the amount of $1,142,700; and $997,392 in expenses related to his use of Starbucks aircraft for commuting and personal use
median employee: $17,279.
CEO Pay ratio 1,794 to 1 (January 1st: 10:10am)Temporary housing expense ratio: 22:1The docu-bribe: At ‘Melania' Premiere, the President Sees ‘Glamour' and Others See GraftAmazon paid Melania Trump's production company $40 million for the movie and then paid another $35 million to promote it.Guests included:Jordan Belfort: The real-life "Wolf of Wall Street."Director Brett Ratner, accused of rape, sexual assault, sexual harrassment, and homophobic abuse by at least 9 women:Melania Trump documentary marks a post-#MeToo comeback for its directorBrett Ratner was all but exiled from Hollywood after facing sexual misconduct allegations. Trump's win gave him an opening to return.Tim Cook (Apple)Andy Jassy (Amazon)Lisa Su (AMD)Eric Yuan (Zoom)Lynn Martin (President of the NYSE)Larry Culp (GE)Sam Altman (OpenAISatya Nadella (Microsoft)Sundar Pichai (Google)Safra Catz (Oracle):David Brown (Victory Capital)David Ellison (Skydance/Paramount)Marc Benioff (Salesforce)Goodliest of the Week (MM/DR):DR: Diversity on Fortune 50 boards: white men haven't been a majority for 3 years in a rowWhereas about a decade ago, white men held two-thirds of the seats on the top 50 Fortune boards, in 2023, for the first time, they held fewer than 50%. In 2024, that number dropped to 48.4%, but this year it climbed back to 49.7%.Since white men make up about 31% of the U.S. population, they still have been very much overrepresented in all three years.DR: National Shutdown: General strike on January 30 aims to push ICE out of Minnesota. 
Stores closed, protests scheduled in all 50 statesMM: Delivery Robot Gets Stuck on Train Tracks, Gets Obliterated by LocomotiveMM: Judge greenlights Massachusetts offshore wind project halted by Trump administrationVineyard Wind, which joins Revolution Wind, Empire Wind, and Coastal Virginia Offshore Wind in restarted because lawsAssholiest of the Week (MM):WHICH ASSHOLE DO YOU BLAME: Trump's ICE tactics force CEOs to choose between staying silent and risking White House backlashTrump/ICEHis personal military got orders to be “ethical”, but to fuck up everyone - and recruited specifically targeting Call of Duty players and lonely, angry men who wish they could call their friends “retarded” again but it isn't politically correctPalantir and the ICE industrial complexAlex Karp went out of his way to insist to his disgusted employees that AI and Palantir “bolsters civil liberties”Meanwhile, Palantir employees signed a letter from tech employees pondering whether or not they are actively destroying our country and abetting oligarchsBut Palantir, while making some of the creepiest, most heinous software known to man (I mean, worse than CHINA! And we all HATE CHINA, RIGHT???), has $100m in contracts with ICEIn fact, there's a whole private infrastructure complex that's largely not politically agnostic that's made $22bn from ICE and immigration crackdowns - and it's only been a year! That's some awesome shareholder value illegally sending weeping mothers to countries they don't live in with no due process!CEOs (Target, looking at you) DRThey managed to find a pen and craft a strongly worded letter that asked, pretty please, for “de-escalation”, calling ICE out not by NAME of course, but as a “recent challenge” that created “widespread disruption” - and named the White House only as someone they are “communicating” with. 
Signed by 60 Minnesota CEOs, co-signed in spirit by the Business Roundtable (though not like, officially), they managed to write a whole 199 words about the execution of a VA nurse whose crime was filming the Gestapo in actionTarget's incoming CEO (obviously not the CURRENT CEO Brian Cornell, he's busy polishing his mahogany chair for board meetings where he will be Executive Chair, making as much as a CEO with none of the responsibilities) also addressed the unlawful and unwarranted arrests of Target employees in Minneapolis by thugs - oh, wait, no he didn't - he said, “The violence and loss of life in our community is incredibly painful.” - IT WAS YOUR EMPLOYEES IN THE CROSSHAIRS, SCHMUCK. Target employees are currently skipping work in Minnesota, but solid leadership.Boards of directorsOur analysis of the boards of the Minnesota 60 showed that nearly half of them sit on each other's boards. Basically, you have a massive groupcoward problem - about 25 of the CEOs sit on some other CEOs board or overlap in some way, and the lawyers that carefully crafted the letter absolutely had to have it run through every other board and company lawyer, a task made easier when half of you are on the board with each other. 
No need for authenticity when you have collective ass covering.Jeffrey EpsteinIf not for those files, there wouldn't NEED TO BE MURDERS so you look somewhere else!InvestorsIf not for “shareholder value”, we could pay attention to humanity and authentic real world values!WHICH ASSHOLE DO YOU BLAME: As You Sow leads criticism of SEC's updated restrictions on smaller shareholdersSmaller investors!For three decades, small investors have used precatory proposals either as a means to extract more data, a means to improve governance, or a means of advertising - many of the non profits use it as a fundraising tool as much as a means of changeMeanwhile, those proposals have almost entirely failed at the vote - though they HAVE succeeded in increasing our data over time (the long arc of disclosure)Then the zone gets flooded by the anti-woke shareholders looking to de-trans companies, and now we have a massive influx of performative proposalsNow that the insiders are in charge (vs. career bureaucrats), in a six month period, virtually all rights have been revoked with threats of paperwork for non complianceAs a final cherry, they are now trying to keep EXEMPT SOLICITATIONS off the filing docket unless you have $5m in stock, so you can't even file your intent to vote directionally unless you're super richJohn CheveddenThe gadflyfather - if not for being the winningest shareholder in history with a nearly obsessive focus on improving shareholder rights, the most boring of topics, the SEC would probably have ignored the whole thingBut the data shows the SEC is taking the time to blanket ignore everyone BUT Chevedden, responding to affirmatively say no to his proposalsJC, no one likes a repeat champion dynastyThe SECBrain Daly at the SEC is out there suggesting maybe NO ONE should vote proxies while SEC Chair Atkins tried to gaslight the entire investment community by claiming the “government shutdown” made it too hard for the poor ole SEC to do its job, so they just gave 
companies immunity from proposals in lieu of doing their jobsMeanwhile, Atkins has overseen a steep drop in enforcement of accounting irregularities and reporting while simultaneously green lighting crypto scams and Exxon's new “retail vote” capture plan (which gives management anywhere from 5-20% of the company vote depending on the company by auto voting retail that opts in)All with Trump family in the backdrop raking in 1.4bn in the first year of the presidency from crypto token bullshit, asset seizures and sales, and pure graft - none of which will obviously be investigated despite Trump's son actively on a public board of directorsBigger investors!THEY NEVER REALLY CARED ABOUT VOTING ANYWAY! 96% average support for directors, 0.2% of directors globally voted out annually, and of those that are voted out (~20 a year), MORE THAN HALF STAY ON THE BOARD either by bylaw (cumulative voting) or as zombies (Jay Hoag!)And still, NO ONE CARES!WHICH ASSHOLE DO YOU BLAME: Marc Andreessen says the real crisis isn't AI job losses — it's what would have happened without AIThe powerless AI makersSam Altman: Sam Altman Says AI Will Cause Massive Deflation, Making Money Worth Vastly More - that's pretty good if you're already a billionaire, yeah?Dario Amodei: Anthropic CEO Warns That the AI Tech He's Creating Could Ravage Human Civilization - uh, don't create itThe CEO of Microsoft Suddenly Sounds Extremely Nervous About AIAI anxiety is so widespread that veteran Microsoft researchers are having panic attacks because they're making themselves obsoleteThe VC Navel Gazing Manchild EconomyAndreessen's genius was investing in manchildren: Facebook, Roblox, AirBnBVCs actually are giving LESS MONEY to women than the INCREDIBLY LOW AMOUNT they already gave during the AI raceYOU - you should have been a plumber or a peasant or a construction workerHeadliniest of the WeekDR: Cracker Barrel Wants Its Staff to Eat One Thing on Work Trips: Cracker BarrelMM: The company Americans say is the 
best place to work in 2026 isn't who you thinkCrew Carwash - washing cars is better than tech bro manbaby festsMM: The Worst People Alive Are Obsessed With Meta's Video Recording GlassesWho Won the Week?DR: Resistance in Minnesota and Maine (I'm attempting to be optimistic here, give me a break)MM: 33% of corporate leaders: As Big Tech CEOs speak up about violence in Minneapolis, 1 in 3 corporate leaders think ICE tensions are ‘not relevant to their business'PredictionsDR: January 1st will officially be recognized by the Business Roundtable as "Equality Day"—celebrating the grueling minutes it takes a CEO to earn more than their average worker for the year. Engraved badges with the exact time (10:10 for SBUX) will be created to honor the achievement.Ok, maybe that's silly, my real one is that Target announces its "De-Escalation" Collection: a "Minneapolis-Inspired" line of high-fashion neutral-tone hoodies, specifically marketed as "non-threatening" to ICE agents and heartbroken CEOsMM: Alex Karp, social justice warrior out for the little guy, mass fires his staff at Palantir and replaces it with an AI robot named “The Job Displacer”, does a road show claiming he's “freed” his employees using AI and now they can really have authentic jobs like “bagger at grocery store” and “guy who mixes paint”

Machine Learning Street Talk
VAEs Are Energy-Based Models? [Dr. Jeff Beck]

Machine Learning Street Talk

Play Episode Listen Later Jan 25, 2026 46:56


What makes something truly *intelligent?* Is a rock an agent? Could a perfect simulation of your brain actually *be* you? In this fascinating conversation, Dr. Jeff Beck takes us on a journey through the philosophical and technical foundations of agency, intelligence, and the future of AI.

Jeff doesn't hold back on the big questions. He argues that from a purely mathematical perspective, there's no structural difference between an agent and a rock – both execute policies that map inputs to outputs. The real distinction lies in *sophistication* – how complex are the internal computations? Does the system engage in planning and counterfactual reasoning, or is it just a lookup table that happens to give the right answers?

*Key topics explored in this conversation:*

*The Black Box Problem of Agency* – How can we tell if something is truly planning versus just executing a pre-computed response? Jeff explains why this question is nearly impossible to answer from the outside, and why the best we can do is ask which model gives us the simplest explanation.

*Energy-Based Models Explained* – A masterclass on how EBMs differ from standard neural networks. The key insight: traditional networks only optimize weights, while energy-based models optimize *both* weights and internal states – a subtle but profound distinction that connects to Bayesian inference.

*Why Your Brain Might Have Evolved from Your Nose* – One of the most surprising moments in the conversation. Jeff proposes that the complex, non-smooth nature of olfactory space may have driven the evolution of our associative cortex and planning abilities.

*The JEPA Revolution* – A deep dive into Yann LeCun's Joint Embedding Prediction Architecture and why learning in latent space (rather than predicting every pixel) might be the key to more robust AI representations.

*AI Safety Without Skynet Fears* – Jeff takes a refreshingly grounded stance on AI risk. He's less worried about rogue superintelligences and more concerned about humans becoming "reward function selectors" – couch potatoes who just approve or reject AI outputs. His proposed solution? Use inverse reinforcement learning to derive AI goals from observed human behavior, then make *small* perturbations rather than naive commands like "end world hunger."

Whether you're interested in the philosophy of mind, the technical details of modern machine learning, or just want to understand what makes intelligence *tick,* this conversation delivers insights you won't find anywhere else.

---

TIMESTAMPS:
00:00:00 Geometric Deep Learning & Physical Symmetries
00:00:56 Defining Agency: From Rocks to Planning
00:05:25 The Black Box Problem & Counterfactuals
00:08:45 Simulated Agency vs. Physical Reality
00:12:55 Energy-Based Models & Test-Time Training
00:17:30 Bayesian Inference & Free Energy
00:20:07 JEPA, Latent Space, & Non-Contrastive Learning
00:27:07 Evolution of Intelligence & Modular Brains
00:34:00 Scientific Discovery & Automated Experimentation
00:38:04 AI Safety, Enfeeblement & The Future of Work

---

REFERENCES:

Concept:
[00:00:58] Free Energy Principle (FEP): https://en.wikipedia.org/wiki/Free_energy_principle
[00:06:00] Monte Carlo Tree Search: https://en.wikipedia.org/wiki/Monte_Carlo_tree_search

Book:
[00:09:00] The Intentional Stance: https://mitpress.mit.edu/9780262540537/the-intentional-stance/

Paper:
[00:13:00] A Tutorial on Energy-Based Learning (LeCun 2006): http://yann.lecun.com/exdb/publis/pdf/lecun-06.pdf
[00:15:00] Auto-Encoding Variational Bayes (VAE): https://arxiv.org/abs/1312.6114
[00:20:15] JEPA (Joint Embedding Prediction Architecture): https://openreview.net/forum?id=BZ5a1r-kVsf
[00:22:30] The Wake-Sleep Algorithm: https://www.cs.toronto.edu/~hinton/absps/ws.pdf

---

RESCRIPT: https://app.rescript.info/public/share/DJlSbJ_Qx080q315tWaqMWn3PixCQsOcM4Kf1IW9_Eo
PDF: https://app.rescript.info/api/public/sessions/0efec296b9b6e905/pdf
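The EBM distinction above (optimize internal states as well as weights) fits in a few lines of Python. This toy quadratic energy, its shapes, and its learning rate are our own illustrative assumptions, not code from the episode; the point is that inference itself becomes gradient descent on the energy over a latent state z.

```python
import numpy as np

# Toy energy-based model: E(x, z; W) = ||x - W z||^2 + ||z||^2.
# A feedforward net would only ever update W; an EBM *also* optimizes
# the internal state z for each input at inference time.

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))   # weights (the learned parameters)
x = rng.normal(size=4)        # one observation

def energy(x, z, W):
    return np.sum((x - W @ z) ** 2) + np.sum(z ** 2)

def infer_z(x, W, steps=500, lr=0.02):
    """Inference as optimization: descend the energy over the latent z."""
    z = np.zeros(2)
    for _ in range(steps):
        grad = -2 * W.T @ (x - W @ z) + 2 * z  # dE/dz
        z -= lr * grad
    return z

z_star = infer_z(x, W)
# For this quadratic energy the minimizer has a closed form, so we can
# check that "inference by descent" found it:
z_exact = np.linalg.solve(W.T @ W + np.eye(2), W.T @ x)
print(z_star, np.allclose(z_star, z_exact, atol=1e-4))
```

Scaling z to many dimensions and swapping the quadratic for a neural energy gives the "test-time training" flavor the episode discusses; learning then alternates this inner optimization over states with outer updates to the weights.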

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Captaining IMO Gold, Deep Think, On-Policy RL, Feeling the AGI in Singapore — Yi Tay 2

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Jan 23, 2026 92:04


From shipping Gemini Deep Think and IMO Gold to launching the Reasoning and AGI team in Singapore, Yi Tay has spent the last 18 months living through the full arc of Google DeepMind's pivot from architecture research to RL-driven reasoning: watching his team grow from a dozen researchers to 300+, training models that solve International Math Olympiad problems in a live competition, building the infrastructure to scale deep thinking across every domain, and driving Gemini to the top of the leaderboards across every category. Yi returns to dig into the inside story of the IMO effort and more!

We discuss:

* Yi's path: Brain → Reka → Google DeepMind → Reasoning and AGI team Singapore, leading model training for Gemini Deep Think and IMO Gold
* The IMO Gold story: four co-captains (Yi in Singapore, Jonathan in London, Jordan in Mountain View, and Tong leading the overall effort), training the checkpoint in ~1 week, live competition in Australia with professors punching in problems as they came out, and the tension of not knowing if they'd hit Gold until the human scores came in (because the Gold threshold is a percentile, not a fixed number)
* Why they threw away AlphaProof: "If one model can't do it, can we get to AGI?" The decision to abandon symbolic systems and bet on end-to-end Gemini with RL was bold and non-consensus
* On-policy vs. off-policy RL: off-policy is imitation learning (copying someone else's trajectory); on-policy is the model generating its own outputs, getting rewarded, and training on its own experience. "Humans learn by making mistakes, not by copying"
* Why self-consistency and parallel thinking are fundamental: sampling multiple times, majority voting, LM judges, and internal verification are all forms of self-consistency that unlock reasoning beyond single-shot inference
* The data efficiency frontier: humans learn from 8 orders of magnitude less data than models, so where's the bug? Is it the architecture, the learning algorithm, backprop, off-policyness, or something else?
* Three schools of thought on world models: (1) Genie/spatial intelligence (video-based world models); (2) Yann LeCun's JEPA plus FAIR's code world models (modeling internal execution state); (3) the amorphous "resolution of possible worlds" paradigm (curve-fitting to find the world model that best explains the data)
* Why AI coding crossed the threshold: Yi now runs a job, gets a bug, pastes it into Gemini, and relaunches without even reading the fix. "The model is better than me at this"
* The Pokémon benchmark: can models complete the Pokédex by searching the web, synthesizing guides, and applying knowledge in a visual game state? "Efficient search of novel idea space is interesting, but we're not even at the point where models can consistently apply knowledge they look up"
* DSI and generative retrieval: re-imagining search as predicting document identifiers with semantic tokens, now deployed at YouTube (semantic IDs for RecSys) and Spotify
* Why RecSys and IR feel like a different universe: "modeling dynamics are strange, like gravity is different—you hit the shuttlecock and hear glass shatter, cause and effect are too far apart"
* The closed-lab advantage is increasing: the gap between frontier labs and open source is growing because ideas compound over time, and researchers keep finding new tricks that play well with everything built before
* Why ideas still matter: "the last five years weren't just blind scaling—transformers, pre-training, RL, self-consistency, all had to play well together to get us here"
* Gemini Singapore: hiring RL and reasoning researchers, looking for a track record in RL or exceptional achievement in coding competitions, and building a small, talent-dense team close to the frontier

Yi Tay:
* Google DeepMind: https://deepmind.google
* X: https://x.com/YiTayML

Chapters

00:00:00 Introduction: Returning to Google DeepMind and the Singapore AGI Team
00:04:52 The Philosophy of On-Policy RL: Learning from Your Own Mistakes
00:12:00 IMO Gold Medal: The Journey from AlphaProof to End-to-End Gemini
00:21:33 Training IMO Cat: Four Captains Across Three Time Zones
00:26:19 Pokemon and Long-Horizon Reasoning: Beyond Academic Benchmarks
00:36:29 AI Coding Assistants: From Lazy to Actually Useful
00:32:59 Reasoning, Chain of Thought, and Latent Thinking
00:44:46 Is Attention All You Need? Architecture, Learning, and the Local Minima
00:55:04 Data Efficiency and World Models: The Next Frontier
01:08:12 DSI and Generative Retrieval: Reimagining Search with Semantic IDs
01:17:59 Building GDM Singapore: Geography, Talent, and the Symposium
01:24:18 Hiring Philosophy: High Stats, Research Taste, and Student Budgets
01:28:49 Health, HRV, and Research Performance: The 23kg Journey
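The self-consistency idea the episode describes (sample several reasoning paths, keep only the final answers, majority-vote) is easy to sketch. `sample_answer` below is a hypothetical stand-in for a temperature-sampled LLM call, with invented answer frequencies; nothing here is Gemini code.

```python
import random
from collections import Counter

def sample_answer(rng: random.Random) -> str:
    # Stand-in for one sampled reasoning path: right answer 60% of the
    # time, two wrong modes the rest of the time (made-up numbers).
    return rng.choices(["42", "41", "24"], weights=[0.60, 0.25, 0.15])[0]

def self_consistent_answer(n_samples: int = 25, seed: int = 0) -> str:
    """Majority vote over independently sampled final answers."""
    rng = random.Random(seed)
    votes = Counter(sample_answer(rng) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

print(self_consistent_answer(n_samples=101))
```

With 60% per-sample accuracy and errors split across modes, the plurality over many samples is correct with very high probability, which is why parallel thinking beats single-shot inference even without a verifier.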
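The on-policy point ("the model generating its own outputs, getting rewarded, and training on its own experience") can be seen in a toy bandit. This minimal REINFORCE loop is our own illustration with invented reward probabilities, not DeepMind's training setup.

```python
import numpy as np

rng = np.random.default_rng(0)
true_reward = np.array([0.2, 0.5, 0.9])  # made-up 3-armed bandit
logits = np.zeros(3)                     # policy parameters

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for _ in range(5000):
    p = softmax(logits)
    a = rng.choice(3, p=p)             # act with the *current* policy
    r = rng.random() < true_reward[a]  # reward earned by its own action
    grad = -p                          # d log p(a) / d logits ...
    grad[a] += 1.0                     # ... for the sampled action a
    logits += 0.1 * (float(r) - 0.5) * grad  # reinforce above-baseline actions

print(softmax(logits))  # probability mass concentrates on the best arm
```

Off-policy imitation would instead fit `logits` to an expert's action distribution; the contrast is that here every update uses an action the current policy actually took, mistakes included.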

Monde Numérique - Jérôme Colombain

Driverless cars fascinate, but their intelligence remains limited. Behind the spectacular demonstrations lies a far more complex technological reality.

Images of vehicles driving themselves are multiplying: Waymo's robotaxis in San Francisco, Zoox's small autonomous car in Las Vegas, and Tesla's demonstrations in Paris, even around the Place de l'Étoile. Yet these vehicles are not fully autonomous. They operate at Level 4, able to drive without a driver, but only in very specific zones that have been extensively mapped and learned in advance.

Unlike a human, who can adapt quickly to unpredictable environments, these cars rely on AI systems trained over thousands of kilometers, with no real understanding of the world. They struggle with ambiguous situations: atypical behavior, repurposed signage, the tacit rules of the road. AI specialist Luc Julia cites the example of a worker carrying a stop sign: where a human understands the situation, an autonomous car may stop indefinitely.

True autonomy, known as Level 5, would require vehicles capable of driving anywhere, in all conditions, without prior preparation. Some consider that goal out of reach unless the whole model is rethought: smart infrastructure, or new forms of AI called world models, capable of understanding and learning the world in real time.

These models are precisely what French researcher Yann LeCun, former chief scientist at Meta and now head of a new startup in Paris, is working on. For its part, NVIDIA unveiled at CES in Las Vegas a new AI system for autonomous vehicles, called Alpamayo R1, intended to let cars reason through complex situations.

The promise is immense, but the road ahead is long. The truly autonomous car is not here yet, and when it will arrive remains an open question.

-----------
♥️ Support: https://mondenumerique.info/don

Crazy Wisdom
Episode #524: The 500-Year Prophecy: Why Buddhism and AI Are Colliding Right Now

Crazy Wisdom

Play Episode Listen Later Jan 19, 2026 60:49


In this episode of the Crazy Wisdom podcast, host Stewart Alsop sits down with Kelvin Lwin for their second conversation exploring the fascinating intersection of AI and Buddhist cosmology. Lwin brings his unique perspective as both a technologist with deep Silicon Valley experience and a serious meditation practitioner who's spent decades studying Buddhist philosophy. Together, they examine how AI development fits into ancient spiritual prophecies, discuss the dangerous allure of LLMs as potentially "asura weapons" that can mislead users, and explore verification methods for enlightenment claims in our modern digital age. The conversation ranges from technical discussions about the need for better AI compilers and world models to profound questions about humanity's role in what Lwin sees as an inevitable technological crucible that will determine our collective spiritual evolution. For more information about Kelvin's work on attention training and AI, visit his website at alin.ai. You can also join Kelvin for live meditation sessions twice daily on Clubhouse at clubhouse.com/house/neowise.

Timestamps
00:00 Exploring AI and Spirituality
05:56 The Quest for Enlightenment Verification
11:58 AI's Impact on Spirituality and Reality
17:51 The 500-Year Prophecy of Buddhism
23:36 The Future of AI and Business Innovation
32:15 Exploring Language and Communication
34:54 Programming Languages and Human Interaction
36:23 AI and the Crucible of Change
39:20 World Models and Physical AI
41:27 The Role of Ontologies in AI
44:25 The Asura and Deva: A Battle for Supremacy
48:15 The Future of Humanity and AI
51:08 Persuasion and the Power of LLMs
55:29 Navigating the New Age of Technology

Key Insights
1. The Rarity of Polymath AI-Spirituality Perspectives: Kelvin argues that very few people are approaching AI through spiritual frameworks because it requires being a polymath with deep knowledge across multiple domains. Most people specialize in one field, and combining AI expertise with Buddhist cosmology requires significant time, resources, and academic background that few possess.
2. Traditional Enlightenment Verification vs. Modern Claims: There are established methods for verifying enlightenment claims in Buddhist traditions, including adherence to the five precepts and overcoming hell rebirth through karmic resolution. Many modern Western practitioners claiming enlightenment fail these traditional tests, often changing the criteria when they can't meet the original requirements.
3. The 500-Year Buddhist Prophecy and Current Timing: We are approximately 60 years into a prophesied 500-year period where enlightenment becomes possible again. This "startup phase of Buddhism revival" coincides with technological developments like the internet and AI, which are seen as integral to this spiritual renaissance rather than obstacles to it.
4. LLMs as UI Solution, Not Reasoning Engine: While LLMs have solved the user interface problem of capturing human intent, they fundamentally cannot reason or make decisions due to their token-based architecture. The technology works well enough to create the illusion of capability, leading people down an asymptotic path away from true solutions.
5. The Need for New Programming Paradigms: Current AI development caters too much to human cognitive limitations through familiar programming structures. True advancement requires moving beyond human-readable code toward agent-generated languages that prioritize efficiency over human comprehension, similar to how compilers already translate high-level code.
6. AI as Asura Weapon in Spiritual Warfare: From a Buddhist cosmological perspective, AI represents an asura (demon-realm) tool that appears helpful but is fundamentally wasteful and disruptive to human consciousness. Humanity exists as the battleground between divine and demonic forces, with AI serving as a weapon that both sides employ in this cosmic conflict.
7. 2029 as Critical Convergence Point: Multiple technological and spiritual trends point toward 2029 as when various systems will reach breaking points, forcing humanity to either transcend current limitations or be consumed by them. This timing aligns with both technological development curves and spiritual prophecies about transformation periods.

Tech&Co
Meta: behind the scenes of Yann LeCun's departure – 19/01

Tech&Co

Play Episode Listen Later Jan 19, 2026 26:50


On Monday, January 19, François Sorel welcomed Frédéric Simottel, BFM Business journalist; Jean Schmitt, president and managing partner of Jolt Capital; and Luc Julia, co-creator of Siri. They looked at the real reasons behind Yann LeCun's departure from Meta, Trump's announcement of a possible deal between Intel and Apple, and the signing of a crucial agreement between the United States and Taiwan, on Tech & Co, la quotidienne, on BFM Business. The show airs Monday through Thursday and can be replayed as a podcast.

Spatial Web AI Podcast
Karl Friston on Yann LeCun & Gary Marcus' Allergy to Authoritative B.S. - This & More - IWAI 2025

Spatial Web AI Podcast

Play Episode Listen Later Jan 10, 2026 104:09


This video features rare, in-depth conversations between Denise Holt, Karl Friston, and Gary Marcus, recorded at the International Workshop on Active Inference 2025 in Montreal, Canada, October 15-17, 2025. #ActiveInference #KarlFriston #GaryMarcus

Filmed on site during the conference, this episode brings together two of the most influential thinkers in AI, neuroscience, and cognitive science to discuss where artificial intelligence is today and where it must go next.

In an extended one-on-one interview, Karl Friston explores Active Inference, the Free Energy Principle, agency, uncertainty, and why current generative AI systems lack true understanding. Friston explains why intelligence must be grounded in generative models of the world, how uncertainty and curiosity drive real intelligence, and why Active Inference represents a first-principles approach to building adaptive, autonomous systems.

The episode also includes a candid interview with Gary Marcus, one of the most prominent critics of large language models. Marcus discusses the limitations of LLMs, the dangers of hallucination and overconfidence, the need for explicit world models, and why symbolic reasoning and probabilistic structure must return to the core of AI systems. He also addresses governance, energy efficiency, and why scaling data and compute alone will not lead to reliable intelligence.

Finally, the video features a joint on-stage discussion with Karl Friston and Gary Marcus, moderated by Tim Verbelen at the International Workshop on Active Inference, where they debate the future of AI beyond deep learning, the role of uncertainty, world models, agency, and what it will take to move from pattern generation to genuine understanding.

This episode was recorded live at IWAI 2025, where researchers from around the world gathered to advance Active Inference as a foundational framework for intelligence in neuroscience, robotics, and artificial systems.

Topics covered in this video include:
· Karl Friston on Active Inference and the Free Energy Principle
· Karl Friston on Yann LeCun
· Gary Marcus on the limits of large language models
· Why uncertainty is essential for intelligence
· Agency, curiosity, and information-seeking behavior
· World models vs generative AI
· Energy efficiency and sustainability in AI
· Why scaling deep learning is not enough
· The future of adaptive, autonomous AI systems

This conversation is essential viewing for anyone interested in the future of AI, neuroscience, cognitive science, and the next generation of intelligent systems.

Recorded at the International Workshop on Active Inference 2025 in Montreal, Canada.

Learn more about Active Inference AI at https://aix.us.com/

#ActiveInference #KarlFriston #GaryMarcus #AIX #SeedIQ #IWAI

The Futurists
Why AI Needs JEPA World Models

The Futurists

Play Episode Listen Later Jan 8, 2026 51:38


The Futurists starts 2026 with a stimulating conversation with serial entrepreneur Matt Miesnieks, a true pioneer of AR/XR and spatial computing. In his new startup venture, Primate AI, Matt is focused on a novel approach to artificial intelligence. He intends to construct spatial and dimensional concepts that replicate the way humans develop a mental model of the real world. Topics in this episode: how the limitations of LLMs create opportunities for new approaches, such as Yann LeCun's JEPA (Joint Embedding Predictive Architecture); the distinction between trying to understand the real world and trying to generate new worlds; why it is so hard to get a robot to cross a busy street safely; why 3D world models are needed; what happens when the real world is machine-readable.
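As background on the JEPA idea mentioned above: instead of generating the next observation pixel by pixel, a joint-embedding predictive setup learns to predict the target's representation in an abstract embedding space, where unpredictable low-level detail is abstracted away. The sketch below is purely illustrative (random projections standing in for learned encoders, toy dimensions), not LeCun's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "encoder": a fixed random projection from 16-dim observations
# to a 4-dim embedding space (a real JEPA learns this encoder).
W_enc = rng.normal(size=(16, 4))

def encode(x):
    return x @ W_enc  # observation (16,) -> embedding (4,)

# Toy "predictor": maps the context embedding to a predicted
# embedding of the hidden/future view.
W_pred = rng.normal(size=(4, 4)) * 0.1

context = rng.normal(size=16)                   # what the model sees
target = context + rng.normal(size=16) * 0.01   # a nearby future view

# Generative objective: explain every one of the 16 raw dimensions.
gen_error = target - context

# JEPA-style objective: match the target only in the 4-dim embedding
# space, so pixel-level noise need not be modeled at all.
pred_embedding = encode(context) @ W_pred
jepa_error = encode(target) - pred_embedding

print(gen_error.shape, jepa_error.shape)  # (16,) vs (4,)
```

The point of the contrast is the error's dimensionality: the generative loss lives in observation space, the joint-embedding loss in a much smaller representation space.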

UiPath Daily
Yann LeCun: LLMs Dead End, Meta AI Wrong

UiPath Daily

Play Episode Listen Later Jan 7, 2026 8:25


Yann LeCun declares LLMs a dead end and Meta's AI strategy wrong: transformers, he argues, are fundamentally incapable of autonomous reasoning, locked in a statistical prison without any multisensory understanding of the world. The Meta scientist champions a JEPA and neurosymbolic revolution to dismantle the pattern-matching dead end. Get the top 40+ AI Models for $20 at AI Box: https://aibox.ai | AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer | Join my AI Hustle Community: https://www.skool.com/aihustle | See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

ChatGPT: News on Open AI, MidJourney, NVIDIA, Anthropic, Open Source LLMs, Machine Learning

Yann LeCun declares the LLM architecture a dead end, incapable of autonomous reasoning and planning. A pattern-matching obsession, he argues, chains transformers that lack the biological world models real intelligence requires. The Meta scientist champions a JEPA revolution to dismantle the trillion-parameter illusion. Get the top 40+ AI Models for $20 at AI Box: https://aibox.ai | AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer | Join my AI Hustle Community: https://www.skool.com/aihustle | See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

AI for Non-Profits
Meta AI Dead: Yann LeCun's LLM Verdict

AI for Non-Profits

Play Episode Listen Later Jan 7, 2026 8:25


Yann LeCun's verdict on Meta AI: LLMs are architecturally doomed without world-model reasoning. Autoregressive chains, he argues, make models statistical parrots incapable of planning or manipulating their environments. His revolutionary call charts hierarchical perception-action loops to replace the worship of scale. Get the top 40+ AI Models for $20 at AI Box: https://aibox.ai | AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer | Join my AI Hustle Community: https://www.skool.com/aihustle | See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Lex Fridman Podcast of AI
Yann LeCun: LLMs = Dead End, Meta AI Wrong

Lex Fridman Podcast of AI

Play Episode Listen Later Jan 7, 2026 8:25


Yann LeCun declares LLMs a dead end and Meta's AI direction wrong: transformers, he argues, are fundamentally incapable of true intelligence. Token prediction leaves models doing pattern recognition without causal world models. The Meta architect champions JEPA's biological fidelity over the religion of scale. Get the top 40+ AI Models for $20 at AI Box: https://aibox.ai | AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer | Join my AI Hustle Community: https://www.skool.com/aihustle | See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

This Week in Startups
2026 Starts with a bang: META AI Drama and Nvidia's $20B Groq Acquisition | E2230

This Week in Startups

Play Episode Listen Later Jan 6, 2026 54:39


This Week In Startups is made possible by: Crusoe Cloud - https://crusoe.ai/build | Uber - http://uber.com/twist | Every.io - http://every.io/ Today's show: Jason and Alex are BACK on TWiST for 2026! This holiday season was anything but calm, with deca-corn acquisitions, massive Polymarket bets, and major new startups breaking from stealth! Jason talks the recent Nvidia-Groq $20B acquisition, a major exit for Chamath as the lead investor back in 2017! Jason delves into how the VC fund math shapes out for pre-seed VC funds vs. Series A VC funds. Jason and Alex delve into the drama swirling around Meta's AI team. Yann LeCun, Meta's former Chief AI Scientist, announced that he would be leaving Meta to become Executive Chairman at AMI Labs. LeCun left the Meta team in the new year, calling the new Chief AI Scientist, Alexandr Wang, inexperienced. LeCun now looks to move AI beyond the era of LLMs at AMI Labs. PLUS Jason and Alex talk about the new social media app Tangle, from Biz Stone, co-founder of Twitter, and Evan Sharp, co-founder of Pinterest. Their startup, West Co, launched Tangle, which seeks to become an "intentional living" app. The two look to improve how humans interact with modern tech. Jason points out that very few news products have worked, but is eager to see how two industry veterans build in the space. Timestamps: (00:00) Why Restaurants are OVER — Peptides and other self medications (06:41) Nvidia Acqui-Hires Groq for $20 BILLION (9:48) Crusoe Cloud: Crusoe is the AI factory company. Reliable infrastructure and expert support. Visit https://crusoe.ai/build to reserve your capacity for the latest GPUs today. (11:00) The VC fund math between seed vs. Series A funds (15:00) Meta buys TWiST 500 Company, Manus! Why it matters. (20:20) Uber AI Solutions: Your trusted partner to get AI to work in the real world. 
Book a demo with them TODAY at http://uber.com/twist (21:24) Why Yann LeCun left Meta, and what could be behind it (25:27) Producer Claude on the Gondola Crash in Zurich (29:13) Jason's Request for Augmented human intelligence (30:11) Every.io - For all of your incorporation, banking, payroll, benefits, accounting, taxes or other back-office administration needs, visit http://every.io/ (32:04) How one trader made $436.8k on one bet on Polymarket! (36:05) Jason's Predictions for 2026 IPOs (40:01) Is news broken? How Tangle is tackling it. (45:53) How much should startups incur in legal expenses? Should founders try to use AI to avoid costs? (50:59) Why Google should let NotebookLM cook, make it a standalone brand! Subscribe to the TWiST500 newsletter: https://ticker.thisweekinstartups.com/ Check out the TWIST500: https://twist500.com Subscribe to This Week in Startups on Apple: https://rb.gy/v19fcp Follow Lon: X: https://x.com/lons Follow Alex: X: https://x.com/alex LinkedIn: https://www.linkedin.com/in/alexwilhelm/ Follow Jason: X: https://twitter.com/Jason LinkedIn: https://www.linkedin.com/in/jasoncalacanis/ Thank you to our partners: (9:48) Crusoe Cloud: Crusoe is the AI factory company. Reliable infrastructure and expert support. Visit https://crusoe.ai/build to reserve your capacity for the latest GPUs today. (20:20) Uber AI Solutions: Your trusted partner to get AI to work in the real world. Book a demo with them TODAY at http://uber.com/twist (30:11) Every.io - For all of your incorporation, banking, payroll, benefits, accounting, taxes or other back-office administration needs, visit http://every.io/

The Marketing AI Show
#189: Is Claude AGI?, AI Change Management, Nvidia-Groq Deal, Meta Acquires Manus, Yann LeCun Speaks Out & OpenAI Preps AI Device

The Marketing AI Show

Play Episode Listen Later Jan 6, 2026 84:05


A Google principal engineer claims Claude Opus 4.5 completed a year's worth of work in a single hour. Now, the industry is grappling with a sudden, massive leap in coding capabilities that has experts warning that everything is about to change. In this week's episode, Paul Roetzer and Mike Kaput dissect the signals that we may have entered the "singularity." They explore the fallout from Yann LeCun's scorched-earth exit from Meta (including claims of "fudged" benchmarks), Sal Khan's "1% Solution" for job displacement, and NVIDIA's strategic acquisition of Groq talent. Show Notes: Access the show notes and show links here Click here to take this week's AI Pulse. Timestamps: 00:00:00 — Intro 00:04:14 — AI Pulse 00:05:41 — How Close Are We to AGI? 00:31:48 — AI Change Management 00:38:18 — OpenAI Is Hiring a "Head of Preparedness" 00:41:59 — Khan Academy Creator Calls for Job Displacement Fund 00:47:30 — Jevons Paradox in AI 00:55:20 — The Rise of Vibe Revenue 00:57:57 — Salesforce Says Trust in LLMs Is Declining 01:03:25 — Nvidia Does Landmark Deal with Groq 01:06:21 — Meta Acquires Manus 01:08:34 — Yann LeCun Speaks Out 01:14:14 — OpenAI Preps for Largely Audio-Based AI Device 01:17:39 — AI Predictions for 2026 01:20:35 — OpenAI Releases Prompt Packs for ChatGPT This episode is brought to you by AI Academy by SmarterX. AI Academy is your gateway to personalized AI learning for professionals and teams. Discover our new on-demand courses, live classes, certifications, and a smarter way to master AI. You can get $100 off an individual purchase or a membership by using code POD100 at academy.smarterx.ai. Visit our website Receive our weekly newsletter Join our community: Slack LinkedIn Twitter Instagram Facebook Looking for content and resources? Register for a free webinar Come to our next Marketing AI Conference Enroll in our AI Academy

Doppelgänger Tech Talk
Pips Predictions 2026 & Venezuela-Angriff #525

Doppelgänger Tech Talk

Play Episode Listen Later Jan 6, 2026 112:56


We're gripped by athletic ambition: which Philipp presses more on the bench? We then discuss the US attack on Venezuela, Maduro, oil reserves, and the geopolitical implications for Trump and Russia. Microsoft turns Office 365 into Copilot, OpenAI and Jony Ive are building a pen, Yann LeCun leaves Meta dissatisfied, and Grok lets users undress minors with AI. Also: Brookfield enters the cloud business, Nvidia presents Vera Rubin, and OpenAI becomes a health advisor. Predictions for 2026: US interest rates, OpenAI and Anthropic IPOs, energy, Chinese AI stocks, top performers of the Mag7, SpaceX & EchoStar, robotics, prediction markets & crypto, hardware inflation, "human-made" as a new organic-style seal, and many more. Support our podcast and discover our advertising partners' offers at doppelgaenger.io/werbung. Thank you! Philipp Glöckler and Philipp Klöckner talk today about: (00:00:00) Intro & Fitness (00:05:17) Venezuela (00:13:16) Nvidia Vera Rubin platform (00:15:33) Microsoft Office becomes Copilot (00:17:40) Sam Altman & Jony Ive: the pen (00:22:35) Yann LeCun (00:26:18) Meta disappoints (00:31:41) Salesforce backtracks (00:33:19) OpenAI becomes a health advisor (00:36:24) China regulates AI companions (00:38:46) Polymarket Venezuela insider bet (00:43:35) Grok undress feature (00:45:53) Elon Musk back with Trump (00:50:19) Predictions start: US rates below 3% (00:54:28) AI crash in Q1 unlikely (00:56:24) Tax credits for data centers (01:00:15) OpenAI & Anthropic trillion-dollar IPOs (01:04:41) Energy more important than chips (01:07:00) Nvidia energy (01:11:26) Chinese AI stocks (01:16:26) Amazon & Google (01:20:33) AMI Labs to Apple (01:22:00) Mira Murati to Salesforce/SAP (01:24:25) SpaceX (01:28:38) Robotics (01:30:56) China builds AR glasses (01:33:13) Prediction markets vs crypto (01:35:38) Hardware inflation (01:38:04) Human-made as an organic-style seal (01:40:22) Roll-up boom 2026 (01:42:40) Anti-NGO campaign 
(01:44:59) Deepfakes in state elections (01:47:22) Persuasion instead of hallucination (01:49:40) Gemini stays ad-free Show notes: Fitness Philipp - linkedin.com Nvidia launches Vera Rubin AI platform at CES 2026 - theverge.com OpenAI's mysterious Jony Ive device could be a pen - in.mashable.com AI pioneer criticizes Meta managers as inexperienced - cnbc.com Company profile - leinummer.de Salesforce - timesofindia.indiatimes.com ChatGPT plays doctor - theregister.com China publishes draft rules for virtual companions - the-decoder.de Tesla's fourth-quarter sales fell more than expected - theverge.com Someone made $400K by predicting Maduro's capture. Here's what happened - axios.com Maduro's fall fuels bond rally - bloomberg.com Explicit bikini images of minors - theverge.com Can you remove a racist and pedophile from this photo? - x.com Elon Musk and Trump: dinner with Melania and Maduro - independent.co.uk SpaceX offers Starlink in Venezuela free of charge - heise.de

The AI Breakdown: Daily Artificial Intelligence News and Discussions

Why “context graphs” have suddenly become one of the most important ideas in enterprise AI, and what they reveal about why agents fail or succeed at real work. This episode explains the core idea behind context graphs, how they differ from systems of record and knowledge graphs, and why capturing decision traces — the why, not just the what — may be the key to scalable autonomy inside organizations. In the headlines: AI wearables make another run at relevance, China reports early success using AI for cancer detection, X faces global backlash over Grok moderation failures, and Yann LeCun publicly breaks with Meta's AI strategy. Brought to you by: KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode: https://www.kpmg.us/AIpodcasts | Zencoder - From vibe coding to AI-first engineering - http://zencoder.ai/zenflow | Robots & Pencils - Cloud-native AI solutions that power results - https://robotsandpencils.com/ | The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score. The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614 Interested in sponsoring the show? sponsors@aidailybrief.ai
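To make the decision-trace idea concrete: where a system of record stores the outcome and a knowledge graph stores facts, a context graph also stores the rationale behind each action so an agent can later retrieve the why. The schema, field names, and example below are a minimal hypothetical sketch, not any vendor's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionTrace:
    # One node in the context graph: not just what happened, but why.
    actor: str
    action: str
    rationale: str                      # the "why" a system of record drops
    inputs_consulted: list = field(default_factory=list)

@dataclass
class ContextGraph:
    traces: list = field(default_factory=list)

    def record(self, trace: DecisionTrace) -> None:
        self.traces.append(trace)

    def why(self, action: str) -> list:
        # Retrieve rationales behind past decisions, so an agent facing
        # a similar case can follow precedent instead of guessing.
        return [t.rationale for t in self.traces if t.action == action]

graph = ContextGraph()
graph.record(DecisionTrace(
    actor="credit-agent",
    action="waive_late_fee",
    rationale="customer tenure > 5 years and first incident",
    inputs_consulted=["crm:account-123", "billing:invoice-789"],
))
print(graph.why("waive_late_fee"))
# ['customer tenure > 5 years and first incident']
```

A real implementation would link traces to the entities they touch (the edges of the graph); the sketch only shows the trace-plus-retrieval pattern that distinguishes the idea from a plain fact store.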

AI Chat: ChatGPT & AI News, Artificial Intelligence, OpenAI, Machine Learning

In this episode, we break down why Yann LeCun is leaving Meta after admitting Llama 4 benchmarks were manipulated and internal trust collapsed. We also explain why LeCun believes large language models are a dead end and how his new startup plans to replace them with world models and a radically different vision for AI. Get the top 40+ AI Models for $20 at AI Box: https://aibox.ai | AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer | Join my AI Hustle Community: https://www.skool.com/aihustle | See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

UiPath Daily
The AI Pioneer Betting Big on World Models

UiPath Daily

Play Episode Listen Later Dec 24, 2025 7:29


Yann LeCun helped shape modern AI. Now he's pushing its next evolution. We explore his latest gamble. Get the top 40+ AI Models for $20 at AI Box: https://aibox.ai | AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer | Join my AI Hustle Community: https://www.skool.com/aihustle | See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

TechCrunch Startups – Spoken Edition
Yann LeCun confirms his new ‘world model' startup, reportedly seeks $5B+ valuation

TechCrunch Startups – Spoken Edition

Play Episode Listen Later Dec 22, 2025 5:25


Renowned AI scientist Yann LeCun confirmed on Thursday the worst-kept secret in the tech world: he has indeed launched a new startup, though he said he will not run the new company as its CEO. Learn more about your ad choices. Visit podcastchoices.com/adchoices

Midjourney
Why Yann LeCun Thinks Current AI Is Missing Common Sense

Midjourney

Play Episode Listen Later Dec 20, 2025 7:29


Common sense reasoning remains elusive. World models aim to fix that. We explore the gap. Get the top 40+ AI Models for $20 at AI Box: https://aibox.ai | AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer | Join my AI Hustle Community: https://www.skool.com/aihustle | See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

AI Chat: ChatGPT & AI News, Artificial Intelligence, OpenAI, Machine Learning
Yann LeCun Looking for $3B+ for "World Model" AI Startup

AI Chat: ChatGPT & AI News, Artificial Intelligence, OpenAI, Machine Learning

Play Episode Listen Later Dec 19, 2025 7:29


In this episode, we break down how Yann LeCun publicly confirmed his new "world model" AI startup, along with reports that it's targeting a valuation north of $5 billion. We explore what world models are, why this approach matters for the future of AI, and what such an ambitious valuation signals about where the AI industry is heading next. Get the top 40+ AI Models for $20 at AI Box: https://aibox.ai | AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer | Join my AI Hustle Community: https://www.skool.com/aihustle | See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Doppelgänger Tech Talk
KI ist keine Zahnpasta | Funding-Woche: Databricks, Waymo, OpenAI, Lovable & Yann LeCuns AMI Labs #520

Doppelgänger Tech Talk

Play Episode Listen Later Dec 19, 2025 91:08


The week of mega funding rounds: Databricks is valued at $134 billion, Waymo is targeting $100 billion, and OpenAI is soon expected to be worth $830 billion. Amazon wants to invest $10 billion in OpenAI. Coinbase launches prediction markets and tokenized stocks. Revolut plans $3.5 billion in profit at a 40% margin, more profitable than most banks. Sweden's Lovable races to a $6.6 billion valuation. The US Department of Commerce threatens the EU with retaliation over tech regulation, explicitly naming SAP, Siemens, and DHL. The TikTok deal is expected in January: Oracle, Silver Lake, and Abu Dhabi take over 45%. An Andreessen-backed startup builds synthetic AI influencers with phone farms. Instacart is under FTC fire over AI price manipulation. Yann LeCun founds AMI Labs with a €500 million seed round. Trade Republic reaches a 12.5 billion valuation. Trump Media merges with a fusion energy company. When will the great Doppelgänger cola blind tasting finally happen? Support our podcast and discover our advertising partners' offers at doppelgaenger.io/werbung. Thank you! Philipp Glöckler and Philipp Klöckner talk today about: (00:00:00) Intro (00:00:49) Coinbase: stocks & prediction markets (00:08:14) Databricks $134B valuation (00:13:39) Revolut: 40% profit margin (00:19:26) Waymo $100B valuation (00:36:15) Oscars move to YouTube (00:37:22) Sam Altman: AI like toothpaste? (00:46:16) Lovable $6.6B valuation (00:49:53) Amazon invests $10B in OpenAI (00:53:48) OpenAI $830B valuation (00:56:55) Trump threatens EU over tech regulation (01:00:56) Andreessen startup builds fake influencers (01:08:13) Instacart AI price manipulation (01:12:25) Yann LeCun founds AMI Labs (01:14:50) TikTok deal in January (01:18:47) Trump Media merges with fusion company (01:24:14) Trade Republic $12.5B 
Show notes: Coinbase prediction markets, stock trading, stablecoins - cnbc.com Databricks raises capital at $134 billion valuation - wsj.com Revolut targets $9B revenue and $3.5B profit in 2026 - connectingthedotsinfin.tech Waymo plans funding at $100 billion valuation - bloomberg.com Oscars move from ABC to YouTube in 2029 - hollywoodreporter.com OpenAI's ChatGPT improves image generation - bloomberg.com OpenAI talks: $10 billion from Amazon for AI chips - theinformation.com OpenAI's new funding round could value the startup at up to $830 billion - wsj.com Startup Lovable raises $330 million - nytimes.com EU penalties for US tech companies - nytimes.com Hack reveals a16z-backed phone farm flooding TikTok with AI influencers - 404media.co FTC investigates Instacart's AI pricing tool - reuters.com Instacart FTC settlement over deceptive billing - cnbc.com Seb Johnson on X: "Meta's former Chief AI Officer raises €500M." - x.com TikTok completes sale of its US unit after years-long saga - axios.com Trump Media - ft.com "There will be no poverty, universal high income." - x.com Trade Republic: two wealthy European families join the billion-dollar deal - manager-magazin.de Phishing attempt at Outfittery: data leak at the clothing retailer? - heise.de

The Six Five with Patrick Moorhead and Daniel Newman
EP 286: NVIDIA Earnings: Market Reactions and the Future of AI Infrastructure

The Six Five with Patrick Moorhead and Daniel Newman

Play Episode Listen Later Dec 8, 2025 59:51


On this episode of The Six Five Pod, hosts Patrick Moorhead and Daniel Newman discuss the latest tech news stories that made headlines. This week's handpicked topics include:  The Decode US, Saudi tout new business deals at investment forum https://www.reuters.com/world/middle-east/saudi-crown-prince-seeks-burnish-image-with-corporate-americas-top-executives-2025-11-19/ AMD, Cisco, and HUMAIN to form joint venture to deliver world-leading AI infrastructure https://newsroom.cisco.com/c/r/newsroom/en/us/a/y2025/m11/amd-cisco-and-humain-to-form-joint-venture-to-deliver-world-leading-ai-infrastructure.html Adobe, Qualcomm partner with Humain on generative AI for Middle East https://www.reuters.com/world/middle-east/adobe-qualcomm-partner-with-humain-generative-ai-middle-east-2025-11-19/ Qualcomm to open engineering hub in Saudi Arabia, part of a series of AI deals in kingdom https://finance.yahoo.com/news/qualcomm-to-open-engineering-hub-in-saudi-arabia-part-of-a-series-of-ai-deals-in-kingdom-180008935.html Elon Musk's xAI will be first customer for Nvidia-backed data center in Saudi Arabia https://www.cnbc.com/2025/11/19/musks-xai-will-be-customer-for-nvidia-data-center-in-saudi-arabia.html Microsoft, Nvidia to invest in Anthropic as Claude maker commits $30 billion to Azure https://finance.yahoo.com/news/anthropic-commits-30-billion-microsoft-150718625.html https://blogs.nvidia.com/blog/microsoft-nvidia-anthropic-announce-partnership/ https://x.com/danielnewmanUV/status/1990802932602999149 https://x.com/danielnewmanUV/status/1990822426884682020 https://x.com/danielnewmanUV/status/1990865570242187267 Microsoft Ignite - Announcements https://azure.microsoft.com/en-us/blog/azure-at-microsoft-ignite-2025-all-the-intelligent-cloud-news-explained/ https://x.com/PatrickMoorhead/status/1990845768178282745 https://x.com/PatrickMoorhead/status/1990859751006351461 https://x.com/PatrickMoorhead/status/1990861596558774469 
https://x.com/danielnewmanUV/status/1990848107223933309?s=20 Google Gemini 3 Launch https://blog.google/products/gemini/gemini-3/ https://x.com/danielnewmanUV/status/1990875878549512251 https://x.com/PatrickMoorhead/status/1991150119891223015 Yann LeCun Leaving Meta https://www.businessinsider.com/meta-ai-yann-lecun-llm-world-model-intelligence-criticism-2025-11  Pat & Dan interview with Yann LeCun at last year's Davos: https://youtu.be/0gmDufvWlWE  Cloudflare resolves outage that caused widespread internet disruptions, taking down X, ChatGPT for some users https://www.yahoo.com/news/article/cloudflare-resolves-outage-that-caused-widespread-internet-disruptions-taking-down-x-chatgpt-for-some-users-141316666.html OpenText World 2025 - Recap https://x.com/PatrickMoorhead/status/1990806348393554203 https://x.com/danielnewmanUV/status/1990805006661136469 Supercompute 2025 - Recap https://x.com/danielnewmanUV/status/1991231108646678537?s=20 The Flip: Can Google unseat OpenAI as the new benchmark of AI? (The Flip) Bulls & Bears Delayed September report shows U.S. 
added 119,000 jobs, more than expected; unemployment rate at 4.4% https://www.cnbc.com/2025/11/20/jobs-report-september-2025.html Fed minutes show divide over October rate cut and cast doubt about December https://www.cnbc.com/2025/11/19/fed-minutes-october-2025.html Lenovo Earnings https://news.lenovo.com/pressroom/press-releases/q2-fy-2025-26/ NVIDIA Earnings https://www.cnn.com/2025/11/19/tech/nvidia-earnings-ai-bubble-fears https://x.com/danielnewmanUV/status/1990526850171613211 https://x.com/danielnewmanUV/status/1990538832295702574 https://x.com/danielnewmanUV/status/1991156846900515130 https://x.com/PatrickMoorhead/status/1991544029675135247?s=20 https://x.com/PatrickMoorhead/status/1991540564794220778?s=20 Amazon Raises $15 Billion in First US Bond Sale in Three Years https://finance.yahoo.com/news/amazon-kicks-off-first-us-132051192.html Databricks in talks to raise at $130B valuation https://techcrunch.com/2025/11/18/databricks-reportedly-in-talks-to-raise-funding-at-a-130b-valuation/

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

From building Medal into a 12M-user game clipping platform with 3.8B highlight moments to turning down a reported $500M offer from OpenAI (https://www.theinformation.com/articles/openai-offered-pay-500-million-startup-videogame-data) and raising a $134M seed from Khosla (https://techcrunch.com/2025/10/16/general-intuition-lands-134m-seed-to-teach-agents-spatial-reasoning-using-video-game-clips/) to spin out General Intuition, Pim is betting that world models trained on peak human gameplay are the next frontier after LLMs.

We sat down with Pim to dig into why game highlights are "episodic memory for simulation" (and how Medal's privacy-first action labels became a world-model goldmine: https://medal.tv/blog/posts/enabling-state-of-the-art-security-and-protections-on-medals-new-apm-and-controller-overlay-features), what it takes to build fully vision-based agents that just see frames and output actions in real time, how General Intuition transfers from games to real-world video and then into robotics, why world models and LLMs are complementary rather than rivals, what founders with proprietary datasets should know before selling or licensing to labs, and his bet that spatial-temporal foundation models will power 80% of future atoms-to-atoms interactions in both simulation and the real world.

We discuss:
* How Medal's 3.8B action-labeled highlight clips became a privacy-preserving goldmine for world models
* Building fully vision-based agents that only see frames and output actions yet play like (and sometimes better than) humans
* Transferring from arcade-style games to realistic games to real-world video using the same perception–action recipe
* Why world models need actions, memory, and partial observability (smoke, occlusion, camera shake) vs. "just" pretty video generation
* Distilling giant policies into tiny real-time models that still navigate, hide, and peek corners like real players
* Pim's path from RuneScape private servers, Tourette's, and reverse engineering to leading a frontier world-model lab
* How data-rich founders should think about valuing their datasets, negotiating with big labs, and deciding when to go independent
* GI's first customers: replacing brittle behavior trees in games, engines, and controller-based robots with a "frames in, actions out" API
* Using Medal clips as "episodic memory of simulation" to move from imitation learning to RL via world models and negative events
* The 2030 vision: spatial-temporal foundation models that power the majority of atoms-to-atoms interactions in simulation and the real world

Pim: X: https://x.com/PimDeWitte LinkedIn: https://www.linkedin.com/in/pimdw/
Where to find Latent Space: X: https://x.com/latentspacepod

Full Video Episode

Timestamps
00:00:00 Introduction and Medal's Gaming Data Advantage
00:02:08 Exclusive Demo: Vision-Based Gaming Agents
00:06:17 Action Prediction and Real-World Video Transfer
00:08:41 World Models: Interactive Video Generation
00:13:42 From Runescape to AI: Pim's Founder Journey
00:16:45 The Research Foundations: Diamond, Genie, and SEMA
00:33:03 Vinod Khosla's Largest Seed Bet Since OpenAI
00:35:04 Data Moats and Why GI Stayed Independent
00:38:42 Self-Teaching AI Fundamentals: The Francois Fleuret Course
00:40:28 Defining World Models vs Video Generation
00:41:52 Why Simulation Complexity Favors World Models
00:43:30 World Labs, Yann LeCun, and the Spatial Intelligence Race
00:50:08 Business Model: APIs, Agents, and Game Developer Partnerships
00:58:57 From Imitation Learning to RL: Making Clips Playable
01:00:15 Open Research, Academic Partnerships, and Hiring
01:02:09 2030 Vision: 80 Percent of Atoms-to-Atoms AI Interactions

Get full access to Latent.Space at www.latent.space/subscribe
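As a concrete picture of the "frames in, actions out" interface the episode describes: the agent receives only raw image frames and returns controller actions, with no access to game-engine state. The class name, action set, and brightness heuristic below are invented for illustration and bear no relation to General Intuition's actual models or API.

```python
import numpy as np

class FramesInActionsOutAgent:
    """Toy policy with the interface described above: it sees only raw
    frames and emits controller actions, never engine state."""

    ACTIONS = ["forward", "back", "left", "right", "jump"]

    def __init__(self, seed: int = 0):
        self.rng = np.random.default_rng(seed)

    def act(self, frame: np.ndarray) -> str:
        # A real model would run a learned vision policy here; this toy
        # stand-in just hashes mean brightness into an action index.
        idx = int(frame.mean() * 1000) % len(self.ACTIONS)
        return self.ACTIONS[idx]

agent = FramesInActionsOutAgent()
frame = np.zeros((64, 64, 3))  # a blank 64x64 RGB frame
print(agent.act(frame))        # one of ACTIONS
```

The design point is the narrow contract: anything that can render frames (a game, an engine, a robot camera) can drive the same agent, which is why the episode frames it as a drop-in replacement for hand-written behavior trees.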


The Seen and the Unseen - hosted by Amit Varma
Ep 432: Vasant Dhar's Lifetime in Artificial Intelligence

Dec 1, 2025 · 207:40


He's been working in AI since the late 1970s, and started a pioneering machine learning hedge fund in the 1990s. Now he's a professor, a podcaster, a Substacker, a yoda -- and has just written a cracking book on the subject. Vasant Dhar joins Amit Varma in episode 432 of The Seen and the Unseen to discuss the life and times of AI through the life and times of Vasant Dhar. (FOR FULL LINKED SHOW NOTES, GO TO SEENUNSEEN.IN.)

Also check out:
1. Vasant Dhar on Twitter, LinkedIn, Google Scholar and NYU Stern.
2. Thinking With Machines: The Brave New World of AI -- Vasant Dhar.
3. Brave New World -- Vasant Dhar's podcast.
4. Vasant Dhar's Brave New World on Substack.
5. Brave New World — Episode 203 of The Seen and the Unseen (w Vasant Dhar).
6. Brave New World -- Aldous Huxley.
7. Death of a Salesman -- Arthur Miller.
8. Aldous Huxley interviewed by Mike Wallace.
9. Anil Seth On The Science of Consciousness – Episode 94 of Brave New World.
10. How the Mind Works -- Steven Pinker.
11. Anthony Zador on How our Brains Work — Episode 35 of Brave New World.
12. The Naked Sun -- Isaac Asimov.
13. Human and Artificial Intelligence in Healthcare — Episode 4 of Brave New World (w Eric Topol).
14. Daniel Kahneman on How Noise Hampers Judgement — Episode 21 of Brave New World.
15. The Nature of Intelligence — Episode 7 of Brave New World (w Yann LeCun).
16. Philip Tetlock on the Art of Forecasting — Episode 31 of Brave New World.
17. Superforecasting: The Art and Science of Prediction — Philip Tetlock and Dan Gardner.
18. "When you control the mail..." -- Clip from Seinfeld.
19. The Future of Liberal Education — Episode 11 of Brave New World (w Michael S Roth).
20. The Surface Area of Serendipity -- Episode 39 of Everything is Everything.
21. When Should We Trust Machines? -- Vasant Dhar's TEDx talk from 2018.
22. From Strength to Strength -- Arthur Brooks.
23. The Innovator's Dilemma -- Clayton Christensen.
24. Raghu Sundaram on Building a Great University -- Episode 88 of Brave New World.
25. Power and Prediction -- Ajay Agrawal, Joshua Gans and Avi Goldfarb.
26. The Paperclip Maximiser.
27. The Wealth of Nations -- Adam Smith.
28. The Theory of Moral Sentiments -- Adam Smith.
29. Yes Minister and Yes Prime Minister — Jonathan Lynn and Antony Jay.
30. Aswath Damodaran on Investing — Episode 33 of Brave New World.
31. The Damodaran Bot.
32. Dmitry Rinberg on the Mysteries of Smell — Episode 62 of Brave New World.
33. Alex Wiltschko on the Sense of Smell — Episode 81 of Brave New World.
34. Sandeep Robert Datta on Smell and the Brain -- Episode 90 of Brave New World.
35. Alex Wiltschko on Digitizing Scent -- Episode 97 of Brave New World.
36. A Billion Wicked Thoughts -- Ogi Ogas and Sai Gaddam.
37. Being You: A New Science of Consciousness -- Anil Seth.
38. Noise -- Daniel Kahneman, Olivier Sibony and Cass Sunstein.
39. Thinking, Fast and Slow -- Daniel Kahneman.

This episode is sponsored by CTQ Compounds. Check out The Daily Reader and FutureStack. Use the code UNSEEN for Rs 2500 off. Amit Varma runs a course called Life Lessons, which aims to be a launchpad towards learning essential life skills all of you need. For more details, and to sign up, click here. Amit and Ajay Shah also bring out a weekly YouTube show, Everything is Everything. Have you watched it yet? You must! And have you read Amit's newsletter? Subscribe right away to The India Uncut Newsletter! It's free! Also check out Amit's online course, The Art of Clear Writing. Episode art: 'The Mage' by Simahina.

Rebuild
417: Remote Barista (N)

Nov 25, 2025 · 199:40


With guest Naoki Hiroshima, we talked about Black Friday, espresso machines, the iPhone Air, AI, self-driving cars, and more.

Show Notes
モネとフーフー / しゃべる犬と猫 Monet&Foufou
Can fish feel pain? That may be the wrong question.
The Camelizer
New Gaggia Classic E24
The next iPhone Air has reportedly been delayed
LCB (low-cost express bus)
The AirPods Pro 3 Flight Problem
Zohran Mamdani-Donald Trump Oval meeting: What to know
Elizabeth Warren Puts Amazon On After Mass Outages: "They Are Too Big"
Welcoming The Browser Company to Atlassian
A judge lets Google keep Chrome but levies other penalties
ChatGPT Atlas
No firm is immune if AI bubble bursts, Google CEO tells BBC
Meta's chief AI scientist Yann LeCun reportedly plans to leave to build his own startup
This baby with a head camera helped teach an AI how kids learn language
Apple's Find My enables sharing location of lost items with third parties
Waymo - From the road
The "otherworldly battle" between Sota Fujii and his rival Takumi Ito

The Generative AI Meetup Podcast
Gemini 3, GPT-5.1, Anti-Gravity & Yann LeCun's Exit: Are We Near AGI or Just in a Bubble?

Nov 22, 2025 · 60:59


Youtube Channel: https://www.youtube.com/@GenerativeAIMeetup
Mark's Travel Vlog: https://www.youtube.com/@kumajourney11
Mark's Personal Youtube Channel: https://www.youtube.com/@markkuczmarski896
Attend a live event: https://genaimeetup.com/
Shashank's LinkedIn: https://www.linkedin.com/in/shashu10/

In this episode of the Generative AI Meetup Podcast, Mark (in Ohio) and Shashank (in India) finally sit down after a month of travel to unpack a very eventful stretch in AI. They dive into Google's new Gemini 3 Pro, its standout scores on Humanity's Last Exam and ARC-AGI, and why these reasoning benchmarks matter more than yet another near-perfect standardized test score. Mark also makes a public feature request to DeepMind: please increase Gemini's max output tokens.

From there they get hands-on with the developer experience:
* Google's new Anti-Gravity coding IDE (and how it compares to Cursor)
* Using GPT-5.1 Codex High in Cursor's autonomous “plan mode”
* Why long context and long output windows are critical for deep research and book-length projects

The conversation then shifts to the bigger picture:
* LLMs as therapists, sycophancy, safety, and the danger of AI always agreeing with you
* Mark's rant on robotics, humanoid robots, and a coming age of extreme abundance where robots handle most physical and intellectual work
* Why learning to code may become the mental equivalent of going to the gym—a “brain gym” in a world where AI can do most practical tasks

They also cover the latest AI industry drama and milestones:
* Yann LeCun leaving Meta, what that might signal about Big Tech AI labs, and how godfathers like Hinton, LeCun, and Bengio see the road to AGI
* DeepMind's new game-playing agent and why world models in 3D environments matter for real-world robotics
* Genspark hitting unicorn status and what it means for “ChatGPT wrapper” startups
* Co-inventing a new term on air: a “narwhal” = a trillion-dollar private company

If you're curious about where frontier models, coding agents, robotics, and AGI trajectories all intersect—plus some philosophical musing on jobs, meaning, and abundance—this episode is for you.

Let's Talk AI
#225 - GPT 5.1, Kimi K2 Thinking, Remote Labor Index

Nov 21, 2025 · 78:14


Our 225th episode with a summary and discussion of last week's big AI news! Recorded on 11/16/2025. Hosted by Andrey Kurenkov and co-hosted by Michelle Lee. Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai. Read our text newsletter and comment on the podcast at https://lastweekin.ai/

In this episode:
* New AI model releases include GPT-5.1 from OpenAI and Ernie 5.0 from Baidu, each with updated features and capabilities.
* Self-driving technology advancements from Baidu's Apollo Go and Pony AI's IPO highlight significant progress in the automotive sector.
* Startup funding updates include Incept taking $50M for diffusion models, while Cursor and Gamma secure significant valuations for coding and presentation tools respectively.
* AI-generated content is gaining traction with songs topping charts and new marketplaces for AI-generated voices, indicating evolving trends in synthetic media.

Timestamps:
(00:01:19) News Preview
Tools & Apps
(00:02:13) OpenAI says the brand-new GPT-5.1 is ‘warmer' and has more ‘personality' options | The Verge
(00:04:51) Baidu Unveils ERNIE 5.0 and a Series of AI Applications at Baidu World 2025, Ramps Up Global Push
(00:07:00) ByteDance's Volcano Engine debuts coding agent at $1.3 promo price
(00:08:04) Google will let users call stores, browse products, and check out using AI | The Verge
(00:10:41) Fei-Fei Li's World Labs speeds up the world model race with Marble, its first commercial product | TechCrunch
(00:13:30) OpenAI says it's fixed ChatGPT's em dash problem | TechCrunch
Applications & Business
(00:16:01) Anthropic announces $50 billion data center plan | TechCrunch
(00:18:06) Baidu teases next-gen AI training, inference accelerators • The Register
(00:20:50) Meta chief AI scientist Yann LeCun plans to exit and launch own start-up
(00:24:41) Amazon Demands Perplexity Stop AI Tool From Making Purchases - Bloomberg
(00:27:32) AI PowerPoint-killer Gamma hits $2.1B valuation, $100M ARR, founder says | TechCrunch
(00:29:33) Inception raises $50 million to build diffusion models for code and text | TechCrunch
(00:31:14) Coding assistant Cursor raises $2.3B 5 months after its previous round | TechCrunch
(00:33:56) China's Baidu says it's running 250,000 robotaxi rides a week — same as Alphabet's Waymo
(00:35:26) Driverless Tech Firm Pony AI Raises $863 Million in HK Listing
Projects & Open Source
(00:36:30) Moonshot's Kimi K2 Thinking emerges as leading open source AI
Research & Advancements
(00:39:22) [2510.26787] Remote Labor Index: Measuring AI Automation of Remote Work
(00:45:21) OpenAI Researchers Train Weight Sparse Transformers to Expose Interpretable Circuits - MarkTechPost
(00:49:34) Kimi Linear: An Expressive, Efficient Attention Architecture
(00:53:33) Watch Google DeepMind's new AI agent learn to play video games | The Verge
(00:57:34) arXiv Changes Rules After Getting Spammed With AI-Generated 'Research' Papers
Policy & Safety
(00:59:35) Stability AI largely wins UK court battle against Getty Images over copyright and trademark | AP News
(01:01:48) Court rules that OpenAI violated German copyright law; orders it to pay damages | TechCrunch
(01:03:48) Microsoft's $15.2B UAE investment turns Gulf State into test case for US AI diplomacy | TechCrunch
Synthetic Media & Art
(01:06:39) An AI-Generated Country Song Is Topping A Billboard Chart, And That Should Infuriate Us All | Whiskey Riff
(01:10:59) Xania Monet is the first AI-powered artist to debut on a Billboard airplay chart, but she likely won't be the last | CNN
(01:13:34) ElevenLabs' new AI marketplace lets brands use famous voices for ads | The Verge

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Sharp Tech with Ben Thompson
(Preview) How Apple Changed the Cellular Economy, What SpaceX Wants to Do With Spectrum, Airlines and Carriers, Yann LeCun Departs Meta

Nov 14, 2025 · 23:04


Andrew and Ben analyze SpaceX's nearly $20 billion in purchases by first touching on cell carrier history and the power dynamics that iPhones upended 20 years ago. Then: Understanding the SpaceX business and Musk's approach to strategy, what Starlink is trying to do with satellite internet on airlines, a power play with cell carriers that appears to have failed earlier this year, and now, a Plan B that may involve an acquisition and a bid to partner with Apple. At the end: Why Yann LeCun leaving Meta is the right outcome for both sides, a question about big companies and innovation spawns regulation cautionary tales and a cigar anecdote, and wondering about the impact of big tech on AI's future.

This Week in Google (MP3)
IM 845: Pregnant With 83 Digital Assistants - Are AIs Really Alien Minds?

Nov 13, 2025 · 169:58


Can radical optimism about AI truly shape our future, or are we stuck in a cycle of doom-and-hype? This episode features an unfiltered debate with Wired co-founder Kevin Kelly on why most fears about artificial intelligence might be missing the bigger picture.

'Vibe Coding' Named Word of the Year By Collins Dictionary
OpenAI CFO Says Company Isn't Seeking Government Backstop, Clarifying Prior Comment
Montana Becomes First State to Enshrine 'Right to Compute' Into Law - Montana Newsroom
Sam Altman's Worldcoin Project Struggles Toward Billion-User Ambition With 17.5 Million Sign-Ups
Meta's chief AI scientist Yann LeCun reportedly plans to leave to build his own startup
Exclusive: US Army to buy 1 million drones, in major acquisition ramp-up
Facebook Dating Is a Surprise Hit For the Social Network - Slashdot
12 Things I've Heard Boomers Say That I Agree With 100%
The FBI has subpoenaed the domain registrar of archive.today, demanding information about the owner of the archiving site as part of a criminal investigation
How Similar Are Grokipedia and Wikipedia?
What We Can Learn From Brain Organoids
If the US Has to Build Data Centers, Here's Where They Should Go
LLM-Based Multi-Agent System for Simulating and Analyzing Marketing and Consumer Behavior
No. 10's synthetic voters
Tim Wu and Cory Doctorow's NPCs: Non-Player Consumers
Eric Schmidt: This Is No Way to Rule a Country
My torture for you
Ohio State to hire 100 new faculty with AI expertise
'A frightening development': How AI-Articles are flooding the internet with fake news
Internet Archive's legal fights are over, but its founder mourns what was lost
YouTube TV deal reportedly hung up on ESPN pricing as Disney loses $30 million a week
How people really use ChatGPT, according to 47,000 conversations shared online
Tort Law museum visit
Bread and Puppet Museum
We're famous in Germany
Brand new bridge

Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Guest: Kevin Kelly
Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.
Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit
Sponsors: zapier.com/machines ventionteams.com/twit Melissa.com/twit agntcy.org

All TWiT.tv Shows (MP3)
Intelligent Machines 845: Pregnant With 83 Digital Assistants

Nov 13, 2025 · 184:10


Can radical optimism about AI truly shape our future, or are we stuck in a cycle of doom-and-hype? This episode features an unfiltered debate with Wired co-founder Kevin Kelly on why most fears about artificial intelligence might be missing the bigger picture.

'Vibe Coding' Named Word of the Year By Collins Dictionary
OpenAI CFO Says Company Isn't Seeking Government Backstop, Clarifying Prior Comment
Montana Becomes First State to Enshrine 'Right to Compute' Into Law - Montana Newsroom
Sam Altman's Worldcoin Project Struggles Toward Billion-User Ambition With 17.5 Million Sign-Ups
Meta's chief AI scientist Yann LeCun reportedly plans to leave to build his own startup
Exclusive: US Army to buy 1 million drones, in major acquisition ramp-up
Facebook Dating Is a Surprise Hit For the Social Network - Slashdot
12 Things I've Heard Boomers Say That I Agree With 100%
The FBI has subpoenaed the domain registrar of archive.today, demanding information about the owner of the archiving site as part of a criminal investigation
How Similar Are Grokipedia and Wikipedia?
What We Can Learn From Brain Organoids
If the US Has to Build Data Centers, Here's Where They Should Go
LLM-Based Multi-Agent System for Simulating and Analyzing Marketing and Consumer Behavior
No. 10's synthetic voters
Tim Wu and Cory Doctorow's NPCs: Non-Player Consumers
Eric Schmidt: This Is No Way to Rule a Country
My torture for you
Ohio State to hire 100 new faculty with AI expertise
'A frightening development': How AI-Articles are flooding the internet with fake news
Internet Archive's legal fights are over, but its founder mourns what was lost
YouTube TV deal reportedly hung up on ESPN pricing as Disney loses $30 million a week
How people really use ChatGPT, according to 47,000 conversations shared online
Tort Law museum visit
Bread and Puppet Museum
We're famous in Germany
Brand new bridge

Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Guest: Kevin Kelly
Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.
Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit
Sponsors: zapier.com/machines ventionteams.com/twit Melissa.com/twit agntcy.org

Radio Leo (Audio)
Intelligent Machines 845: Pregnant With 83 Digital Assistants

Nov 13, 2025 · 184:10


Can radical optimism about AI truly shape our future, or are we stuck in a cycle of doom-and-hype? This episode features an unfiltered debate with Wired co-founder Kevin Kelly on why most fears about artificial intelligence might be missing the bigger picture.

'Vibe Coding' Named Word of the Year By Collins Dictionary
OpenAI CFO Says Company Isn't Seeking Government Backstop, Clarifying Prior Comment
Montana Becomes First State to Enshrine 'Right to Compute' Into Law - Montana Newsroom
Sam Altman's Worldcoin Project Struggles Toward Billion-User Ambition With 17.5 Million Sign-Ups
Meta's chief AI scientist Yann LeCun reportedly plans to leave to build his own startup
Exclusive: US Army to buy 1 million drones, in major acquisition ramp-up
Facebook Dating Is a Surprise Hit For the Social Network - Slashdot
12 Things I've Heard Boomers Say That I Agree With 100%
The FBI has subpoenaed the domain registrar of archive.today, demanding information about the owner of the archiving site as part of a criminal investigation
How Similar Are Grokipedia and Wikipedia?
What We Can Learn From Brain Organoids
If the US Has to Build Data Centers, Here's Where They Should Go
LLM-Based Multi-Agent System for Simulating and Analyzing Marketing and Consumer Behavior
No. 10's synthetic voters
Tim Wu and Cory Doctorow's NPCs: Non-Player Consumers
Eric Schmidt: This Is No Way to Rule a Country
My torture for you
Ohio State to hire 100 new faculty with AI expertise
'A frightening development': How AI-Articles are flooding the internet with fake news
Internet Archive's legal fights are over, but its founder mourns what was lost
YouTube TV deal reportedly hung up on ESPN pricing as Disney loses $30 million a week
How people really use ChatGPT, according to 47,000 conversations shared online
Tort Law museum visit
Bread and Puppet Museum
We're famous in Germany
Brand new bridge

Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Guest: Kevin Kelly
Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.
Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit
Sponsors: zapier.com/machines ventionteams.com/twit Melissa.com/twit agntcy.org

This Week in Google (Video HI)
IM 845: Pregnant With 83 Digital Assistants - Are AIs Really Alien Minds?

Nov 13, 2025 · 169:12


Can radical optimism about AI truly shape our future, or are we stuck in a cycle of doom-and-hype? This episode features an unfiltered debate with Wired co-founder Kevin Kelly on why most fears about artificial intelligence might be missing the bigger picture.

'Vibe Coding' Named Word of the Year By Collins Dictionary
OpenAI CFO Says Company Isn't Seeking Government Backstop, Clarifying Prior Comment
Montana Becomes First State to Enshrine 'Right to Compute' Into Law - Montana Newsroom
Sam Altman's Worldcoin Project Struggles Toward Billion-User Ambition With 17.5 Million Sign-Ups
Meta's chief AI scientist Yann LeCun reportedly plans to leave to build his own startup
Exclusive: US Army to buy 1 million drones, in major acquisition ramp-up
Facebook Dating Is a Surprise Hit For the Social Network - Slashdot
12 Things I've Heard Boomers Say That I Agree With 100%
The FBI has subpoenaed the domain registrar of archive.today, demanding information about the owner of the archiving site as part of a criminal investigation
How Similar Are Grokipedia and Wikipedia?
What We Can Learn From Brain Organoids
If the US Has to Build Data Centers, Here's Where They Should Go
LLM-Based Multi-Agent System for Simulating and Analyzing Marketing and Consumer Behavior
No. 10's synthetic voters
Tim Wu and Cory Doctorow's NPCs: Non-Player Consumers
Eric Schmidt: This Is No Way to Rule a Country
My torture for you
Ohio State to hire 100 new faculty with AI expertise
'A frightening development': How AI-Articles are flooding the internet with fake news
Internet Archive's legal fights are over, but its founder mourns what was lost
YouTube TV deal reportedly hung up on ESPN pricing as Disney loses $30 million a week
How people really use ChatGPT, according to 47,000 conversations shared online
Tort Law museum visit
Bread and Puppet Museum
We're famous in Germany
Brand new bridge

Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Guest: Kevin Kelly
Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.
Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit
Sponsors: zapier.com/machines ventionteams.com/twit Melissa.com/twit agntcy.org

The AI Breakdown: Daily Artificial Intelligence News and Discussions

Today on the AI Daily Brief, NLW covers two stories that may signal a major shift in the AI landscape: Yann LeCun's departure from Meta and Dr. Fei-Fei Li's new argument that spatial intelligence and world models—not just LLMs—will define the next era of AI, exploring what world models actually are, why some researchers think they're essential for robotics, science, and creativity, and how this connects to Meta's internal reorg. Plus in the headlines: Eleven Labs' celebrity voice marketplace, SoftBank's Nvidia liquidation, AMD's push to challenge Nvidia, Blue Owl's $3B Stargate investment, and the surprising surge in Meta AI traffic.

Brought to you by:
KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts
Rovo - Unleash the potential of your team with AI-powered Search, Chat and Agents - https://rovo.com/
AssemblyAI - The best way to build Voice AI apps - https://www.assemblyai.com/brief
Blitzy.com - Go to https://blitzy.com/ to build enterprise software in days, not months
Robots & Pencils - Cloud-native AI solutions that power results - https://robotsandpencils.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.

The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Interested in sponsoring the show? sponsors@aidailybrief.ai

Techmeme Ride Home
The iPhone Air Isn't Selling

Nov 11, 2025 · 21:00


The iPhone Air isn't selling, and it isn't selling to the degree that Apple is delaying the next version. Yann LeCun is gonna strike out on his own. The big illegal streaming site takedown you might not have heard about. And Facebook doesn't like likes anymore, at least not external likes.

Apple Delays Release of Next iPhone Air Amid Weak Sales (The Information)
Meta chief AI scientist Yann LeCun plans to exit and launch own start-up (FT)
SoftBank sells Nvidia stake for $5.8bn as it prepares for AI investments (FT)
Anthropic Is on Track to Turn a Profit Much Faster Than OpenAI (WSJ)
Streameast: How the authorities took down the world's largest illegal sports streaming platform (The Athletic)
Meta is killing off the external Facebook Like button (Engadget)

Learn more about your ad choices. Visit megaphone.fm/adchoices

Danny In The Valley
Who holds the power in the age of AI?

Nov 7, 2025 · 36:40


Katie meets Jensen Huang and Meta's chief AI scientist Yann LeCun at No.10 as Nvidia hits a $5 trillion valuation. Plus, Danny and Katie discuss OpenAI's $38 billion AWS deal and the Sam Altman–Satya Nadella interview, exploring what it all means for AI's power, compute and future. And Katie reveals the tech-inspired Collins' Word of the Year – any guesses? Clips: BG2 Pod. Hosted on Acast. See acast.com/privacy for more information.