Editor's note: CuspAI raised a $100m Series A in September and is rumored to have reached a unicorn valuation. They have all-star advisors from Geoff Hinton to Yann LeCun and a team of deep domain experts to tackle this next frontier in AI applications.

In this episode, Max Welling traces the thread connecting quantum gravity, equivariant neural networks, diffusion models, and climate-focused materials discovery (yes, there is one!).

We begin with a provocative framing: experiments as computation. Welling describes the idea of a "physics processing unit": a world in which digital models and physical experiments work together, with nature itself acting as a kind of processor. It's a grounded but ambitious vision of AI for science: not replacing chemists, but accelerating them.

Along the way, we discuss:

* Why symmetry and equivariance matter in deep learning
* The tradeoff between scale and inductive bias
* The deep mathematical links between diffusion models and stochastic thermodynamics
* Why materials, not software, may be the real bottleneck for AI and the energy transition
* What it actually takes to build an AI-driven materials platform

Max reflects on moving from curiosity-driven theoretical physics (including work with Gerard 't Hooft) toward impact-driven research in climate and energy.
The result is a conversation about convergence: physics and machine learning, digital models and laboratory experiments, long-term ambition and incremental progress.

Full Video Episode

Timestamps

* 00:00:00 – The Physics Processing Unit (PPU): Nature as the Ultimate Computer. Max introduces the idea of a Physics Processing Unit: using real-world experiments as computation.
* 00:00:44 – From Quantum Gravity to AI for Materials. Brandon frames Max's career arc: VAE pioneer → equivariant GNNs → materials startup founder.
* 00:01:34 – Curiosity vs Impact: How His Motivation Evolved. Max explains the shift from pure theoretical curiosity to climate-driven impact.
* 00:02:43 – Why CuspAI Exists: Technology as Climate Strategy. Politics struggles; technology scales. Why materials innovation became the focus.
* 00:03:39 – The Thread: Physics → Symmetry → Machine Learning. How gauge symmetry, group theory, and relativity informed equivariant neural networks.
* 00:06:52 – AI for Science Is Exploding (Not Emerging). The funding surge and why AI for science feels like a new industrial era.
* 00:07:53 – Why Now? The Two Catalysts Behind AI for Science. Protein folding, ML force fields, and the tipping-point moment.
* 00:10:12 – How Engineers Can Enter AI for Science. Practical pathways: curricula, workshops, cross-disciplinary training.
* 00:11:28 – Why Materials Matter More Than Software. The argument that everything, LLMs included, rests on materials innovation.
* 00:13:02 – Materials as a Search Engine. The vision: automated exploration of chemical space, like querying Google.
* 00:14:48 – Inside CuspAI: The Platform Architecture. Generative models + multi-scale digital twin + experiment loop.
* 00:21:17 – Automating Chemistry: Human-in-the-Loop First. Start manual → modular tools → agents → increasing autonomy.
* 00:25:04 – Moonshots vs Incremental Wins. Balancing lighthouse materials with paid partnerships.
* 00:26:22 – Why Breakthroughs Will Still Require Humans. Automation is vertical-specific and iterative.
* 00:29:01 – What Is Equivariance (In Plain English)? Symmetry in neural networks, explained with the bottle example.
* 00:30:01 – Why Not Just Use Data Augmentation? The optimization trade-off between inductive bias and data scale.
* 00:31:55 – Generative AI Meets Stochastic Thermodynamics. His upcoming book and the unification of diffusion models and physics.
* 00:33:44 – When the Book Drops (ICLR?)

Transcript

Max: I want to think of it as what I would call a physics processing unit, a PPU, right? You have digital processing units, and then you have physics processing units. So it's basically nature doing computations for you. It's the fastest computer known, the fastest possible, even. It's a bit hard to program, because you have to do all these experiments, and those are quite bulky; it's a very large thing you have to do. But in a way it is a computation, and that's the way I want to see it. You can do computations in a data center, and then you can ask nature to do some computations. Your interface with nature is a bit more complicated.
But then these things will have to seamlessly work together to get to a new material that you're interested in.

[01:00:44:14 - 01:01:34:08]
Brandon: Yeah, it's a pleasure to have Max Welling as a guest today. Max has done so much over his career that I've been so excited about. If you're in the deep learning community, you probably know Max for his work on variational autoencoders, which has truly stood the test of time. If you're a scientist, you probably know him for his pioneering work on graph neural networks and equivariance. And if you're in materials science, you probably know him for his new startup, CuspAI. Max has a long history of working on lots of cool problems. You started in quantum gravity, which I think is very different from all these other things you worked on. The first question, for AI engineers and for scientists: what is the thread in how you think about problems? What is the thread in the type of things that excite you? And how do you decide what is the next big thing you want to work on?

[01:01:34:08 - 01:02:41:13]
Max: So it has actually evolved a lot. In my younger days, let's say, I would just follow what I found super interesting. I have kind of this sensor, which I think many people have but maybe don't really use very much, where you get this feeling of being very excited about some problem. It could be: what's inside of a black hole, or what's at the boundary of the universe, or what is quantum mechanics actually all about? And I followed that basically throughout my career. But I have to say that as you get older, this changes a little bit, in the sense that a new dimension comes into it, which is impact. Working in two-dimensional quantum gravity, you're pretty much guaranteed there will be no impact from what you do, maybe a few papers, but not in this world, at this energy scale.
As I get closer to retirement, which is fortunately still 10 years away or so, I do want to make a positive impact in the world. And I got pretty worried about climate change.

[01:02:43:15 - 01:03:19:11]
Max: I think politics seems to have a hard time solving it, especially these days. And so I thought: better work on it from the technology side. And that's why we started CuspAI. But there are also a lot of really interesting science problems in materials science. So it combines the impact you can make with the interesting science. It's these two dimensions: working on things where you feel there's something very deep going on, and, on the other hand, trying to build tools that can actually make a real impact in the world.

[01:03:19:11 - 01:03:39:23]
RJ: So, about the thread: when I look back at the different things you worked on, some of them seem pretty connected, like the physics to equivariance and graph neural networks, maybe. And that seems to be somewhat related to CuspAI. Do you have a thread through there?

[01:03:39:23 - 01:06:52:16]
Max: Yeah. So physics is the thread. Having spent a lot of time in theoretical physics, I think there are, first, very fundamental and exciting questions, things that haven't actually been figured out in quantum gravity. That is really the frontier. There are also a lot of mathematical tools that you can use, right? In particle physics, for instance, but also in general relativity, symmetries play an enormously important role. And this goes all the way to gauge symmetries as well. So applying these kinds of symmetries to machine learning was something I thought of as a very deep and interesting mathematical problem.
I did this with Taco Cohen, and Taco was the main driver behind it; it went all the way from simple rotational symmetries to gauge symmetries on spheres and things like that. And Maurice Weiler, who was a very good PhD student with me, wrote an entire book, which I can really recommend, about the role of symmetries in AI and machine learning. So I find this a very deep and interesting problem. More recently, I've taken a somewhat different path, which is the relationship between diffusion models and a field called stochastic thermodynamics. Thermodynamics is a theory of equilibrium; stochastic thermodynamics is that theory formulated for out-of-equilibrium systems. And it turns out that the mathematics we use for diffusion models, but also for reinforcement learning, for Schrödinger bridges, for MCMC sampling, is the same mathematics as this physical theory of non-equilibrium systems. And that got me very excited. I taught a course in Muizenberg in South Africa, close to Cape Town, at the African Institute for Mathematical Sciences (AIMS), and I turned that into a book. Two years later, the book was finished, and I've sent it to the publisher. It's about the deep relationship between free energy, diffusion models, basically generative AI, and stochastic thermodynamics. So it's always some kind of, I don't know, I find physics very deep. I also think a lot about quantum mechanics, and it's a completely weird theory that actually nobody really understands. And there's a very interesting story, which is maybe good to tell, to connect my PhD back to where I am now. I did my PhD with a Nobel laureate, Gerard 't Hooft. He is the most brilliant man I've ever met; he was never wrong about anything as long as I've known him.
And now he says quantum mechanics is wrong, and he has a new theory of quantum mechanics. Nobody understands what he's saying, even though what he's writing down is not mathematically very complex; he's trying to address this non-understandability, let's say, of quantum mechanics head on. I find it very courageous, and I'm completely fascinated by it. So I'm also trying to think about whether I can understand quantum mechanics in a more mundane way, without all the weird multiverses and collapses and things like that. So physics has always been the thread, and I'm trying to apply the physics to the machine learning to build better algorithms.

[01:06:52:16 - 01:07:05:15]
Brandon: You are still very involved in understanding physics and the world, as well as applications to machine learning and introducing new formalisms. That's really cool.

[01:07:05:15 - 01:07:18:02]
Max: Yes, I would say I'm not contributing much to physics, but I'm contributing to the interface between physics and AI. And that's called AI for science, or science for AI. It's actually a new discipline that's emerging.

[01:07:18:02 - 01:07:18:19]
Speaker 5: Yeah.

[01:07:18:19 - 01:07:45:14]
Max: And it's not just emerging, it's exploding, I would say. That's the better term, because investments have gone from the hundreds of millions into the billions. There's now actually a startup by Jeff Bezos that raised a 6.2 billion seed round. Right? Insane. I guess it's the largest seed round ever, I think. And that's in this field, AI for science. It tells you something: we are creating a new bubble here.

[01:07:46:15 - 01:07:53:28]
Brandon: So why do you think that is? What has changed that has motivated people to start working on AI-for-science-type problems?

[01:07:53:28 - 01:08:49:17]
Max: So there are two reasons, actually.
One is that people have been applying the new tools from AI to the sciences, which is quite natural. There are, I think, two big examples: protein folding is a big one, and the other is machine learning force fields, something called machine-learned interatomic potentials. Both of them have been very successful. Both also had something to do with symmetries, which is kind of cool. And people in the AI sciences saw an opportunity to apply the tools they had developed beyond advertisement placement or multimedia applications, to something that could actually make a very positive impact on society, like health, drug development, materials for the energy transition, carbon capture. These are all really cool, impactful applications.

[01:08:50:19 - 01:09:42:14]
Max: Beyond that, the science itself is also very interesting. The fact that these two fields are coming together, and that we're now at the point where we can actually model these things effectively and move the needle on some of these scientific methodologies, is also a very unique moment, I would say. People recognize that we're now at the cusp of something new, which, as it happens, is what the company is named after. We're at the cusp of something new. And of course that always creates a lot of energy. It's a sort of virgin field, a green field: nobody's been there, so I can rush in and start harvesting, right? And I think that's also what's causing a lot of enthusiasm in the field.

[01:09:42:14 - 01:10:12:18]
RJ: Many of the people who listen to this podcast are AI engineers, so they may not have a strong science background, but they're excited.
Most AI practitioners, ML engineers, or scientists would consider themselves scientists, and they have some background: a little bit of physics, a little bit in college, maybe even graduate school, whether they've been working for a while or are just starting out. How does somebody who is not a scientist on a day-to-day basis get involved?

[01:10:12:18 - 01:10:14:28]
Max: Well, they can read my book once it's out.

[01:10:16:07 - 01:11:05:24]
Max: More seriously, we should create curricula that sit at this interface. Some universities already have actual courses you can take, and there are online courses. Workshops like the one we're at now are very good as well. And we should probably have more tutorials before the workshops start; I've actually proposed this at some point: maybe first have an hour of tutorial so that people who are new can get into the field. There's a lot out there. Much of it is of course inaccessible, but I would say we will create many more books and other content that is more accessible, including this podcast, I would say. So I think it will come. And these days you can watch videos; there's a huge amount of content you can go and see.

[01:11:05:24 - 01:11:28:28]
Brandon: So maybe a follow-up to that. That's how people learn and get involved, but why should they get involved? A lot of our audience will be interested in AI engineering, but they may be looking for bigger impacts in the world. What opportunities does AI for science provide them to make an impact and change the world that working in the world of pure bits would not?

[01:11:28:28 - 01:11:40:06]
Max: So my view is that underlying almost everything is a material. We are focusing a lot on LLMs now, which is kind of the software layer.

[01:11:41:06 - 01:11:56:05]
Max: I would say, if you think very hard, underlying everything is a material.
Underlying an LLM is a GPU, and underlying a GPU is a wafer on which we have to deposit materials.

[01:12:12:08 - 01:12:43:20]
Max: To make that GPU, you have to put materials down on a wafer and shine EUV light on it to etch the structures in. But that's now an actual materials problem, because we've more or less reached the limits of scaling things down, and now we are trying to improve further with new materials. So that's a fundamental materials problem. We need to get through the energy transition fast if we don't want to mess up this world. And so there are, for instance, batteries; that's a complete materials problem. There are fuel cells.

[01:12:44:23 - 01:13:01:16]
Max: There are solar panels. They can now make solar panels with new perovskite layers on top of the silicon layers that can capture, theoretically, up to 50% of the light, where now we're at, I don't know, maybe 22% or something. So these are huge changes, all driven by materials innovation.

[01:13:02:21 - 01:13:47:15]
Max: And wherever you go, I can probably dig deep enough and then tell you: well, actually, the very foundation of what you're doing is a materials problem. So I think it's just very nice to work on this very foundation. And also because, and this is maybe also something that's happening now, we can start to search through this materials space. That has never been possible, right? The normal way scientists work is: you read papers, you come up with a new hypothesis, you do an experiment, and you learn, et cetera. That's a very slow process. Now we can treat this as a search engine.
Like we search the internet, we now search the space of all possible molecules, not just the ones that people have made or that exist in the universe, but all of them.

[01:13:48:21 - 01:14:42:01]
Max: And we can make this fully automated. That's the hope, right? It becomes a tool where you type what you want, something starts spinning, some experiments get going, and out comes a list of materials. You look at it and say, maybe not, and then you refine your query a little bit. And you do research with this search engine, where a huge amount of computation and experimentation is happening somewhere far away, in some lab or some data center or something like that. I find this a very, very promising view of how we can build a much better materials layer underneath almost everything. And also more sustainable materials. Our plastics are polluting the planet. What if you came up with a plastic that destroys itself after, I don't know, a few weeks, and actually becomes a fertilizer? These things are not impossible at all. They can be done, and we should do it.

[01:14:42:01 - 01:14:47:23]
RJ: Can you tell us a little bit, just generally, about CuspAI? And then I have a ton of questions.

[01:14:47:23 - 01:14:48:15]
Speaker 5: Yeah.

[01:14:48:15 - 01:17:49:10]
Max: So CuspAI started about 20 months ago, because I was, and still am, worried about climate change. I realized that in order to stay within two degrees, let's say, we would not only have to reduce our emissions to zero by 2050, but then spend another half century, or even a century, removing carbon dioxide from the atmosphere, not by reducing emissions, but actually removing it, at a rate about half the rate at which we now emit it. And that is an unsolved problem. But if we don't solve it, two degrees is not going to happen, right?
It's going to be much more. And I don't think people quite understand how bad that can be; four degrees is very bad. So this technology needs to be developed. That was my and my co-founder Chad Edwards' motivation to start this startup. And also because we saw the technology was ready, which is also very good; the time was right to do it. In the meanwhile, we've grown to about 40 people. We've collected about 130 million in investment into the company, which for a European company is quite a lot. It's interesting that right after that, other startups got even more; that tells you how fast this is growing. We've built the platform, of course, but it covers a series of material classes and needs to be constantly expanded to new material classes. And it can be made more automated: putting LLMs in, the whole thing gets more and more automated. Now we're moving to high-throughput experimentation, connecting the actual platform, which is computational, to the experiments, so that you also get fast feedback from experiments. I don't want to think of experiments as something you do only at the end, although that's what we've been doing so far. I want to think of it as what I would call a physics processing unit, a PPU, right? You have digital processing units and then you have physics processing units. So it's basically nature doing computations for you. It's the fastest computer known, the fastest possible, even. It's a bit hard to program, because you have to do all these experiments, and those are quite bulky; it's a very large thing you have to do. But in a way, it is a computation, and that's the way I want to see it. You can do computations in a data center, and then you can ask nature to do some computations.
Your interface with nature is a bit more complicated. But then these things will have to seamlessly work together to get to a new material that you're interested in. And that's the vision we have. We don't say superintelligence, because I don't quite know what that means and I don't want to oversell it. But I do want to automate this process and put a very powerful tool in the hands of the chemists and the materials scientists.

[01:17:49:10 - 01:18:01:02]
Brandon: That actually brings up a question I wanted to ask you. Can you talk about your platform, to whatever degree you can? Explain how it works and what your thought process was in developing it.

[01:18:01:02 - 01:20:47:22]
Max: Yeah, it's not rocket science, I would say, in the sense of the design. The design I wrote down at the very beginning is still more or less the design, although you add things: I wasn't thinking very much about multi-scale models, and it became clear that multi-scale is actually very important. In the beginning I wasn't thinking very much about self-driving labs either, but now I think we're at the stage where we should be adding that. So there are bits and details that we're adding, but more or less it's what you see in the slide decks here as well: there is a generative component that you have to train to generate candidates, and then there is a multi-scale, multi-fidelity digital twin. You walk through the steps of that ladder: you do the cheap things first, you weed out everything that's obviously not useful, and then you go to more and more expensive things later. And so you narrow things down to a small number. Those go into an experiment; you do the experiment, get feedback, et cetera. More recently we've also added more agentic parts.
We have agents that search the chemical literature and come up with suggestions for experiments. We have agents that autonomously orchestrate all of the computations and experiments that need to be done. They're in various stages of maturity, and they can be continuously improved, I would say. So the design of that thing is not surprising; what is surprisingly hard is actually building it. That's where the moat is: in the data you can get your hands on, and in actually building the platform. There are two people in particular I want to call out: Felix Hunker, who is building the scientific part of the platform, and Sandra de Maria, who is building the MLOps part of the platform. And recently we also added Aron Walsh to our team, a very accomplished scientist from Imperial College. We're very happy about that; he's going to be chief science officer. And we also have a partnerships team that seeks out customers, because this is one thing I find very important: in practice, it's so complex to actually bring a material into the real world that you must do this in collaboration with the domain experts, which are typically the companies. So we only start to invest in a direction if we find a good industrial partner to go on that journey with us.

[01:20:47:22 - 01:20:55:12]
Brandon: Makes a lot of sense.
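The multi-fidelity funnel Max describes, cheap filters first and expensive physics last, can be sketched in a few lines. This is a toy illustration, not CuspAI's platform: the scoring functions are arbitrary stand-ins for an ML surrogate, a force field, and DFT, and the stage sizes are made up.

```python
# Toy sketch of a cheap-to-expensive screening funnel: rank the candidate
# pool with each fidelity level in turn, keeping only the top `keep`
# candidates before moving to the next (more expensive) stage.
def screen(candidates, stages):
    pool = list(candidates)
    for scorer, keep in stages:          # cheapest stage first
        pool.sort(key=scorer, reverse=True)
        pool = pool[:keep]               # weed out the obviously unuseful
    return pool

candidates = range(1000)                 # stand-in for generated candidates
stages = [
    (lambda c: (c * 37) % 101, 100),     # stage 1: heuristic / ML surrogate
    (lambda c: (c * 17) % 53, 10),       # stage 2: force-field relaxation
    (lambda c: c % 7, 3),                # stage 3: DFT-level scoring
]
shortlist = screen(candidates, stages)   # the 3 survivors go to the lab
print(len(shortlist))  # 3
```

The point of the structure is economic: each stage sees an order of magnitude fewer candidates than the last, so almost all of the compute budget is spent at the cheapest fidelity.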
Over the evolution of the platform, did you find that human intervention,

[01:20:56:18 - 01:21:17:01]
Brandon: I guess you could imagine two directions when you start: making everything purely automated and agentic from the beginning, and then later finding that you need more human input and feedback at different steps. Or did you start out with human feedback at lots of steps and then figure out ways to remove it?

[01:21:17:01 - 01:22:39:18]
Max: It's the second one. You build tools; it's much more modular than you'd think. It's like: we need these tools for this application, and those tools for that one. So you build all these tools, and then, in the beginning, you go through a workflow manually: first this tool, then that tool, then this other one. You put them in a workflow, and then you figure out: oh, actually, this porous material we're trying to make collapses if you shake it a bit. Okay, then you add a new tool that tests for stability, right? And so there are more and more tools. And then you build the agent, which could be a Bayesian optimizer, or it could be an actual LLM, maybe trained to be a good chemist, that then starts to use all these tools in the right way, in the right order. But in the beginning, it's you as a chemist putting the workflow together, and then you think about: okay, how am I going to automate this? One very easy question you can ask yourself: every time somebody who is not a super expert in DFT wants to do a calculation, they have to go to somebody who knows DFT.
So could you start to automate that away? Make it so user-friendly that you actually run the right DFT for the right problem, for the right length of time, and can assess whether the outcome is good, et cetera. So you start to automate small pieces and bigger pieces, and in the end the whole thing is automated.

[01:22:39:18 - 01:22:53:25]
Brandon: So your philosophy is that you want to provide a set of specific tools that make the scientists making the decisions better informed, rather than trying to create a fully automated process.

[01:22:53:25 - 01:23:22:01]
Max: It's sort of the same as what you're saying, because yes, we want to automate, but we don't see something very soon where the chemist, the domain expert, is out of the loop. But it's a retreat, right? First you need an expert to tell you precisely how to set the parameters of the DFT calculation; okay, maybe we can take that out, maybe we can automate that. And so, increasingly, more of these things are going to be removed.

[01:23:22:01 - 01:23:22:19]
Speaker 5: Yeah.

[01:23:22:19 - 01:24:33:25]
Max: In the end, the vision is that it will be a search engine where a chemist types things and gets candidates, but the chemist still decides what is a good material and what is not out of that list, right? The vision of a completely dark lab, where you close the door and just say "find something interesting," and it figures out what's interesting and comes back with "I found this new material that does such-and-such": that's not the vision I have. Not for a long time. For me, it's really about empowering the domain experts sitting in the companies and universities to be much faster in developing their materials.
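The "automate the DFT expert away" step amounts to encoding an expert's parameter choices as a function that a non-expert can call. A minimal sketch, assuming common plane-wave DFT conventions (PBE and HSE06 functionals, k-point spacing, plane-wave cutoff); the rules and values below are illustrative, not validated recommendations and not CuspAI's actual heuristics.

```python
# Hypothetical rule-based stand-in for the DFT expert: pick reasonable
# calculation settings from a plain description of the problem.
def choose_dft_settings(system: str, property_wanted: str) -> dict:
    settings = {
        "functional": "PBE",        # workhorse GGA functional
        "kpoint_spacing": 0.04,     # reciprocal-space sampling density (1/Angstrom)
        "cutoff_ev": 520,           # plane-wave cutoff energy
    }
    if property_wanted == "band_gap":
        settings["functional"] = "HSE06"   # hybrid functional: better band gaps
    if system == "molecule":
        settings["kpoint_spacing"] = None  # isolated system: gamma point only
    return settings

print(choose_dft_settings("crystal", "band_gap")["functional"])  # HSE06
```

In practice this rule table would be replaced or augmented by a learned or agentic component, but the interface is the same: problem description in, calculation recipe out, with the expert's judgment captured once instead of consulted every time.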
And I should say, it's also good to be a little humble at times, because it is very complicated to make a material and bring it into the real world. There are people who have been doing this their entire lives. I wonder if they scratch their heads and say: well, how are you going to completely automate that away in the next five years? I don't think that's going to happen at all.

[01:24:35:01 - 01:24:39:24]
Max: Yeah. So to me, it's an increasingly powerful tool in the hands of the chemists.

[01:24:39:24 - 01:25:04:02]
RJ: I have a question. You've talked before about getting people interested based on having a big breakthrough in materials versus incremental change. I'm curious what you think about the platform you have now and what you're stepping towards. Are you chasing the big change, or is this incremental? They're not mutually exclusive, obviously, but what do you think about that?

[01:25:04:02 - 01:26:04:27]
Max: We follow a mixed strategy. We are definitely going after a big material. Again, we do this with a partner. I'm not going to disclose precisely what it is, but we have our own long-term goal. You could call it a lighthouse, or a moonshot, or whatever, but it is going to be a really impactful material that we want to develop as a proof point: that it can be done, that it will make it into the real world, and that AI was essential in making it happen. At the same time, we're also quite happy to work with companies that have more modest goals. One mode is a very deep partnership, where you go on a journey with a company and it's a long-term commitment together. The other is somebody who says: I need a force field. Can you help me train this force field, and then maybe analyze this particular problem for me? And I'll pay you a bunch of money for that. And then maybe after that we'll see.
And that's fine too. But we prefer the deep partnerships, where we can really change something for the good.

[01:26:04:27 - 01:26:22:02]
RJ: And do you feel like, from a platform standpoint, you're ready for that? Again, not asking you to disclose proprietary secret sauce, but generally speaking, what needs to happen to get from where we are to those big breakthroughs?

[01:26:22:02 - 01:28:40:01]
Max: What I find interesting about this field is that every time you build something, it's immediately useful. Unlike quantum computing or nuclear fusion, where you work for 20, 30, 40 years and nothing, nothing, nothing, and then it has to happen, and when it happens, it's huge. It's quite different here, because every time you introduce something, you go to a customer and ask: what do you need? So we work, let's say, on a problem like water filtration; we want to remove PFAS from water. We do this with a company, Camira; they are a deep partner for us, and we're on a journey together. I think the breakthrough will happen with a lot of human in the loop, because the chemists have a whole lot more knowledge of their field, and it's us who will help them with training, with these new methods. And in that interface, in those interactions, something beautiful will happen, and that will have to happen first before this field really takes off, I think. In the sense that it's not a bubble, let's put it that way: people will see it as actual, real progress. So in the beginning it will be with a lot of humans in the loop, I would say, and I would hope we will have this new breakthrough material before everything is completely automated, because that will take a while. And also, it is very vertical-specific.
So completely automating something for problem A, you know, you can probably achieve it, but then you'll sort of have to start over again for problem B, because, you know, your experimental setup looks very different, the machines that characterize your materials look very different. Even the models in your platform will have to be retrained and fine-tuned to the new class. So every time, you know, you have a lot of learnings to transfer, but also, you know, the problems are actually different. And so, yes, I would want that breakthrough material before it's completely automated, which I think is kind of a long-term vision. And I would say every time you move to something new, you'll have to start retraining, and humans will have to come in again and say, okay, so what does this problem look like? And now, you know, point the machine again in the new direction and then use it again.[01:28:40:01 - 01:28:47:17]RJ: For the non-scientists among us (me included, though I'm a bit of a scientist): there's a lot of terminology. You mentioned DFT,[01:28:49:00 - 01:29:01:11]RJ: and equivariance we've talked about. Can you explain, in engineering terms, what is equivariance?[01:29:01:11 - 01:29:55:01]Max: So equivariance is the infusion of symmetry into neural networks. If I build a neural network that, let's say, needs to recognize this bottle, and then I rotate the bottle, it will actually have to completely start again, because it has no idea that the input representing a rotated bottle is actually a rotated bottle. It just doesn't understand that. Right. If you build equivariance in, then basically once you've trained it in one orientation, it will understand it in any other orientation. So that means you need a lot less data to train these models. And these are constraints on the weights of the model.
So basically you have to constrain the weights such that the model understands this. And you can build it in, you can hard-code it in. And yeah, the symmetry groups can be, you know, translations, rotations, but also permutations. In a graph neural network, it's permutations. And physics, of course, has many more of these groups.[01:29:55:01 - 01:30:01:08]RJ: To play devil's advocate, why not just use data augmentation, where your bottle is in all the different orientations?[01:30:01:08 - 01:30:58:23]Max: That's an option, it's just not exact. It's like, why would you go through the work of doing all that? You would really need an infinite number of augmentations to get it completely right, whereas you can also hard-code it in. Now, I have to say, sometimes data augmentation actually works even better than hard-coding the equivariance in. And this has something to do with the fact that if you constrain the weights before the optimization starts, the optimization surface, or objective, becomes more complicated. And so it's harder to find good minima. So there is also a complicated interplay, I think, between the optimization process and the constraints you put in your network. And so, yeah, you'll hear kind of contradicting claims in this field. Some people say, for certain applications, it just works better than not doing it. And sometimes you hear other people say, if you have a lot of data and you can do data augmentation, then it's actually easier to optimize and it works better than putting the equivariance in.[01:30:58:23 - 01:31:07:16]Brandon: Do you think there's kind of a bitter lesson for mathematically founded models and strategies for doing deep learning?[01:31:07:16 - 01:31:46:06]Max: Yeah, ultimately it's a trade-off between data and inductive bias. So if your inductive bias is not perfectly correct, you have to be careful, because you put a ceiling on what you can do.
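(Sidebar, not from the episode.) The "constraints on the weights" idea Max describes can be made concrete with a tiny permutation-equivariant layer in the DeepSets style; this is an illustrative sketch, not code from any speaker. Because each output mixes an element with the mean over the set, permuting the inputs permutes the outputs identically, with no augmentation needed:

```python
import numpy as np

def perm_equivariant_layer(x, a=0.7, b=0.3):
    # DeepSets-style linear layer: each element is mixed with the set mean,
    # so the layer commutes with any permutation of the elements.
    return a * x + b * x.mean(axis=0, keepdims=True)

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))        # a "set" of 5 elements with 3 features each
perm = rng.permutation(5)

# Equivariance check: permute-then-apply equals apply-then-permute.
assert np.allclose(perm_equivariant_layer(x[perm]),
                   perm_equivariant_layer(x)[perm])
```

The data-augmentation alternative discussed above would instead train an unconstrained layer on many permuted copies of each input, which only approximates this property.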
But if you know the symmetry is there, it's hard to imagine there isn't a way to actually leverage it. But yeah, so there is a bitter lesson. And one of the bitter lessons is you should always make sure your architecture scales, unless you have a tiny data set, in which case it doesn't matter. But, you know, the same bitter lessons that you can draw in LLM space are eventually going to be true in this space as well, I think.[01:31:47:10 - 01:31:55:01]RJ: Can you talk a little bit about your upcoming book and tell the listeners what's exciting about it? Yeah, I should read it.[01:31:55:01 - 01:33:42:20]Max: So this book is called Generative AI and Stochastic Thermodynamics. It basically lays bare the fact that the mathematics that goes into generative AI, which is the technology to generate images and videos, and the mathematics of non-equilibrium statistical mechanics, which studies systems of molecules that are just moving around and relaxing to the ground state, or that you can control to be in a certain state, is actually identical. And so that's fascinating. In fact, what's interesting is that Geoff Hinton and Radford Neal already wrote down the variational free energy for machine learning a long time ago. And there's also Carl Friston's work on the free energy principle and active inference. But now we've related it to this very new field in physics, which is called stochastic thermodynamics, or non-equilibrium thermodynamics, which has its own very interesting theorems, like fluctuation theorems, which we don't typically talk about but can learn a lot from. And I think it can now start to cross-fertilize.
When we see that these things are actually the same, we can, like we did for symmetries, look at this new theory that's out there, developed by these very smart physicists, and say, okay, what can we take from here that will make our algorithms better? At the same time, we can use our models to help the scientists do better science. And so it becomes a beautiful cross-fertilization between these two fields. The book is rather technical, I would say. It takes all sorts of things that have been done in stochastic thermodynamics, and all sorts of models that have been done in the machine learning literature, and it basically equates them to each other. And I think, hopefully, that sense of unification will be revealing to people.[01:33:42:20 - 01:33:44:05]RJ: Wait, and when is it out?[01:33:44:05 - 01:33:56:09]Max: Well, it depends on the publisher now. But I hope in April; I'm going to give a keynote at ICLR, and it would be very nice if I have this book in my hand. But you know, it's hard to control these kinds of timelines.[01:33:56:09 - 01:33:58:19]RJ: Yeah, I'm looking forward to it. Great.[01:33:58:19 - 01:33:59:25]Max: Thank you very much. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.latent.space/subscribe
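(Sidebar, not from the episode.) The identity the book draws between generative diffusion models and non-equilibrium relaxation can be glimpsed in a toy overdamped Langevin simulation: the same stochastic update physicists use to model molecules relaxing to equilibrium is, term for term, the sampler used in score-based generative models. Here the target is a standard normal, whose score is -x; this is an illustrative sketch only.

```python
import numpy as np

def langevin_relax(n_steps=2000, dt=0.01, n_chains=5000, seed=0):
    # Overdamped Langevin dynamics: dx = score(x) * dt + sqrt(2 * dt) * noise.
    # For a standard-normal target the score is -x, so each chain relaxes
    # from a far-from-equilibrium start toward N(0, 1).
    rng = np.random.default_rng(seed)
    x = rng.normal(scale=3.0, size=n_chains)   # start far from equilibrium
    for _ in range(n_steps):
        x += -x * dt + np.sqrt(2 * dt) * rng.normal(size=n_chains)
    return x

samples = langevin_relax()
# After relaxation the empirical mean and std are close to those of N(0, 1).
print(samples.mean(), samples.std())
```

In a trained diffusion model the hand-written score -x is replaced by a learned score network; the relaxation dynamics are otherwise the same, which is the unification the book formalizes.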
Did China steal Anthropic's AI powers? That's the bombshell report Anthropic just dropped: they accused multiple Chinese AI companies of generating more than 16 million exchanges with their models just to try and copy them. We know what you're thinking: "So that means cheaper open Chinese models, so we all win, right?" Wrong. On this episode of Everyday AI, we break down Anthropic's shocking AI distillation accusations against Chinese firms, what they actually mean, and how their impact reaches far beyond just the AI model you choose to use. You might be shocked, TBH, at the far-reaching implications.
China Stealing AI from the U.S.? Inside Anthropic's Bombshell Allegations — An Everyday AI Chat with Jordan Wilson
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion on LinkedIn: Thoughts on this? Join the convo on LinkedIn and connect with other AI leaders.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Topics Covered in This Episode:
Anthropic Accuses Chinese AI Labs of Distillation
Details on 16 Million Claude Extraction Prompts
DeepSeek, Moonshot, MiniMax Named in Anthropic Report
Google and OpenAI Cite Similar China AI Threats
Technical Explanation of Model Distillation Attacks
Market Impact: MiniMax Surpasses Anthropic in Tokens
Financial Consequences for U.S. AI Model Providers
Policy and Geopolitical AI Competition Analysis
Limitations of Current Export Controls and Safeguards
U.S. AI Dominance Threatened by Chinese Distillation
Timestamps:
00:00 "Foreign AI Impact on Tech"
04:43 "AI Distillation and Security Threats"
07:11 "MiniMax Scandal: Data Theft Allegations"
10:33 "Open Router Key Marketplace"
15:54 "Smart, Cheap Models Explained"
19:31 "AI, IP Theft, and China's Impact"
22:44 Big Tech's Data Theft Problem
25:37 "Protecting U.S. 
Tech from Export"
27:53 OpenAI Accuses DeepSeek of Misuse
33:58 "AI's Global Power Struggle"
35:03 "AI Models: What's Next?"
Keywords: China AI theft, Anthropic bombshell report, model distillation, Chinese AI labs, DeepSeek, Moonshot AI, MiniMax, Claude capabilities, 16,000,000 prompts, 24,000 fake accounts
Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Start Here ▶️ Not sure where to start when it comes to AI? Start with our Start Here Series. You can listen to the first drop -- Episode 691 -- or get free access to our Inner Circle community and access all episodes there: StartHereSeries.com
This week on Leave Your Mark, I sit down with financial strategist and philanthropic visionary Mark Halpern.Mark is a Certified Financial Planner, Trust & Estate Practitioner, and Master Financial Advisor–Philanthropy with more than 30 years of experience helping successful families and entrepreneurs think differently about wealth.As CEO of WEALTHinsurance.com, he operates at the intersection of estate planning, tax strategy, and charitable giving—helping clients preserve capital, minimize tax, and multiply impact.But this conversation goes deeper than money.Mark was mentored by the late Dr. Paul Goldstein, a Holocaust survivor who earned his PhD at 87. From him, Mark learned a principle that shapes both his life and his work: when you get knocked down, get back up before the count of ten.Now he's pursuing a bold “Moonshot” goal: generating $1 billion in new charitable donations annually.We talk about legacy.We talk about resilience.We talk about a new model of philanthropy—one that challenges traditional thinking about wealth and responsibility.If you've ever wondered what wealth is really for, this episode will make you think.If you liked this EP, please take the time to rate and comment, share with a friend, and connect with us on social channels IG @Kingopain, TW @BuiltbyScott, LI+FB Scott Livingston. You can find all things LYM at www.LYMLab.com, download your free Life Lab Starter Kit today and get busy living https://lymlab.com/free-lym-lab-starter/Please take the time to visit and connect with our sponsors, they are an essential part of our success:www.ReconditioningHQ.comwww.FreePainGuide.com
0:00 - Moser decides to dig up/re-hash his long standing beef with Christian Bale. Don't shatter his dreams!After that...the Avalanche are back in action tomorrow night on the road in SLC vs the Mammoth. Which players from the Olympics will be back on the ice for Colorado? Will any guys get some rest before the final push to the playoffs? What can we expect from this back half of the season?14:47 - Think about how far the Broncos have come in such a short amount of time. Their turnaround has been meteoric. Somehow, the Russell Wilson/Nathaniel Hackett era feels like a lifetime ago. And yet, there are still more tweaks to the roster that need to be made. 36:35 - Yesterday, Zac Veen hit a walkoff MOONSHOT home run for the Rockies in spring training. And he looks HUGE. He's more than 40 lbs heavier than he was last season. Hey man, if that's what it takes to hit dingers, then we're totally here for Jacked Veen.
-The US Department of Defense has reportedly reached a deal to use Elon Musk's Grok in its classified systems. That's according to a report by Axios. That follows news that the Pentagon is currently in a dispute with another AI company, Anthropic, over limits on its technology for things like mass surveillance. -Anthropic is issuing a call to action against AI "distillation attacks," after accusing three AI companies of misusing its Claude chatbot. On its website, Anthropic claimed that DeepSeek, Moonshot and MiniMax have been conducting "industrial-scale campaigns…to illicitly extract Claude's capabilities to improve their own models." -Bungie isn't taking any prisoners when it comes to cheating on its upcoming extraction shooter, Marathon. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Jim Love hosts Hashtag Trending, and highlights updates to TechNewsDay.ca/.com including a new "Best of YouTube" section for curated tech channels. Anthropic alleges three Chinese AI labs—DeepSeek, Moonshot, and MiniMax—ran industrial-scale distillation campaigns to extract capabilities from Claude models using proxy services and "Hydra cluster" networks with tens of thousands of fraudulent accounts, prompting Anthropic to strengthen identity controls and detection with cloud partners. Amazon shares fall for nine straight sessions after investors react to plans for roughly $200B in 2026 capex largely for AI infrastructure, raising questions about ROI and future free cash flow. A cited analysis by YouTuber Nate B Jones argues Google's Gemini 3.1 Pro signals a strategy shift toward deeper reasoning (not just coding/agentic tools), noting a 77.1% ARC-AGI-2 score and DeepMind's scientific problem focus, contrasting OpenAI's product/distribution, Anthropic's agentic coordination, and Google's "pure intelligence" approach. The episode also references Citrini Research's 2028 scenario planning report outlining a plausible fast-arriving AGI chain reaction—falling inference costs, rapid adoption, labor displacement pressure, and geopolitical competition for compute and talent—and promotes the Saturday show Project Synapse on long-term AI trajectories. Finally, Love discusses Sam Altman's comments at the India AI Impact Summit dismissing viral claims about ChatGPT water and energy use without providing specific counter-numbers, noting growing public backlash as data center water and electricity demands rise; the full interview is linked in show notes. Hashtag Trending would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. 
You can find them at Meter.com/htt LINKS Nate B Jones on Google Gemini 3.1 https://youtu.be/8jKAT8GNDE0?si=Rz5k1gP0sS9H7XAp Sam Altman's speech https://www.youtube.com/live/qH7thwrCluM?si=IO_76NsGJ1zgt8J7 AI Scenario https://www.citriniresearch.com/p/2028gic 00:00 Headlines and intro 00:54 Site updates and YouTube picks 01:57 Anthropic warns of distillation 04:58 Amazon AI spending jitters 06:13 Google bets on reasoning 10:31 2028 AGI crisis scenario 11:55 Altman backlash and resources 14:17 Wrap up and sponsor thanks
Anthropic accuses DeepSeek, Moonshot, and MiniMax of using 24,000 fake accounts to distill Claude's AI capabilities, as U.S. officials debate export controls aimed at slowing China's AI progress. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Moonshots host Peter Diamandis speaks with Ben Horowitz, cofounder and general partner at a16z, alongside regular cohosts Salim Ismail, Dave Blundin, and Dr. Alexander Wissner-Gross, about whether AI can or should be paused, what happened when Horowitz told a Biden administration official that regulating AI means regulating math, why crypto is the natural money for AI agents, and why the gap between AI capability and societal adoption may be wider than people think. This episode originally aired on Peter Diamandis's Moonshots podcast. Follow Peter H. Diamandis on X: https://x.com/PeterDiamandis Follow Ben Horowitz on X: https://twitter.com/bhorowitz Follow Salim Ismail on X: https://twitter.com/salimismail Follow Dave Blundin on X: https://twitter.com/DavidBlundin Follow Dr. Alexander Wissner-Gross on X: https://twitter.com/alexwg Listen to Moonshots: https://www.youtube.com/@peterdiamandis Stay Updated:Find a16z on YouTube: YouTubeFind a16z on XFind a16z on LinkedInListen to the a16z Show on SpotifyListen to the a16z Show on Apple PodcastsFollow our host: https://twitter.com/eriktorenberg Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
In this episode, two OKR experts tackle the questions you can't just quickly google: Does a good objective really need a number? Why do OKR weeklies often work differently than intended? And what's really behind the hype around moonshots? Instead of quick answers, the focus is on the gray areas: everyday decisions, differing perspectives, and the points where OKR is more challenging in practice than it seems in theory. An episode for everyone who wants not just to "apply" OKR but to truly understand it, with everything that entails.
Join us for our Bio Hacking series, where we take a look at Wim Hof, the Iceman himself, and how the breath knows how to go deeper than the mind. Wim has become known as ‘The Iceman' for his astounding physical feats, such as spending hours in freezing waters and running barefoot marathons over deserts and ice fields. Join us to discover how he uses his mind to overcome his body's physical limitations. Get The Wim Hof Method from Amazon: https://geni.us/TheWimHofMethod ★ Become a Moonshot Member ★ Support this podcast on Patreon ★ Moonshots Podcast is creating a podcast to help improve yourself, your thinking, and your leadership https://www.patreon.com/Moonshots
In this deep-dive episode, Markus talks with two of the most exciting minds in the European AI and developer scene: Mario Zechner and Armin Ronacher. Both are central figures in the emerging agentic-AI ecosystem around OpenClaw and Pi, and both come from Austria. We talk about how Pi works as a minimal agent harness and why it became the foundation for OpenClaw, how "normies" can suddenly program, what that means for the identity of developers, and whether hand-written code is now "dead". We also cover: the personal stories of Mario (games, machine learning, exit to Microsoft) and Armin (Ubuntu community, Jinja, Flask, Sentry); the turbulent weeks after Peter's OpenClaw success and his move to OpenAI; Europe's structural problems: chambers of commerce, trade regulations, bureaucracy, and why it still makes sense to build here; the polarization around Peter's Armin Wolf interview, working hours, and workers' rights; and the question of how young developers can still learn software engineering when AI writes the code. As always, we close with our speed round of learnings, life hacks, book recommendations, and moonshots. Production: Hanna Moser. Music (Intro/Outro): www.sebastianegger.com
NASA has released findings from a report by the Program Investigation Team examining the Boeing CST-100 Starliner Crewed Flight Test. NASA completed a successful wet dress rehearsal for the SLS rocket that'll be used for the Artemis II mission. MDA Space has launched a wholly-owned subsidiary exclusively focused on delivering mission-critical capabilities for Canada's national defense priorities outside the space domain, and more. Remember to leave us a 5-star rating and review in your favorite podcast app. Be sure to follow T-Minus on LinkedIn and Instagram. T-Minus Guest Maria Varmazis and Alice Carruth wrap up the last daily T-Minus show. Selected Reading NASA Releases Report on Starliner Crewed Flight Test Investigation NASA Begins Artemis II Launch Pad Ops After Successful Fuel Test MDA Space Launches 49North, a Canadian defence business delivering multi-domain and mission-critical capabilities Axelspace Secures Japan Ministry of Defense Satellite Constellation Project SpaceX launches second Falcon 9 rocket to return to a landing in The Bahamas – Spaceflight Now Share your feedback. What do you think about T-Minus Space Daily? Please take a few minutes to share your thoughts with us by completing our brief listener survey. Thank you for helping us continue to improve our show. Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here's our media kit. Contact us at space@n2k.com to request more info. Want to join us for an interview? Please send your pitch to space-editor@n2k.com and include your name, affiliation, and topic proposal. T-Minus is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
The mates do a live Moonshots episode and discuss OpenAI's acquisition of Openclaw, 400x cost reduction on ARC-AGI-1, and the AI Talent War Read the Solve Everything Paper: https://solveeverything.org/ Get notified once we go live during Abundance360: https://www.abundance360.com/livestream Get access to metatrends 10+ years before anyone else - https://qr.diamandis.com/metatrends Peter H. Diamandis, MD, is the Founder of XPRIZE, Singularity University, ZeroG, and A360 Salim Ismail is the founder of OpenExO Dave Blundin is the founder & GP of Link Ventures Dr. Alexander Wissner-Gross is a computer scientist and founder of Reified – My companies: Apply to Dave's and my new fund:https://qr.diamandis.com/linkventureslanding Go to Blitzy to book a free demo and start building today: https://qr.diamandis.com/blitzy _ Connect with Peter: X Instagram Connect with Dave: X LinkedIn Connect with Salim: X Join Salim's Workshop to build your ExO Connect with Alex Website LinkedIn X Email Substack Spotify Threads Youtube Listen to MOONSHOTS: Apple YouTube – *Recorded on February 10th, 2026 *The views expressed by me and all guests are personal opinions and do not constitute Financial, Medical, or Legal advice. Learn more about your ad choices. Visit megaphone.fm/adchoices
In this episode of Learning Can't Wait, host Hayley Spira-Bauer speaks with Lucy Martin of the Children's Literacy Project about the power of storytelling to expose—and solve—the literacy crisis in America. Lucy shares her unconventional path from creative work in music, comedy, and film into documentary filmmaking focused on literacy as a justice and poverty issue. The conversation centers on Sentenced, a feature-length documentary that tells the story of adults living with illiteracy and its generational consequences, as well as the Moonshot series, which highlights districts and states—like Mississippi—that have made meaningful, systemic progress in literacy. Together, they explore why stories change culture in ways data alone cannot, how communities beyond schools can play a role, and why literacy is one of the most solvable and urgent challenges facing our education system today.
In this episode of Mining Stock Daily, technical analyst Kevin Wadsworth returns to discuss the massive "capital rotation event" currently unfolding between growth stocks and hard assets. Wadsworth examines the significance of the Dow Jones surpassing 50,000, questioning whether current market levels are being treated as a matter of national security to prevent a significant downturn. Using a proprietary "weight of evidence" matrix, he illustrates how key indicators like the U.S. dollar and money supply have entered a definitive bear market when priced in gold. Kevin details how current conditions mirror rare historical precedents from 1929, the 1970s, and the early 2000s, all of which led to a decade or more of stock market underperformance. Listeners will learn why Wadsworth anticipates significant market weakness by Q3 or Q4, potentially leading to a 50% drop in the S&P 500 if critical support levels are breached. The conversation highlights a generational opportunity in gold and silver, with silver potentially reaching targets as high as $250 per ounce as it completes a massive 46-year cup and handle pattern. Finally, Wadsworth provides a sobering technical look at Bitcoin, noting its declining momentum and severe underperformance relative to gold over the recent cycle._____TerraHutton empowers junior mining companies to secure investment with immersive, interactive, and visually striking storytelling. Learn more about the TerraHutton platform HERE______This episode of Mining Stock Daily is brought to you by... Revival Gold is one of the largest pure gold mine developer operating in the United States. The Company is advancing the Mercur Gold Project in Utah and mine permitting preparations and ongoing exploration at the Beartrack-Arnett Gold Project located in Idaho. Revival Gold is listed on the TSX Venture Exchange under the ticker symbol “RVG” and trades on the OTCQX Market under the ticker symbol “RVLGF”. 
Learn more about the company at revival-gold.com. Vizsla Silver is focused on becoming one of the world's largest single-asset silver producers through the exploration and development of the 100% owned Panuco-Copala silver-gold district in Sinaloa, Mexico. The company consolidated this historic district in 2019 and has now completed over 325,000 meters of drilling. The company has the world's largest undeveloped high-grade silver resource. Learn more at https://vizslasilvercorp.com/. Equinox has recently completed the business combination with Calibre Mining to create an Americas-focused diversified gold producer with a portfolio of mines in five countries, anchored by two high-profile, long-life Canadian gold mines, Greenstone and Valentine. Learn more about the business and its operations at equinoxgold.com. Integra Resources is a growing precious metals producer in the Great Basin of the Western United States. Integra is focused on demonstrating profitability and operational excellence at its principal operating asset, the Florida Canyon Mine, located in Nevada. In addition, Integra is committed to advancing its flagship development-stage heap leach projects: the past-producing DeLamar Project located in southwestern Idaho, and the Nevada North Project located in western Nevada. Learn more about the business and their high industry standards over at integraresources.com
Have televised confessions quelled protests in Iran? Why has Elon Musk turned from Mars to the Moon? And will the BBC prove to be a puzzles champ? Olly Mann and The Week delve behind the headlines and debate what really matters from the past seven days. With Felicity Capon, Arion McNicoll and Harriet Marsden.Image credit: Atta Kenare / AFP / Getty Images
Ubisoft employees are prepared to strike over a recent return-to-office mandate and extreme cost-cutting. Also: Circana Report on U.S. Video Game Spending for December 2025 and full year 2025, That's No Moon at odds with former founder, Amazon Games suffers another blow, and Ashes of Creation dev Intrepid implodes. You can support Virtual Economy's growth via our Ko-Fi and also purchase Virtual Economy merchandise! TIME STAMPS [00:01:04] - Circana Report on U.S. Video Game Spending for December 2025 and Full-Year 2025 [00:12:00] - That's No Moon Co-Founder Causes $1M Problem [00:21:09] - Valve Faces Down Another Class Action Suit [00:29:41] - Ubisoft in Disarray [00:52:38] - Investment Interlude [00:57:23] - Quick Hits [01:02:13] - Labor Report SOURCES That's No Moon Ex-CEO Hijacked Domain And Caused A $1M Problem, Cofounders Say In Lawsuit | Aftermath Valve Fails To Get $900m Class Action Lawsuit Over Pricing Thrown Out | Kotaku UBISOFT ANNOUNCES A MAJOR ORGANIZATIONAL, OPERATIONAL AND PORTFOLIO RESET TO RECLAIM CREATIVE LEADERSHIP AND RESTORE SUSTAINABLE GROWTH | Ubisoft Ubisoft Employee Claims He's Placed on Unpaid Suspension for Criticising Return-to-Office Policy | Insider Gaming Ubisoft Fires Team Lead For Criticising Stupid Return-To-Office Mandate | Aftermath Ex-Assassin's Creed boss Marc-Alexis Côté sues Ubisoft after his surprise and "disguised dismissal" | Eurogamer We are shutting it down:' Ubisoft unions call for international strike | Game Developer INVESTMENT INTERLUDE Industry networking platform MeetToMatch acquired by 1SP Agency | GamesIndustry Indie publisher Pixel Doors launches with a 'developer-first' mindset | Game Developer LABOR REPORT Update on our organization | Amazon Christoph Hartmann Out at Amazon | Jason Schreier on Bluesky Report: ProbablyMonsters lays off devs while unveiling Nazi hunting game | Game Developer Wildgate maker Moonshot laying off unspecified number of staff | GamesIndustry Update: Payday developer Starbreeze pushes 
ahead with 'new wave' of layoffs | Game Developer Report: NetEase Games is cutting jobs in Montreal | Game Developer Studio that developed the infamous Mighty No.9 has shut down for good | Eurogamer Anime action RPG studio Pahdo Labs shuts down despite accruing $17.5M in funding: 'We believed making a demo of a familiar but new game would be our best shot' | PCGamer Tencent subsidiary Sumo Digital is making layoffs | Game Developer MMORPG Ashes of Creation Suddenly Implodes 52 Days After Steam Early Access Launch | WCCFTech Ashes Of Creation Dev Details $3.2 Million Kickstarter Studio's Shocking Collapse: 'None Of Us Are Receiving Our Final Paychecks' | Kotaku Blizzard QA Workers' Newly Ratified Union Contract Locks In Pay Increases, Better Benefits | Aftermath
When everything feels uncertain, strong leaders don't tighten their grip—they strengthen their foundation.
In Show 284, Mike and Mark unpack Brené Brown's concept of Strong Ground and why clarity, values, and accountability matter more than ever. You'll learn how courageous leaders navigate uncertainty, how the Playback technique creates instant alignment, and why most teams struggle with communication in the first place. If leadership feels harder right now, this episode offers a grounded path forward.
Strong Ground as the foundation for resilient leadership
Courage as a skill set, not a personality trait
Navigating uncertainty with clarity and values
Alignment before action
Communication built on discipline and accountability
Strong Ground vs. Inefficient Muscles
Brené Brown's metaphor highlights how leaders overuse urgency, control, or reassurance when they lack a stable core. Strong Ground provides the balance and strength needed to lead calmly under pressure.
The Four Skill Sets of Courage
Revisiting Dare to Lead, courage is framed as a practice leaders can learn—enabling honest conversations, clear boundaries, and consistent accountability.
Diving Into Uncertainty
Avoiding discomfort weakens leadership. This segment explores how grounded leaders move toward uncertainty with confidence and curiosity.
The Playback Technique
Playback ensures absolute clarity before moving forward. 
By restating agreements, teams eliminate confusion and prevent misalignment.
Why We Suck at Communication
Communication failures stem from a lack of clarity, discipline, and accountability—not from a lack of effort or intelligence.
Strong Ground Check: Re-anchor decisions to values during change
Playback Before Action: Confirm shared understanding explicitly
Courage Practice: Treat bravery as a trainable skill
Clarity First: Slow conversations to speed execution
Accountability Loops: Define ownership and outcomes clearly
Get the book on Amazon https://geni.us/OMvekT
Recognize when you're overcompensating instead of grounding yourself
Apply Brené Brown's courage framework to real leadership challenges
Use Playback to eliminate confusion and build trust
Improve communication through clarity and discipline
Lead with steadiness, even when the environment is unstable
Expanded Concepts & Insights: Key Themes; Concepts & Breakthroughs; Habits, Tools & Mental Models; Listener Takeaways
Become a Member of the Moonshots Podcast: https://www.patreon.com/Moonshots
Recorded live at Apollo House 2026, this fireside chat captures a candid, in-the-room conversation between StartUp Health's Unity Stoakes and entrepreneur, investor, and technologist Vinod Khosla of Khosla Ventures on AI's accelerating impact on healthcare. Khosla explains why he believes AI is a platform shift larger than the internet or mobile, and how that shift could unlock access to high-quality care for billions. The conversation explores his “AI intern” model for healthcare, why copilots often underperform in complex clinical work, and why trust, supervision, and fit-for-purpose guardrails are essential. A live exchange with Esther Dyson adds perspective on empathy, communication, and the enduring human dimensions of care. As a live recording, the audio reflects the energy of the room rather than a studio setting. Do you want to participate in live conversations with industry luminaries? When you join the StartUp Health Network – a new private community for investors, buyers, and industry leaders to connect year-round with top health entrepreneurs – you are invited to a full calendar of interactive Fireside Chats with the most influential leaders shaping health innovation. Come with questions, learn what is working right now, and connect with industry icons. » Learn more and join today. Want more content like this? Sign up for StartUp Health Insider™ to get funding insights, news, and special updates delivered to your inbox.
Our 233rd episode with a summary and discussion of last week's big AI news!

Recorded on 01/30/2026

Hosted by Andrey Kurenkov and Jeremie Harris

Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

Check out our text newsletter and comment on the podcast at https://lastweekin.ai/

In this episode:
• Google introduces the Gemini AI agent in Chrome for advanced browser functionality, including auto-browsing for Pro and Ultra subscribers.
• OpenAI releases ChatGPT Translator and Prism, expanding its applications beyond its core business into language translation and scientific research assistance.
• Significant funding rounds and valuations for the startups Recursive and Neurophos, focused on specialized AI chips and optical processors respectively.
• Political and social issues, including violence in Minnesota, prompt AI leaders such as Dario Amodei of Anthropic and Jeff Dean of Google to express concerns about the current administration's actions.

Timestamps:
(00:00:10) Intro / Banter

Tools & Apps
(00:04:09) Google adds Gemini AI-powered ‘auto browse' to Chrome | The Verge
(00:07:11) Users flock to open source Moltbot for always-on AI, despite major risks - Ars Technica
(00:13:25) Google Brings Genie 3 'World Building' Experiment to AI Ultra Subscribers - CNET
(00:16:17) OpenAI's ChatGPT translator challenges Google Translate | The Verge
(00:18:27) OpenAI launches Prism, a new AI workspace for scientists | TechCrunch

Applications & Business
(00:19:49) Exclusive: China gives nod to ByteDance, Alibaba and Tencent to buy Nvidia's H200 chips - sources | Reuters
(00:22:55) AI chip startup Recursive hits $4B valuation 2 months after launch
(00:24:38) AI Startup Recursive in Funding Talks at $4 Billion Valuation - Bloomberg
(00:27:30) Flapping Airplanes and the promise of research-driven AI | TechCrunch
(00:31:54) From invisibility cloaks to AI chips: Neurophos raises $110M to build tiny optical processors for inferencing | TechCrunch

Projects & Open Source
(00:35:34) Qwen3-Max-Thinking debuts with focus on hard math, code
(00:38:26) China's Moonshot releases a new open-source model Kimi K2.5 and a coding agent | TechCrunch
(00:46:00) Ai2 launches family of open-source AI developer agents that adapt to any codebase - SiliconANGLE
(00:47:46) Tiny startup Arcee AI built a 400B-parameter open source LLM from scratch to best Meta's Llama

Research & Advancements
(00:52:53) Post-LayerNorm Is Back: Stable, Expressive, and Deep
(00:58:00) [2601.19897] Self-Distillation Enables Continual Learning
(01:03:04) [2601.20802] Reinforcement Learning via Self-Distillation
(01:05:58) Teaching Models to Teach Themselves: Reasoning at the Edge of Learnability

Policy & Safety
(01:09:13) Amodei, Hoffman Join Tech Workers Decrying Minnesota Violence - Bloomberg

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
What if 5 to 10% of your marketing budget could create a moment so big it actually changes how people feel about your brand? In this episode of Marketing People Love, I sit down with Rabah Rahil, one of the most creative marketers I have ever worked with and the mind behind some of the most memorable moments in modern B2B and SaaS marketing. From helping Triple Whale raise $25M and creating the Whaley's Awards to launching Fermat's SaaS couture drops that took over the internet, Rabah shares how to design marketing people genuinely want to be part of. We talk about moonshot marketing and why small, intentional bets can drive outsized impact when your fundamentals are solid. Rabah breaks down what it really takes to build community through events and experiences, how high-FOMO moments are engineered, and why the pre-event build-up, the live experience, and the post-event momentum all matter. We also get into why most brand merch fails, how Fermat flipped the script by creating products people actually wanted, and where art, science, and humanity intersect in great marketing. If you want to think bigger, take smarter risks, and create marketing people actually love, this episode will change how you see what is possible.
Katherine Johnson was a mathematician whose calculations helped send astronauts to the moon. Despite facing racism and sexism, she broke barriers at NASA and proved that with determination, you can reach for the stars. This podcast is a production of Rebel Girls. It's based on the book series Good Night Stories for Rebel Girls. This episode was narrated by Nicole Pringle. It was written and produced by Danielle Roth, and edited by Haley Dapkus. Direction by Ashton Carter. Sound design and mixing by Carter Wogahn. Fact checking by Sam Gebauer. Our production coordinator was Natalie Hara. Haley Dapkus was our senior producer. Our executive producers were Anjelika Temple and Jes Wolfe.Original theme music was composed and performed by Elettra Bargiacchi.A special thanks to the whole Rebel Girls team, who make this podcast possible! Until next time, stay rebel!
This is a repost from the Moonshots archive featuring Elizabeth Gilbert and her book Big Magic. We're sharing it as a companion to the work of Brené Brown — and ahead of our upcoming episode on Strong Ground. Big Magic is a reminder that creativity doesn't require fearlessness — only curiosity, courage, and the willingness to show up.

In this episode of the Moonshots Podcast, hosts Mike and Mark dive into the enchanting world of Big Magic by Elizabeth Gilbert. This book has inspired countless creatives to live beyond fear and embrace the magic of creativity. Whether you're an artist, writer, or someone looking to infuse more creativity into your life, this episode offers a treasure trove of insights and practical advice.

Listen and Learn More:
• Listen to the Episode: Episode 143 – Elizabeth Gilbert: Big Magic
• Watch on YouTube: Big Magic by Elizabeth Gilbert | Book Summary
• Read a Summary: Creative Living Beyond Fear – Elizabeth Gilbert | Book Summary
• Become a Member: Support the Moonshots Podcast on Patreon

Episode Highlights:
• What is Big Magic? Discover the essence of Big Magic and how Elizabeth Gilbert views creative inspiration as a mysterious force that calls us to engage with it.
• Lessons on Confidence: Learn why fear shouldn't stop you from creating and how permitting yourself to fail can lead to unexpected breakthroughs. Explore getting comfortable with your fears rather than overcoming them entirely.
• Lessons on Creating: Understand the difference between originality and authenticity and why Gilbert champions the latter as the key to meaningful creative work. Find out why finishing your creative projects, even imperfectly, is more important than striving for unattainable perfection.
• Final Takeaways: Mike and Mark wrap up the episode by discussing embracing your inner creative trickster and why taking yourself too seriously might be the most significant barrier to your creative success.

Why You Should Listen: If you've ever struggled with fear, self-doubt, or the pressure to be perfect, this episode will resonate deeply. Gilbert's approach to creativity is liberating and empowering, reminding us that the journey is just as important as the destination.
In this first episode of "LA Made: The Other Moonshot": America aims for the moon. President John F. Kennedy stands proudly behind the mission to advance the country and welcomes a diverse team to get the job done. That team includes three Black engineers with star-studded backgrounds — Charlie Cheathem, Nathaniel LeVert and Shelby Jacobs. However, the three men quickly realize that social progress is slower than scientific advancement.
A trip around the moon has been delayed because of the weather. AP's Lisa Dwyer reports.
Send us a text

Invest in pre-IPO stocks with AG Dillon & Co. Contact aaron.dillon@agdillon.com to learn more. Financial advisors only. www.agdillon.com

00:00 - Intro
00:07 - Decagon's $250M Round Triples Valuation to $4.5B
01:17 - xAI Gets $2B From Tesla After $20B Series E
02:06 - Synthesia Raises $200M at $4B as ARR Targets Hit $200M
03:25 - SpaceX Eyes June 2026 IPO With $50B Raise at $1.5T
03:52 - SpaceX Starship V3 Heads for Mid-March
04:27 - SpaceX Starlink Lands Gulf Air Deal as Fleet Rollout Begins Mid-Year
05:19 - Richard Socher's Recursive Talks $4B Valuation and Big Compute Spend
06:12 - Recursive Hits $4B Valuation Two Months After Launch With $300M Series A
07:04 - Automation Anywhere in Talks to Buy C3.AI and Go Public by Merger
08:02 - Harvey Buys Hexus as $8B Secondary Valuation Holds Flat
08:54 - Anduril Plans 5,500-Job Long Beach Buildout as Valuation Jumps 157%
09:49 - Anthropic Brings Slack and Figma Into Claude as Valuation Tops $370B
10:54 - Anthropic Wins GOV.UK Pilot to Build AI Assistant for Job Seekers
11:48 - Anthropic Forecasts $18B Sales This Year, $55B Next, But Pushes Profit to 2028
13:02 - Moonshot's Kimi K2.5 Upgrades to Omni as Valuation Target Moves Toward $5B
13:53 - OpenAI Preps for Q4 2026 IPO
14:32 - OpenAI's $100B Raise Takes Shape With Amazon at $50B and SoftBank at $30B
15:17 - OpenAI Launches Prism on GPT-5.2 as Science Usage Hits 8.4M Messages a Week
16:28 - Tether Launches USAT With Anchorage as US Stablecoin Rules Tighten
17:47 - Redwood Upsizes Series E to $425M as Google Joins and Valuation Clears $6B
18:56 - US DOE Cuts Nuclear Rulebook by a Third as Startups Raise $1B+
19:51 - Perplexity Signs $750M Microsoft Cloud Deal While Keeping AWS Preferred
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
DeepMind's AlphaGenome, The Amazon Layoffs, & China's Moonshot K2.5Full Audio including Detailed Analysis at https://podcasts.apple.com/us/podcast/ai-business-and-development-daily-news-rundown/id1684415169?i=1000747119039
The AI Breakdown: Daily Artificial Intelligence News and Discussions
Agent swarms are quickly moving from theory to practice, with early 2026 model releases making coordinated, multi-agent work feel like a real shift rather than a niche experiment. This episode focuses on Moonshot's Kimi K2.5, what its agent swarm design reveals about the future of AI work, and why this may mark a transition from single assistants to teams of AI operating in parallel. In the headlines: Anthropic's huge new funding round and revised revenue forecasts, Nvidia chip sales reopening in China, a UK-wide AI upskilling initiative, and new agentic features from Google and Chinese labs.

Brought to you by:
KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts
Zencoder - From vibe coding to AI-first engineering - http://zencoder.ai/zenflow
Optimizely Opal - The agent orchestration platform built for marketers - https://www.optimizely.com/theaidailybrief
AssemblyAI - The best way to build Voice AI apps - https://www.assemblyai.com/brief
Section - Build an AI workforce at scale - https://www.sectionai.com/
LandfallIP - AI to Navigate the Patent Process - https://landfallip.com/
Robots & Pencils - Cloud-native AI solutions that power results - https://robotsandpencils.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.

The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Interested in sponsoring the show? sponsors@aidailybrief.ai
Elon Musk went on the Moonshots podcast and said you don't need to save for retirement anymore because AI + robots will make work optional and money won't matter.

If you're 55+, sitting on seven figures in a 401(k)/IRA, and you're trying to figure out when you can stop working, travel more, and play more golf — this episode is for you.

In this video, I'll:
• Play Elon's quote and explain what's going on
• Break down the key takeaways from the full interview (energy/solar, longevity, UHI)
• Explain why it's a terrible idea to change your retirement plan based on a viral clip
• Give you 3 smarter moves you can make right now

The 3 smarter retirement moves:
1. Plan for longevity (modern medicine + AI could mean a longer retirement)
2. Plan for higher taxes (UHI / Social Security / Medicare strain = tax risk)
3. Plan for earlier retirement (AI disruption + layoffs could push you out sooner than expected)

Are you interested in working with me 1 on 1? Click this link to fill out our Retirement Readiness Questionnaire
Or, visit my website

Connect with me here:
YouTube
Join My Company Newsletter

This is for general education purposes only and should not be considered as tax, legal or investment advice.
Lukasz Gadowski is one of Germany's best-known internet entrepreneurs and investors. In this special long-form episode, he talks about his journey from the early days of Spreadshirt and Delivery Hero to investments in air taxis, battery technology, and lasers. The conversation covers the differences between the European and US startup ecosystems, political and economic hurdles, the lessons from his biggest mistakes, and the question of how Europe could produce genuine technology giants.

What you'll take away from this episode:
Why Europe still lags structurally behind the US, and how a common capital market and consistent industrial policy could enable real tech giants
Lukasz's shift from internet to deep-tech investments: air taxis, lasers, batteries, energy, and what drives him today
Why innovation is hard inside large corporations, and how the innovator's dilemma keeps the old economy from truly advancing new technologies
Expensive mistakes and hard-won lessons: premature scaling, hardware risks, and the difference between being an investor and a founder
How Lukasz approaches new fields: systematic analysis of technology generations, moonshots, and the willingness to commit for years
Career and learning advice for young people: five areas (finance, technology, economics, art, law), theory and practice, emotional stability, and meditation as a "trump card"

EVERYTHING ABOUT UNICORN BAKERY: https://stan.store/fabiantausch
More about Lukasz:
LinkedIn: https://www.linkedin.com/in/lukaszgadowski/
Website: https://www.teamglobal.net/
Join our Founder Tactics Newsletter: twice a week, get the tactics of the world's best founders straight to your inbox: https://www.tactics.unicornbakery.de/

Chapters:
(00:00:00) Why is Europe lagging behind?
(00:02:37) Champions League: USA vs. Europe
(00:06:38) What would have to change in Europe and Germany?
(00:09:22) Capital markets, industrial policy, and the innovator's dilemma
(00:17:24) What would politics for more innovation look like?
(00:26:18) From consumer internet to deep tech: Lukasz's change of focus
(00:30:21) Differences: internet economy vs. deep tech
(00:35:10) Why not invest in AI (yet)?
(00:37:58) How angel investing has changed
(00:38:56) Investor, co-founder, or both?
(00:41:17) Mistakes and lessons from 20 years of entrepreneurship
(00:44:36) The most expensive mistakes: Cirque and premature scaling
(00:47:14) What separates successful markets from less successful ones?
(00:49:44) Next milestones: Spreadgroup, Miles, lasers, batteries, air taxis
(00:54:26) Portfolio management: how deeply involved should you be?
(00:56:17) Energy, politics, and the next generation of the power grid
(00:57:00) Looking back: Gründerszene, media, and startup culture
(00:59:13) What Lukasz advises young people today
Brené Brown: Atlas of the Heart | Emotional Intelligence, Naming Emotions & Values-Based Living (Moonshots Podcast #283)

What if the key to personal growth, better relationships, and stronger leadership isn't doing more—but understanding what you feel?

In Moonshots Podcast Episode 283, Mike and Mark explore Atlas of the Heart by Brené Brown, a groundbreaking framework for building emotional intelligence through language, courage, and clarity. Brené challenges us to notice the emotional patterns we repeat—and asks whether avoiding certain feelings is quietly limiting our growth.

This episode begins with a powerful intervention: if we want different results in life, we must be willing to feel what we've been unwilling to feel before. Drawing on psychology, neuroscience, and lived experience, Brené explains why “naming is taming”—how accurately labeling emotions reduces emotional reactivity and restores choice.

Mike and Mark unpack key emotional states such as envy, comparison, and schadenfreude, reframing them through a team mindset inspired by Abby Wambach. Instead of seeing others' success as a threat, we learn how emotional awareness helps us move from scarcity to connection.

The episode closes with a practical exploration of emotional regulation vs. emotional suppression, showing how emotions can become reliable data—guiding us toward values-based decisions, intentional living, and healthier relationships.
The Space Show Presents A Special Open Lines Discussion, Sunday, 1-11-26

Quick summary
This program focused on discussing space industry developments and future predictions for 2026, with participants exploring topics like advancements in AI, robotics, and space technology. They debated the influence of private sector leaders like Elon Musk and Eric Schmidt on space policy and innovation, while also examining the educational requirements needed to support future space endeavors. The group discussed the potential for breakthroughs in propulsion and energy solutions, as well as the search for extraterrestrial life, though they agreed current technologies would not yield significant results by 2026. The conversation concluded with reflections on how space advocacy might evolve over the next decade, particularly as costs decrease and more private sector involvement emerges.

Summary
Our program got underway by discussing Dr. Phil Metzger's list of 20-21 important developments for the space industry in 2026, with John Jossy presenting key items. The discussion highlighted significant developments such as declining launch costs, reusable rocket technology, satellite broadband constellations, and AI-driven applications of satellite data. Negative impacts were also discussed, including supply chain volatility for semiconductors and potential delays in mega constellations due to AI demand and export rules. The Wisdom Team also touched on upcoming programs, including a special edition of The Space Show and a new Tuesday program featuring a CEO from a European company.

We discussed Elon Musk's vision for medical robots and AI, with Marshall expressing both optimism and discomfort about the rapid pace of technological advancement. They explored Musk's plans for Starlink satellites, including in-space maintenance and potential cost savings, though settlement on Mars and the Moon was not extensively discussed.
The conversation covered broader topics including AI's impact on labor, universal basic income, and the role of education in a changing world, with John Jossy noting that the discussion was part of Peter Diamandis' Moonshot podcast series.

I believe that a valuable part of our overall discussion looked at the influence of innovative leaders in the space sector, with Manuel expressing concerns about the dominance of a few individuals, while David and John Jossy highlighted the need for ethical regulations and oversight. They debated the challenges of supervising innovative leaders like Elon Musk and David Sacks, with John Jossy emphasizing Sacks's role in advising the administration on AI regulations. Marshall agreed with David's point about the difficulty of overseeing geniuses, suggesting that market forces often limit harmful innovations. This part of the program concluded with a discussion on the future of space, including the roles of the private sector and state actors, and the potential for partnerships between governments and the private sector.

The Space Show Wisdom Team discussed future space exploration and technology developments over the next 10 years. Ryan predicted increased automation and robotics in orbital operations, while Marshall envisioned multiple lunar bases and the construction of space cities for manufacturing and AI development. David noted the absence of discussion on breakthrough propulsion technologies and emphasized the need for innovations that could benefit humanity on Earth. John Hunt mentioned Jared Isaacman's interest in nuclear propulsion for NASA, and Marshall suggested that nuclear fusion could be developed and used for space exploration, though primarily for pushing exploratory satellites.

Future space technology and innovation was another topic, focusing on the potential of fusion energy, space solar power, and reduced costs for launching payloads to low Earth orbit (LEO).
Marshall highlighted the significance of Starship Block 3, which is expected to significantly lower the cost per kilogram to LEO, enabling more projects and innovations. John Jossy mentioned ongoing developments in wireless power transmission and space-based solar power for AI data centers. David raised questions about the dependency of space innovation on government policies, suggesting a necessary relationship between public sector support and private sector progress. The group agreed that 2026 could mark a significant breakthrough in space technology, driven by advancements in Starship and reduced launch costs.

The group also pointed to the potential political influence on emerging technologies, particularly in sectors like transportation and communications, with Ryan noting the significant financial interests at play. Marshall highlighted the challenges of adapting government agencies to innovations like robo-taxis and robo-airplanes, predicting major shifts in how air traffic control and state regulations function. John Jossy emphasized AI as the primary driver of current innovation, citing its impact on industries and venture capital investments, while Marshall and David agreed that AI development is closely linked to changes in energy production and societal education. David stressed the need for a strong educational foundation to support advancements in space and AI, expressing concern about the United States' declining educational performance compared to countries like China and Japan.

The Wisdom Team discussed educational challenges in the United States, with John Jossy emphasizing the need to address root causes of poor educational outcomes at local and state levels. Manuel shared examples from Peru and Europe, including a public sector initiative for high-performing students and apprenticeship programs, while John Hunt noted increased STEM requirements in Missouri schools.
The discussion highlighted the importance of educating competent individuals to meet future innovation and technology demands, with no clear consensus on specific solutions.

The group discussed educational changes over time, with David and Marshall sharing their experiences with calculus and practical applications. They explored the possibility of using AI to improve education systems. The conversation then shifted to the search for extraterrestrial life, with John Jossy stating that current technologies are not advanced enough to detect extraterrestrial life in 2026. The group also discussed the recent announcement by Eric Schmidt of Relativity Space regarding funding for a replacement for the Hubble Space Telescope and three additional telescopes, with a projected cost of at least half a billion dollars. Finally, David posed a question about the future of space advocacy over the next 5-10 years, but the group did not reach a consensus on this topic.

Also discussed were future trends in space advocacy and conferences, with Marshall suggesting that in 10 years, conferences might focus more on financing and promoting personal space projects rather than academic presentations. Dr. Zubrin's potential future involvement in space advocacy was mentioned, noting that at 74, he could continue his Mars advocacy work for another 20-25 years. The conversation ended with David announcing upcoming guests for the show, including Guy Schumann from Luxembourg, and a discussion about foreign spaceports, with Mark Whittington preparing a program about international spaceport developments.

Special thanks to our sponsors: American Institute of Aeronautics and Astronautics, Helix Space in Luxembourg, Celestis Memorial Spaceflights, Astrox Corporation, Dr.
Haym Benaroya of Rutgers University, The Space Settlement Progress Blog by John Jossy, The Atlantis Project, and Artless Entertainment

Our Toll Free Line for Live Broadcasts: 1-866-687-7223 (Not in service at this time)
For real-time program participation, email Dr. Space at drspace@thespaceshow.com for instructions and access.

The Space Show is a non-profit 501(c)(3) through its parent, One Giant Leap Foundation, Inc.
To donate via PayPal, use:
To donate with Zelle, use the email address: david@onegiantleapfoundation.org
If you prefer donating with a check, please make the check payable to One Giant Leap Foundation and mail to:
One Giant Leap Foundation, 11035 Lavender Hill Drive Ste. 160-306, Las Vegas, NV 89135

Upcoming Programs:
Broadcast 4487 Zoom: Guy Schumann | Tuesday 13 Jan 2026 930AM PT
Broadcast 4488 Zoom: Dr. Armen Papazian | Friday 16 Jan 2026 930AM PT
Guests: Dr. Armen Papazian
Armen presents his latest space economics paper, which is posted on The Space Show blog for this program.
Broadcast 4489 Zoom: Dan Adamo | Sunday 18 Jan 2026 1200PM PT
Guests: Dan Adamo
Zoom: Dan discusses the special lunar orbit being used for the Artemis program

Get full access to The Space Show-One Giant Leap Foundation at doctorspace.substack.com/subscribe
Get access to metatrends 10+ years before anyone else

Tony Robbins is a world-renowned American motivational speaker, life coach, author, and entrepreneur. Join Tony's free summit.
Salim Ismail is the founder of OpenExO
Dave Blundin is the founder & GP of Link Ventures
Dr. Alexander Wissner-Gross is a computer scientist and founder of Reified

My companies:
Apply to Dave's and my new fund: https://qr.diamandis.com/linkventureslanding
Go to Blitzy to book a free demo and start building today: https://qr.diamandis.com/blitzy

Grab dinner with MOONSHOT listeners: https://moonshots.dnnr.io/

Connect with Tony: X | Instagram | Website
Connect with Peter: X | Instagram
Connect with Dave: X | LinkedIn
Connect with Salim: X | Join Salim's Workshop to build your ExO
Connect with Alex: Website | LinkedIn | X | Email

Listen to MOONSHOTS: Apple | YouTube

*Recorded on January 7th, 2026
*The views expressed by me and all guests are personal opinions and do not constitute Financial, Medical, or Legal advice.

Learn more about your ad choices. Visit megaphone.fm/adchoices
What if your website could spot its own problems, fix them, and quietly make more money while you focus on building your business? That question sat at the heart of my conversation with Aviv Frenkel, co-founder and CEO of Moonshot AI, and it speaks to a frustration almost every founder and digital leader recognizes. Traffic is expensive, attention is fragile, and even small issues in design or flow can quietly drain revenue for months before anyone notices. Traditional optimization often means long cycles, internal debates, and teams juggling analytics, design tools, and testing platforms while hoping the next experiment moves the needle. Aviv's perspective is shaped by lived experience. Before building Moonshot AI, he ran an e-commerce company that had plenty of visitors but disappointing conversion. Like many founders, he watched teams guess at fixes, wait weeks for tests to run, then struggle to link effort to outcome. Moonshot AI was born from that frustration, with a simple ambition: let the website diagnose what is broken, generate solutions, test them, and deploy the winner automatically, without the need for a dedicated growth team. In our discussion, Aviv explained how Moonshot focuses on front-end experience and site performance, spotting issues such as unclear value propositions, poorly placed calls to action, or confusing mobile navigation. The platform generates its own design, copy, and code variants, runs live tests, and then rolls out what actually works. The results are hard to ignore. Brands across beauty, fashion, jewelry, and consumer electronics are seeing revenue per visitor lift by thirty to fifty percent within months. One small change to a mobile navigation menu at Hugh Jewelry led to a fifty-seven percent increase in revenue per visitor, which is the kind of outcome that gets leadership teams paying attention. We also talked about momentum behind the company itself.
A recently announced ten million dollar seed round has given Moonshot AI the resources to scale engineering and go-to-market teams at a time when demand is accelerating fast. But beyond funding and growth charts, what stood out most was Aviv's longer-term view. As more people turn to AI assistants and agents instead of traditional search, websites need to be structured so machines can understand them as clearly as humans. Moonshot is already optimizing for that future, preparing sites for an agent-driven web where the customer might be an algorithm as much as a person. Aviv also shared his personal journey, moving from a successful career as a tech journalist and TV host into the far more humbling world of building companies. Rejection, uncertainty, and hard lessons came with the territory, but so did clarity. His guiding idea, inspired by Jeff Bezos, is a minimum regret mindset, choosing the harder path now to avoid looking back later and wondering what might have been. So as AI moves from tools that assist to systems that act, and as websites become active participants in growth rather than static assets, the big question becomes this. Are you still relying on slow, manual optimization cycles, or are you ready to let your website start improving itself, and what does that shift mean for how you build and scale in the years ahead? Useful Links Connect with Aviv Frenkel Learn More About Moonshot AI Follow on LinkedIn Thanks to our sponsors, Alcor, for supporting the show.
Get access to metatrends 10+ years before anyone else - https://qr.diamandis.com/metatrends Salim Ismail is the founder of OpenExO Dave Blundin is the founder & GP of Link Ventures Dr. Alexander Wissner-Gross is a computer scientist and founder of Reified – My companies: Apply to Dave's and my new fund:https://qr.diamandis.com/linkventureslanding Go to Blitzy to book a free demo and start building today: https://qr.diamandis.com/blitzy _ Grab dinner with MOONSHOT listeners: https://moonshots.dnnr.io/ Connect with Peter: X Instagram Connect with Dave: X LinkedIn Connect with Salim: X Join Salim's Workshop to build your ExO Connect with Alex Website LinkedIn X Email Listen to MOONSHOTS: Apple YouTube – *Recorded on January 7th, 2026 *The views expressed by me and all guests are personal opinions and do not constitute Financial, Medical, or Legal advice. Learn more about your ad choices. Visit megaphone.fm/adchoices
Is your business ready for the AI deployment wave that just hit in 2025? Do you know which AI models and tools actually shipped, and which were just hype? Are you leveraging small and edge models, such as Nano Banana Pro, to stay ahead? What if your competitors are already using AI agents embedded in browsers and workflows?

Hello, AI Entrepreneurs Community! Today, we are excited to break down the AI tsunami of 2025. This year, AI moved from headlines to hands-on usage across education, shopping, search, creative tools, and enterprise environments. We're diving into the most significant AI releases, from GPT-5.2's deep reasoning tiers to Gemini's takeover of the classroom, ChatGPT's entry into shopping, and China's explosive AI expansion with Moonshot, Qwen, and DeepSeek.

This isn't just an update: it's your 2025 AI field guide, covering every verified product, platform, and deployment that truly mattered. Whether you're a founder, investor, or builder, this is your ultimate catch-up guide to the most verified, impactful, and game-changing AI developments across the globe.
RNT_ Trump, SCOTUS, moonshots, and peace breakdowns
Egor Olteanu came to the US with his family as a teenager and joined the US Army after high school. He joined Google X after college and was lucky to work on some of the coolest R&D projects, like Google Loon. Egor started VOLT with his co-founder in 2019. He loves spending his free time outdoors and is an avid skydiver, scuba diver, and motorcycle/snowmobile rider. Egor has a BA in International Relations and an MBA from American University in Washington, DC.

Connect with Jon Dwoskin: Twitter: @jdwoskin Facebook: https://www.facebook.com/jonathan.dwoskin Instagram: https://www.instagram.com/thejondwoskinexperience/ Website: https://jondwoskin.com/ LinkedIn: https://www.linkedin.com/in/jondwoskin/ Email: jon@jondwoskin.com Get Jon's Book: The Think Big Movement: Grow your business big. Very Big!

Connect with Egor Olteanu: Website: volt.ai LinkedIn: https://www.linkedin.com/in/egoro/

*E – explicit language may be used in this podcast.
In the final part of our exploration of Jon Kabat-Zinn's Nine Attitudes of Mindfulness, Mike and Mark slow things down, intentionally. This episode is an invitation to rest, to release the constant pressure to improve, and to rediscover the power of presence without agenda.

Jon opens the conversation by reframing rest itself. True rest, he suggests, is not collapse or avoidance, but a no-agenda way of being, where nothing needs to be fixed, achieved, or optimised. From this place, mindfulness naturally deepens.

The episode then explores non-striving, one of the most misunderstood attitudes of mindfulness. Jon reminds us that growth doesn't come from forcing outcomes, but from allowing life to unfold as it already is. When striving drops away, awareness has room to do its work.

From there, Jon reflects on gratitude and generosity, encouraging us to meet each moment with appreciation and to be generous not just with things, but with our time, attention, and energy. These attitudes shift mindfulness from an inward practice to a way of relating to the world.

As the episode unfolds, Jon beautifully weaves all nine attitudes together through the lens of heartfulness, showing how mindfulness is not a collection of techniques but an integrated way of living with presence, compassion, and care.

In the closing bonus reflection, Jon turns to mind-wandering and the rehabilitation of the present moment, a cornerstone of Mindfulness-Based Stress Reduction (MBSR). Rather than seeing distraction as failure, he reframes it as an opportunity: each noticing becomes a gentle return to now.

This episode matters because it offers an antidote to burnout culture. Instead of pushing harder, it invites us to trust awareness, soften our effort, and remember that the present moment is already enough.
The Geek in Review closes 2025 with Greg Lambert and Marlene Gebauer welcoming back Sarah Glassmeyer and Niki Black for round two of the annual scorecard: equal parts receipts, reality check, and forward look into 2026. The conversation opens with a heartfelt remembrance of Kim Stein, a beloved KM community builder whose generosity showed up in conference dinners, happy hours, and day-to-day support across vendors and firms. With Kim's spirit in mind, the panel steps into the year-end ritual: name the surprises, own the misses, and offer a few grounded bets for what comes next.

Last year's thesis predicted a shift from novelty to utility, yet 2025 felt closer to a rolling hype loop. Glassmeyer frames generative AI as a multi-purpose knife dropped on every desk at once, which left many teams unsure where to start, even when budgets were already committed. Black brings the data lens: general-purpose gen AI use surged among lawyers, especially solos and small firms, while law firm adoption rose fast compared with earlier waves such as cloud computing, which crawled for years before pandemic pressure moved the needle. The group also flags a new social dynamic, status-driven tool chasing, plus a quiet trend toward business-tier ChatGPT, Gemini, and Claude as practical options for many matters when the price tags for legal-only platforms sit out of reach for smaller shops.

Hallucinations stay on the agenda, with the panel resisting both extremes: doom posts and fan-club hype. Glassmeyer recounts a founder's quip, "hallucinations are a feature, not a bug," then pivots to an older lesson from KeyCite and Shepard's training: verification never goes away, and lawyers always owed diligence, even before LLMs. Black adds a cautionary tale from recent sanctions, where a lawyer ran the same research through a stack of tools, creating a telephone effect and a document nobody fully controlled.
Lambert notes a bright spot from the past six months: legal research outputs improved as vendors paired vector retrieval with legal hierarchy data, including court relationships and citation treatment, reducing off-target answers even while perfection stays out of reach.

From there, the conversation turns to mashups across the market. Clio's acquisition of vLex becomes a headline example, raising questions about platform ecosystems, pricing power, and whether law drifts toward an Apple-versus-Android split. Black predicts integration work across billing, practice management, and research will matter as much as M&A, with general tech giants looming behind the scenes. Glassmeyer cheers broader access for smaller firms, while still warning about consolidation scars from legal publishing history and the risk of feature decay once startups enter corporate layers. The panel lands on a simple preference: interoperability, standards, and clean APIs beat a future where a handful of owners dictate terms.

On governance, Black rejects surveillance fantasies and argues for damage control, strong training, and safe experimentation spaces, since shadow usage already happens on personal devices. Gebauer pushes for clearer value stories, and the guests agree early ROI shows up first in back-office workflows, with longer-run upside tied to pricing models, AFAs, and buyer pushback on inflated hours. For staying oriented amid fractured social channels, the crew trades resources: AI Law Librarians, Legal Tech Week, Carolyn Elefant's how-to posts, Moonshots, Nate B. Jones, plus Ed Zitron's newsletter for a wider business lens. The crystal ball segment closes with a shared unease around AI finance, a likely shakeout among thinly funded tools, and a reminder to keep the human network strong as 2026 arrives.
Nickolas Natali is on a mission to hit $100K in monthly recurring revenue, and he's doing it out loud. In this inspiring and wildly honest conversation, Stacey Lauren sits down with Nickolas just weeks into launching "Project Moonshot," a high-stakes challenge to radically scale his business. From living in a 1986 Suburban to becoming an ad agency founder, Nick shares the power of resourcefulness, gritty goal-setting, and community in staying the course. This one's a masterclass in doing the thing, especially when it's scary.

In This Episode:
00:00 – Launching "Project Moonshot" & the power of saying goals out loud
04:55 – How Nick hacked a 3-year degree + the roots of resourcefulness
08:22 – Paying off $60K in student loans by living in a car
13:10 – Why Stacey launched the Billion Dollar Impact Marketplace
18:41 – Integrity, community, and not giving up (even when it's hard)
26:07 – Group psychology, impact entrepreneurship & moral marketing
35:55 – Comfort Circles & helping people find their voice
47:22 – Nick's pivot from podcasting to paid ads
52:40 – The truth about repetitive discipline vs. shiny-object syndrome
59:02 – Nick's real-time roadmap to $100K/month
1:02:40 – Stacey gets coached by Nick, live!
1:09:15 – The best kind of contagious growth

YouTube: https://youtu.be/JiFjrTPOQtc Apple Podcasts: https://podcasts.apple.com/us/podcast/project-moonshot-nickolas-natalis-bold-%24100k-goal-the/id1618590178?i=1000742898505 Spotify: https://open.spotify.com/episode/6VRsceUuv4qFZ0MjPAohiF
Get access to metatrends 10+ years before anyone else - https://qr.diamandis.com/metatrends Matthew Fitzpatrick is the CEO at Invisible Technologies Learn about Invisible Salim Ismail is the founder of OpenExO Dave Blundin is the founder & GP of Link Ventures Dr. Alexander Wissner-Gross is a computer scientist and founder of Reified – My companies: Apply to Dave's and my new fund:https://qr.diamandis.com/linkventureslanding Go to Blitzy to book a free demo and start building today: https://qr.diamandis.com/blitzy Grab dinner with MOONSHOT listeners: https://moonshots.dnnr.io/ _ Connect with Peter: X Instagram Connect with Matthew Linkedin Connect with Dave: X LinkedIn Connect with Salim: X Join Salim's Workshop to build your ExO Connect with Alex Website LinkedIn X Email Listen to MOONSHOTS: Apple YouTube – *Recorded on December 16th, 2025 *The views expressed by me and all guests are personal opinions and do not constitute Financial, Medical, or Legal advice. Learn more about your ad choices. Visit megaphone.fm/adchoices
Get access to metatrends 10+ years before anyone else - https://qr.diamandis.com/metatrends Emad Mostaque is the founder of Intelligent Internet ( https://www.ii.inc ) Read Emad's Book: https://thelasteconomy.com Salim Ismail is the founder of OpenExO Dave Blundin is the founder & GP of Link Ventures Dr. Alexander Wissner-Gross is a computer scientist and founder of Reified – My companies: Apply to Dave's and my new fund:https://qr.diamandis.com/linkventureslanding Go to Blitzy to book a free demo and start building today: https://qr.diamandis.com/blitzy Grab dinner with MOONSHOT listeners: https://moonshots.dnnr.io/ _ Connect with Peter: X Instagram Connect with Emad: Read Emad's Book X Learn about Intelligent Internet Connect with Dave: X LinkedIn Connect with Salim: X Join Salim's Workshop to build your ExO Connect with Alex Website LinkedIn X Email Listen to MOONSHOTS: Apple YouTube – *Recorded on December 18th, 2025 *The views expressed by me and all guests are personal opinions and do not constitute Financial, Medical, or Legal advice. Learn more about your ad choices. Visit megaphone.fm/adchoices
Caregiving touches every family, yet caregivers often remain unseen. In this conversation from HLTH in Las Vegas, StartUp Health co-founder Unity Stoakes sits down with Richard Lui, award-winning journalist, filmmaker, and Chief Impact Officer for StartUp Health's Caregiving Moonshot. Richard shares the personal story that sparked his mission to transform the global care economy and explains why caregiving is one of the largest and most meaningful opportunities in health. Together, they explore how innovators, investors, and leaders can build solutions that support the people holding our health system together.

In this episode:
• Why caregiving must become a core pillar of every product and service in health
• What Richard learned caring for his father with Alzheimer's
• How storytelling and culture change are fueling new momentum
• Where founders can find opportunity in the rapidly growing care economy
• Why community and staying power are essential for caregiving innovators

Join the Caregiving Moonshot: If you are building solutions that support caregivers or strengthen the care economy, learn how to join our global community of Health Transformers.

Meet in Person: Join us at Apollo House at JPM Healthcare Week in January.

Are you ready to tell YOUR story? Members of our Health Moonshot Communities are leading startups with breakthrough technology-driven solutions for the world's biggest health challenges. Exposure in StartUp Health Media to our global audience of investors and partners – including our podcast, newsletters, magazine, and YouTube channel – is a benefit of our Health Moonshot PRO Membership. To schedule a call and see if you qualify to join and increase brand awareness through our multi-media storytelling efforts, submit our three-minute application. If you're mission-driven, collaborative, and ready to contribute as much as you gain, you might be the perfect fit. » Learn more and apply today.

Want more content like this?
Sign up for StartUp Health Insider™ to get funding insights, news, and special updates delivered to your inbox.
Get access to metatrends 10+ years before anyone else - https://qr.diamandis.com/metatrends Mustafa Suleyman is the CEO of Microsoft AI Dave Blundin is the founder & GP of Link Ventures Dr. Alexander Wissner-Gross is a computer scientist and founder of Reified – My companies: Apply to Dave's and my new fund: https://qr.diamandis.com/linkventureslanding Go to Blitzy to book a free demo and start building today: https://qr.diamandis.com/blitzy Grab dinner with MOONSHOT listeners: https://moonshots.dnnr.io/ _ Connect with Peter: X Instagram Connect with Dave: X LinkedIn Connect with Alex Website LinkedIn X Email Connect with Mustafa X LinkedIn Listen to MOONSHOTS: Apple YouTube – *Recorded on December 5th, 2025 *The views expressed by me and all guests are personal opinions and do not constitute Financial, Medical, or Legal advice. Learn more about your ad choices. Visit megaphone.fm/adchoices
Get access to metatrends 10+ years before anyone else - https://qr.diamandis.com/metatrends Salim Ismail is the founder of OpenExO Dave Blundin is the founder & GP of Link Ventures Dr. Alexander Wissner-Gross is a computer scientist and founder of Reified – My companies: Apply to Dave's and my new fund:https://qr.diamandis.com/linkventureslanding Go to Blitzy to book a free demo and start building today: https://qr.diamandis.com/blitzy Grab dinner with MOONSHOT listeners: https://moonshots.dnnr.io/ _ Connect with Peter: X Instagram Connect with Dave: X LinkedIn Connect with Salim: X Join Salim's Workshop to build your ExO Connect with Alex Website LinkedIn X Email Listen to MOONSHOTS: Apple YouTube – *Recorded on December 12th, 2025 *The views expressed by me and all guests are personal opinions and do not constitute Financial, Medical, or Legal advice. Learn more about your ad choices. Visit megaphone.fm/adchoices
Get access to metatrends 10+ years before anyone else - https://qr.diamandis.com/metatrends Emad Mostaque is the founder of Intelligent Internet ( https://www.ii.inc ) Read Emad's Book: https://thelasteconomy.com Salim Ismail is the founder of OpenExO Dave Blundin is the founder & GP of Link Ventures Dr. Alexander Wissner-Gross is a computer scientist and founder of Reified – My companies: Apply to Dave's and my new fund:https://qr.diamandis.com/linkventureslanding Go to Blitzy to book a free demo and start building today: https://qr.diamandis.com/blitzy Grab dinner with MOONSHOT listeners: https://moonshots.dnnr.io/ _ Connect with Peter: X Instagram Connect with Emad: Read Emad's Book X Learn about Intelligent Internet Connect with Dave: X LinkedIn Connect with Salim: X Join Salim's Workshop to build your ExO Connect with Alex Website LinkedIn X Email Listen to MOONSHOTS: Apple YouTube – *Recorded on December 6th, 2025 *The views expressed by me and all guests are personal opinions and do not constitute Financial, Medical, or Legal advice. Learn more about your ad choices. Visit megaphone.fm/adchoices
Pre order my new book: diamandis.com/book If you want us to build a MOONSHOT gathering, email my team: moonshots@diamandis.com Get access to metatrends 10+ years before anyone else - https://qr.diamandis.com/metatrends Naveen Jain is the founder & CEO at Viome Life Sciences Salim Ismail is the founder of OpenExO Dr. Alexander Wissner-Gross is a computer scientist and founder of Reified – My companies: Apply to Dave's and my new fund:https://qr.diamandis.com/linkventureslanding Go to Blitzy to book a free demo and start building today: https://qr.diamandis.com/blitzy Grab dinner with MOONSHOT listeners: https://moonshots.dnnr.io/ _ Connect with Peter: X Instagram Connect with Salim: X Join Salim's Workshop to build your ExO Connect with Alex Website LinkedIn X Email Connect with Naveen X Linkedin Listen to MOONSHOTS: Apple YouTube – *Recorded on December 2nd, 2025 *The views expressed by me and all guests are personal opinions and do not constitute Financial, Medical, or Legal advice. Learn more about your ad choices. Visit megaphone.fm/adchoices