Editor's note: CuspAI raised a $100m Series A in September and is rumored to have reached a unicorn valuation. They have all-star advisors from Geoff Hinton to Yann LeCun and a team of deep domain experts to tackle this next frontier in AI applications.

In this episode, Max Welling traces the thread connecting quantum gravity, equivariant neural networks, diffusion models, and climate-focused materials discovery (yes, there is one!).

We begin with a provocative framing: experiments as computation. Welling describes the idea of a "physics processing unit": a world in which digital models and physical experiments work together, with nature itself acting as a kind of processor. It's a grounded but ambitious vision of AI for science: not replacing chemists, but accelerating them.

Along the way, we discuss:

* Why symmetry and equivariance matter in deep learning
* The tradeoff between scale and inductive bias
* The deep mathematical links between diffusion models and stochastic thermodynamics
* Why materials, not software, may be the real bottleneck for AI and the energy transition
* What it actually takes to build an AI-driven materials platform

Max reflects on moving from curiosity-driven theoretical physics (including work with Gerard 't Hooft) toward impact-driven research in climate and energy.
The result is a conversation about convergence: physics and machine learning, digital models and laboratory experiments, long-term ambition and incremental progress.

Full Video Episode

Timestamps

* 00:00:00 – The Physics Processing Unit (PPU): Nature as the Ultimate Computer
* Max introduces the idea of a Physics Processing Unit — using real-world experiments as computation.
* 00:00:44 – From Quantum Gravity to AI for Materials
* Brandon frames Max's career arc: VAE pioneer → equivariant GNNs → materials startup founder.
* 00:01:34 – Curiosity vs Impact: How His Motivation Evolved
* Max explains the shift from pure theoretical curiosity to climate-driven impact.
* 00:02:43 – Why CuspAI Exists: Technology as Climate Strategy
* Politics struggles; technology scales. Why materials innovation became the focus.
* 00:03:39 – The Thread: Physics → Symmetry → Machine Learning
* How gauge symmetry, group theory, and relativity informed equivariant neural networks.
* 00:06:52 – AI for Science Is Exploding (Not Emerging)
* The funding surge and why AI for science feels like a new industrial era.
* 00:07:53 – Why Now? The Two Catalysts Behind AI for Science
* Protein folding, ML force fields, and the tipping-point moment.
* 00:10:12 – How Engineers Can Enter AI for Science
* Practical pathways: curricula, workshops, cross-disciplinary training.
* 00:11:28 – Why Materials Matter More Than Software
* The argument that everything—LLMs included—rests on materials innovation.
* 00:13:02 – Materials as a Search Engine
* The vision: automated exploration of chemical space, like querying Google.
* 00:14:48 – Inside CuspAI: The Platform Architecture
* Generative models + multi-scale digital twin + experiment loop.
* 00:21:17 – Automating Chemistry: Human-in-the-Loop First
* Start manual → modular tools → agents → increasing autonomy.
* 00:25:04 – Moonshots vs Incremental Wins
* Balancing lighthouse materials with paid partnerships.
* 00:26:22 – Why Breakthroughs Will Still Require Humans
* Automation is vertical-specific and iterative.
* 00:29:01 – What Is Equivariance (In Plain English)?
* Symmetry in neural networks, explained with the bottle example.
* 00:30:01 – Why Not Just Use Data Augmentation?
* The optimization trade-off between inductive bias and data scale.
* 00:31:55 – Generative AI Meets Stochastic Thermodynamics
* His upcoming book and the unification of diffusion models and physics.
* 00:33:44 – When the Book Drops (ICLR?)

Transcript

Max: I want to think of it as what I would call a physics processing unit, a PPU, right? You have digital processing units and then you have physics processing units. So it's basically nature doing computations for you. It's the fastest computer known, maybe the fastest possible even. It's a bit hard to program, because you have to do all these experiments; those are quite bulky, it's like a very large thing you have to do. But in a way it is a computation, and that's the way I want to see it. You can do computations in a data center, and then you can ask nature to do some computations. Your interface with nature is a bit more complicated.
But then these things will have to seamlessly work together to get to a new material that you're interested in.

[01:00:44:14 - 01:01:34:08] Brandon: Yeah, it's a pleasure to have Max Welling as a guest today. Max has done so much over his career that I've been so excited about. If you're in the deep learning community, you probably know Max for his work on variational autoencoders, which have truly stood the test of time. If you're a scientist, you probably know him for his pioneering work on graph neural networks and equivariance. And if you're in materials science, you probably know him for his new startup, CuspAI. Max has a long history of working on lots of cool problems. You started in quantum gravity, which is, I think, very different from all of these other things you worked on. So the first question, for AI engineers and for scientists: what is the thread in how you think about problems? What is the thread in the type of things which excite you? And how do you decide what is the next big thing you want to work on?

[01:01:34:08 - 01:02:41:13] Max: So it has actually evolved a lot. In my young days, let's say, I would just follow what I would find super interesting. I have kind of this sensor, which I think many people have but maybe don't really use very much: you get this feeling of being very excited about some problem. It could be: what's inside of a black hole, or what's at the boundary of the universe, or what is quantum mechanics actually all about? And so I followed that basically throughout my career. But I have to say that as you get older, this changes a little bit, in the sense that there's a new dimension coming into it, which is impact. Going into two-dimensional quantum gravity, you're pretty much guaranteed there's going to be no impact from what you do. Maybe a few papers, but not in this world, at this energy scale.
As I get closer to retirement, which is fortunately still 10 years away or so, I do want to make a positive impact in the world. And I got pretty worried about climate change.

[01:02:43:15 - 01:03:19:11] Max: I think politics seems to have a hard time solving it, especially these days. And so I thought: better to work on it from the technology side. And that's why we started CuspAI. But there are also a lot of really interesting science problems in materials science. So it's kind of combining the impact you can make with the interesting science. It's these two dimensions: working on things where you feel there's something very deep going on, and on the other hand, trying to build tools that can actually make a real impact in the world.

[01:03:19:11 - 01:03:39:23] RJ: So the thread: when I look back at the different things that you worked on, some of them seem pretty connected, like the physics to equivariance, and graph neural networks, maybe. And that seems to be somewhat related to CuspAI. Do you have a thread through there?

[01:03:39:23 - 01:06:52:16] Max: Yeah. So physics is the thread. Having spent a lot of time in theoretical physics, I think there are, first, very fundamental and exciting questions, things that haven't actually been figured out in quantum gravity. That is really the frontier. There are also a lot of mathematical tools that you can use, right? In particle physics, for instance, but also in general relativity, symmetries play an enormously important role. And this goes all the way to gauge symmetries as well. And so applying these kinds of symmetries to machine learning was something I thought of as a very deep and interesting mathematical problem.
I did this with Taco Cohen, and Taco was the main driver behind this, going all the way from simple rotational symmetries to gauge symmetries on spheres and things like that. And Maurice Weiler, who's also here, was a very good PhD student with me; he wrote an entire book, which I can really recommend, about the role of symmetries in AI and machine learning. So I find this a very deep and interesting problem. More recently I've taken a somewhat different path, which is the relationship between diffusion models and a field called stochastic thermodynamics. This is basically thermodynamics, which is a theory of equilibrium, but then formulated for out-of-equilibrium systems. And it turns out that the mathematics we use for diffusion models, but also for reinforcement learning, for Schrödinger bridges, for MCMC sampling, is the same mathematics as this physical theory of non-equilibrium systems. And that got me very excited. I taught a course in Muizenberg in South Africa, close to Cape Town, at the African Institute for Mathematical Sciences (AIMS), and I turned that into a book. Two years later, the book was finished; I've sent it to the publisher. It's about the deep relationship between free energy, diffusion models (basically generative AI), and stochastic thermodynamics. So it's always some kind of... I find physics very deep. I also think a lot about quantum mechanics, and it's a completely weird theory that actually nobody really understands. And there's a very interesting story, which is maybe good to tell, to connect my PhD back to where I am now. I did my PhD with a Nobel laureate, Gerard 't Hooft. He is the most brilliant man I've ever met; he was never wrong about anything as long as I've known him.
And now he says quantum mechanics is wrong, and he has a new theory of quantum mechanics. Nobody understands what he's saying, even though what he's writing down is not mathematically very complex, but he's trying to address this understandability of quantum mechanics head on. I find it very courageous, and I'm completely fascinated by it. So I'm also trying to think about: okay, can I actually understand quantum mechanics in a more mundane way, without all the weird multiverses and collapses and things like that? So physics has always been the thread, and I'm trying to apply the physics to the machine learning to build better algorithms.

[01:06:52:16 - 01:07:05:15] Brandon: So you are still very involved in understanding physics and the world, and also in applications to machine learning, introducing new formalisms. That's really cool.

[01:07:05:15 - 01:07:18:02] Max: Yes, I would say I'm not contributing much to physics, but I'm contributing to the interface between physics and AI. And that's called AI for science, or science for AI. It's actually a new discipline that's emerging.

[01:07:18:02 - 01:07:18:19] Speaker 5: Yeah.

[01:07:18:19 - 01:07:45:14] Max: And it's not just emerging, it's exploding, I would say. That's the better term, because you go from investments in the hundreds of millions to now in the billions. There's now actually a startup by Jeff Bezos with a $6.2 billion seed round. Right. Insane. I guess it's the largest startup round ever, I think. And that's in this field, AI for science. It tells you something, that we may be creating a new bubble here.

[01:07:46:15 - 01:07:53:28] Brandon: So why do you think that is? What has changed that has motivated people to start working on AI-for-science-type problems?

[01:07:53:28 - 01:08:49:17] Max: So there are two reasons, actually.
One is that people have been applying the new tools from AI to the sciences, which is quite natural. There are, I think, two big examples: protein folding is a big one, and the other is machine learning force fields, something called machine-learned interatomic potentials. Both of them have been very successful, and both also had something to do with symmetries, which is kind of cool. People in AI saw an opportunity to apply the tools they had developed beyond advertisement placement, right, or multimedia applications, to something that could actually make a very positive impact on society: health, drug development, materials for the energy transition, carbon capture. These are all really cool, impactful applications.

[01:08:50:19 - 01:09:42:14] Max: Beyond that, the science itself is also very interesting. I would say the fact that these two fields are coming together, and that we're now at the point where we can actually model these things effectively and move the needle on some of these scientific methodologies, is also a very unique moment. People recognize that, okay, now we're at the cusp of something new, which is what the company is named after. We're at the cusp of something new. And of course that always creates a lot of energy. It's a virgin field, a green field; nobody's been there. I can rush in and I can start harvesting there, right? And I think that's also what's causing a lot of enthusiasm in the field.

[01:09:42:14 - 01:10:12:18] RJ: If you're an AI engineer (basically, the people that listen to this podcast will be in the field), then maybe you don't have a strong science background, but are excited.
I would say most AI practitioners, ML engineers or scientists, would consider themselves scientists, and they have some background: a little bit of physics, a little bit in college, maybe even graduate school, whether they've been working for a while or are starting out. How does somebody who is not a scientist on a day-to-day basis get involved?

[01:10:12:18 - 01:10:14:28] Max: Well, they can read my book once it's out.

[01:10:16:07 - 01:11:05:24] Max: There is more: we should create curricula that are on this interface. Some universities already have actual courses you can take, and maybe online courses you can take. These workshops where we are now are actually very good as well. And we should probably have more tutorials before the workshops start. I've actually proposed this at some point: maybe first have an hour of tutorial, so that people new to the field can get in. There's a lot out there. Much of it is of course inaccessible, but I would say we will create many more books and other content that is more accessible, including this podcast, I would say. So I think it will come. And these days you can watch videos and things; there's a huge amount of content you can go and see.

[01:11:05:24 - 01:11:28:28] Brandon: So maybe a follow-up to that. That's how people learn and get involved, but why should they get involved? A lot of our audience will be interested in AI engineering, but they may be looking for bigger impacts in the world. What opportunities does AI for science provide them to make an impact, to change the world, that working in the world of pure bits would not?

[01:11:28:28 - 01:11:40:06] Max: So my view is that underlying almost everything is a material. We are focusing a lot on LLMs now, which is kind of the software layer.

[01:11:41:06 - 01:12:43:20] Max: I would say, if you think very hard, underlying everything is a material. Underlying an LLM is a GPU on which it runs, and underlying that GPU is a wafer on which we have to deposit materials. You have to put materials down on a wafer and shine EUV light on it in order to etch the structures in. But that's now an actual materials problem, because more or less we've reached the limits of scaling things down, and now we are trying to improve further with new materials. So that's a fundamental materials problem. We need to get through the energy transition fast if we don't want to mess up this world. And so there are, for instance, batteries; that's a complete materials problem. There are fuel cells.

[01:12:44:23 - 01:13:01:16] Max: There are solar panels. They can now make solar panels with new perovskite layers on top of the silicon layers that can capture, theoretically, up to 50% of the light, where now we're at, I don't know, maybe 22% or something. These are huge changes, all by materials innovation.

[01:13:02:21 - 01:13:47:15] Max: And yeah, I think wherever you go, I can probably dig deep enough and then tell you: well, actually, the very foundation of what you're doing is a materials problem. And so I think it's just very nice to work on this very foundation. And maybe this is also something that's happening now: we can start to search through this materials space. This has never been the case, right? The normal way of working for scientists is: you read papers, you come up with a hypothesis, you do an experiment, and you learn, et cetera. That's a very slow process. Now we can treat this as a search engine.
Like we search the internet, we now search the space of all possible molecules: not just the ones that people have made, or that exist in the universe, but all of them.

[01:13:48:21 - 01:14:42:01] Max: And we can make this fully automated. That's the hope, right? It becomes a tool where you type what you want, something starts spinning, some experiments get going, and then out comes a list of materials. You look at it and say, maybe not, and then you refine your query a little bit. And you kind of do research with this search engine, where a huge amount of computation and experimentation is happening somewhere far away, in some lab or some data center or something like this. I find this a very promising view of how we can build a much better materials layer underneath almost everything. And also more sustainable materials. Our plastics are polluting the planet. What if you come up with a plastic that destroys itself after, I don't know, a few weeks, and actually becomes a fertilizer? These are things that are not impossible at all. These things can be done, and we should do it.

[01:14:42:01 - 01:14:47:23] RJ: Can you tell us a little bit, just generally, about CuspAI? And then I have a ton of questions.

[01:14:47:23 - 01:14:48:15] Speaker 5: Yeah.

[01:14:48:15 - 01:17:49:10] Max: So CuspAI started about 20 months ago, because I was worried, and am still worried, about climate change. I realized that in order to stay within two degrees, let's say, we would not only have to reduce our emissions to zero by 2050, but then spend another half century, or even a century, removing carbon dioxide from the atmosphere: not by reducing emissions, but actually removing it, at a rate that's about half the rate at which we now emit it. And that is an unsolved problem. But if we don't solve it, two degrees is not going to happen, right?
It's going to be much more. And I don't think people quite understand how bad that can be; four degrees is very bad. So this technology needs to be developed. This was my and my co-founder Chad Edwards' motivation to start this startup. And also because we saw the technology was ready, which is also very good; the time is right to do it. In the meanwhile, we've grown to about 40 people. We've collected about 130 million in investment into the company, which for a European company is quite a lot. It's interesting that right after that, other startups got even more; that tells you how fast this is growing. We've built the platform, of course, but it covers a series of material classes, and it needs to be constantly expanded to new material classes. And it can be made more automated, because as LLMs come in, the whole thing gets more and more automated. And now we're moving to high-throughput experimentation: connecting the actual platform, which is computational, to the experiments, so that you also get fast feedback from experiments. I don't want to think of experiments as something you do at the end, although that's what we've been doing so far. I want to think of it as what I would call a physics processing unit, a PPU, right? You have digital processing units and then you have physics processing units. So it's basically nature doing computations for you. It's the fastest computer known, maybe the fastest possible even. It's a bit hard to program, because you have to do all these experiments; those are quite bulky, it's like a very large thing you have to do. But in a way, it is a computation, and that's the way I want to see it. You can do computations in a data center, and then you can ask nature to do some computations.
Your interface with nature is a bit more complicated. But then these things will have to seamlessly work together to get to a new material that you're interested in. And that's the vision we have. We don't say "superintelligence," because I don't quite know what it means and I don't want to oversell it. But I do want to automate this process and put a very powerful tool in the hands of the chemists and the materials scientists.

[01:17:49:10 - 01:18:01:02] Brandon: That actually brings up a question I wanted to ask you. Can you talk about your platform, to whatever degree you can: explain how it works, and what your thought process was in developing it?

[01:18:01:02 - 01:20:47:22] Max: Yeah, it's not rocket science, I would say; not rocket science in the sense of the design. The design that I wrote down at the very beginning is still more or less the design, although you add things. I wasn't thinking very much about multi-scale models, and it became clear that multi-scale is actually very important. In the beginning I wasn't thinking very much about self-driving labs, but now I think we're at the stage where we should be adding that. So there are bits and details that we're adding, but more or less it's what you see in the slide decks here as well: there is a generative component that you have to train to generate candidates, and then there is a digital twin, a multi-scale, multi-fidelity digital twin, where you walk through the steps of a ladder. You do the cheap things first, you weed out everything that's obviously not useful, and then you go to more and more expensive things later. And so you narrow things down to a small number; those go into an experiment, you do the experiment, get feedback, et cetera. Now, what's also been added more recently are the more agentic parts.
We have agents that search the chemical literature and come up with chemical suggestions for doing experiments. We have agents which autonomously orchestrate all of the computations and the experiments that need to be done. They're in various stages of maturity, and they can be continuously improved, I would say. So the design of that thing is not surprising; there's no rocket science there. What is surprisingly hard is to actually build it. That's where the moat is: in the data that you can get your hands on, and in actually building the platform. And there are two people in particular I want to call out: Felix Hunker, who is building the scientific part of the platform, and Sandra de Maria, who is building the MLOps part of the platform. And recently we also added Aron Walsh to our team, a very accomplished scientist from Imperial College; we're very happy about that. He's going to be chief science officer. And we also have a partnerships team that seeks out customers, because this is one thing I find very important: in practice, it's so complex to actually bring a material to the real world that you must do this in collaboration with the domain experts, which are typically the companies. So we only start to invest in a direction if we find a good industrial partner to go on that journey with us.

[01:20:47:22 - 01:20:55:12] Brandon: Makes a lot of sense.
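The funnel Max describes (generate candidates, screen them with cheap models first, spend expensive compute only on the survivors, and send a small shortlist to experiment) can be sketched in a few lines. This is a hypothetical illustration of the general pattern, not CuspAI's actual platform; every function name, model, and threshold below is a placeholder:

```python
import random

# A minimal sketch of a multi-fidelity screening funnel: cheap checks
# first, expensive scoring on the survivors, and only a handful of
# candidates sent to the lab. All names and thresholds are illustrative.

def generate_candidates(n, rng):
    """Stand-in for a generative model proposing candidate materials."""
    return [{"id": i, "feature": rng.random()} for i in range(n)]

def cheap_filter(c):
    """Fast, low-fidelity check (e.g. a learned stability classifier)."""
    return c["feature"] > 0.3  # weed out the obviously not-useful

def expensive_score(c):
    """Slow, high-fidelity evaluation (e.g. a DFT-level calculation)."""
    return c["feature"] ** 2  # placeholder for a real simulation

def funnel(n_candidates=1000, top_k=5, seed=0):
    rng = random.Random(seed)
    candidates = generate_candidates(n_candidates, rng)
    survivors = [c for c in candidates if cheap_filter(c)]
    shortlist = sorted(survivors, key=expensive_score, reverse=True)[:top_k]
    return shortlist  # these few go to the experiment for real feedback

print(len(funnel()))  # 5
```

The point of the ladder is economics: the cheap filter runs on every candidate, while the expensive evaluator (and, ultimately, the physical experiment) only ever sees the narrow end of the funnel.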
Over the evolution of the platform, what did you find about human intervention?

[01:20:56:18 - 01:21:17:01] Brandon: I guess you could imagine two directions when you start out. One: make everything purely automated, agentic, and so on, and then later find that you need more human input and feedback at different steps. Or maybe you started out with human feedback at lots of steps and then figured out ways to remove it?

[01:21:17:01 - 01:22:39:18] Max: It's the second one. So you build tools; it's much more modular than you think. It's like: we need these tools for this application, and those tools for that one. So you build all these tools, and then you go through a workflow, in the beginning just manually: first this tool, then run that one, then this other one. You put them in a workflow, and then you figure out: oh, actually, this porous material that we are trying to make collapses if you shake it a bit. Okay, then you add a new tool that tests for stability, right? And so there are more and more tools. And then you build the agent, which could be a Bayesian optimizer, or it could be an actual LLM, maybe trained to be a good chemist, that will then start to use all these tools in the right way, in the right order. But in the beginning, it's you as a chemist putting the workflow together. And then you think about: okay, how am I going to automate this? One very easy question you can ask yourself: every time somebody who is not a super expert in DFT wants to do a calculation, they have to go to somebody who knows DFT.
So could you start to automate that away? Make it so user-friendly that you actually run the right DFT for the right problem, for the right length of time, and you can actually assess whether it's a good outcome, et cetera. So you start to automate small pieces and bigger pieces, and in the end the whole thing is automated.

[01:22:39:18 - 01:22:53:25] Brandon: So your philosophy is that you want to provide a set of specific tools that make the scientists making decisions better informed, rather than trying to create a fully automated process.

[01:22:53:25 - 01:23:22:01] Max: It's sort of the same as what you're saying, because yes, we want to automate, but we don't see something very soon where the chemist, the domain expert, is out of the loop. But it's a retreat, right? First, you need an expert to tell you precisely how to set the parameters of the DFT calculation. Okay, maybe we can take that out; we can automate that. And so, increasingly, more of these things are going to be removed.

[01:23:22:01 - 01:23:22:19] Speaker 5: Yeah.

[01:23:22:19 - 01:24:33:25] Max: In the end, the vision is that it will be a search engine where a chemist will type things and get candidates, but the chemist will still decide what is a good material and what is not a good material out of that list. The vision of a completely dark lab, where you close the door and just say, "find something interesting," and it figures out what's interesting and comes back with "I found this new material that does blah, blah, blah": that's not the vision I have. Not for a long time. For me, it's really about empowering the domain experts sitting in the companies and in universities to be much faster in developing their materials.
And I should say, it's also good to be a little humble at times, because it is very complicated to make a material and to bring it into the real world. There are people that have been doing this their entire lives. I wonder if they scratch their heads and say: well, how are you going to completely automate that away in the next five years? I don't think that's going to happen at all.

[01:24:35:01 - 01:24:39:24] Max: So to me, it's an increasingly powerful tool in the hands of the chemists.

[01:24:39:24 - 01:25:04:02] RJ: I have a question. You've talked before about getting people interested based on having a big breakthrough in materials versus incremental change. I'm curious what you think about the platform you have now and what you're stepping towards. Are you chasing the big change, or is this incremental? They're not mutually exclusive, obviously, but what do you think about that?

[01:25:04:02 - 01:26:04:27] Max: We follow a mixed strategy. So we are definitely going after a big material. Again, we do this with a partner. I'm not going to disclose precisely what it is, but we have our own long-term goal. You could call it a lighthouse, or a moonshot, or whatever, but it is going to be a really impactful material that we want to develop as a proof point: that it can be done, that it will make it into the real world, and that AI was essential in actually making it happen. At the same time, we're also quite happy to work with companies that have more modest goals. I would say one kind is a very deep partnership, where you go on a journey with a company, and that's a long-term commitment together. And the other is someone who says: I need a force field. Can you help me train this force field, and then maybe analyze this particular problem for me? And I'll pay you a bunch of money for that, and maybe after that we'll see.
And that's fine too. But we prefer the deep partnerships, where we can really change something for the good.

[01:26:04:27 - 01:26:22:02] RJ: Yeah. And do you feel like, from a platform standpoint, you're ready for that? And, again, not asking you to disclose proprietary secret sauce, but what are the things, generally speaking, that need to happen from where we are now to get those big breakthroughs?

[01:26:22:02 - 01:28:40:01] Max: What I find interesting about this field is that every time you build something, it's actually immediately useful. Unlike quantum computing or nuclear fusion, where you work for 20, 30, 40 years and nothing, nothing, nothing, and then it has to happen, and when it happens, it's huge. It's quite different here, because every time you introduce something, you go to a customer and you ask: what do you need? Say we work on a problem like water filtration: we want to remove PFAS from water. We do this with a company, Kemira; they are a deep partner for us, and we're on a journey together. I think the breakthrough will happen with a lot of human-in-the-loop, because the chemists have a whole lot more knowledge of their field, and it's us who will help them with training new models and methods. And in that interface, those interactions, something beautiful will happen. That will have to happen first, before this field will really take off, I think; in the sense that it's not a bubble, let's put it that way, so that people see that what's happening is actually real. So in the beginning, it will be with a lot of humans in the loop, I would say, and I would hope we have this breakthrough material before everything is completely automated, because that will take a while. And also, it is very vertical-specific.
So completely automating something for problem A, you can probably achieve it, but then you'll sort of have to start over again for problem B, because your experimental setup looks very different, the machines you characterize your materials with look very different, and even the models in your platform will have to be retrained and fine-tuned for the new class. So every time, you have a lot of learnings to transfer, but the problems are actually different. And so, yes, I would want that breakthrough material before it's completely automated, which I think is kind of a long-term vision. And I would say every time you move to something new, you'll have to start retraining, and humans will have to come in again and say: okay, what does this problem look like? And then point the machine in the new direction and use it again.

[01:28:40:01 - 01:28:47:17] RJ: For the non-scientists among us, me included (though I'm a bit of a scientist), there's a lot of terminology. You mentioned DFT,

[01:28:49:00 - 01:29:01:11] RJ: and equivariance we've talked about. Can you explain, in engineering terms, what is equivariance?

[01:29:01:11 - 01:29:55:01] Max: So equivariance is the infusion of symmetry into neural networks. If I build a neural network that, let's say, needs to recognize this bottle, and then I rotate the bottle, it will actually have to start all over, because it has no idea that the input representing a rotated bottle is, in fact, a rotated bottle. It just doesn't understand that. If you build equivariance in, then basically once you've trained it in one orientation, it will understand it in any other orientation. So that means you need a lot less data to train these models. And these are constraints on the weights of the model.
So basically you have to constrain the weights such that it understands this, and you can hard-code it in. And the symmetry groups can be, you know, translations, rotations, but also permutations. In a graph neural network it's permutations, and physics, of course, has many more of these groups.

[01:29:55:01 - 01:30:01:08] RJ: To play devil's advocate: why not just use data augmentation, where your bottle appears in all the different orientations?

[01:30:01:08 - 01:30:58:23] Max: That's an option, it's just not exact. And why would you go through the work of doing all that? You would really need an infinite number of augmentations to get it completely right, where you can also just hard-code it in. Now, I have to say, sometimes data augmentation actually works even better than hard-coding the equivariance in. And this has something to do with the fact that if you constrain the weights before the optimization starts, the optimization surface, or objective, becomes more complicated, and so it's harder to find good minima. So there is also a complicated interplay, I think, between the optimization process and these constraints you put in your network. And so you'll hear contradicting claims in this field: some people say that for certain applications it just works better than not doing it, and sometimes you hear other people say that if you have a lot of data and can do data augmentation, then it's actually easier to optimize, and it works better than putting the equivariance in.

[01:30:58:23 - 01:31:07:16] Brandon: Do you think there's kind of a bitter lesson for mathematically founded models and strategies for doing deep learning?

[01:31:07:16 - 01:31:46:06] Max: Yeah, ultimately it's a trade-off between data and inductive bias. So if your inductive bias is not perfectly correct, you have to be careful, because you put a ceiling on what you can do.
But if you know the symmetry is there, it's hard to imagine there isn't a way to actually leverage it. But yeah, there is a bitter lesson, and one of the bitter lessons is that you should always make sure your architecture scales, unless you have a tiny data set, in which case it doesn't matter. The same bitter lessons that you can draw in LLM space are eventually going to be true in this space as well, I think.

[01:31:47:10 - 01:31:55:01] RJ: Can you talk a little bit about your upcoming book and tell the listeners what's exciting about it? Yeah, I should read it.

[01:31:55:01 - 01:33:42:20] Max: So this book is called Generative AI and Stochastic Thermodynamics. It basically lays bare the fact that the mathematics that goes into generative AI, which is the technology to generate images and videos, and the mathematics of non-equilibrium statistical mechanics, which describes systems of molecules that are just moving around and relaxing to the ground state, or that you can control to be in a certain state, are actually identical. And that's fascinating. In fact, what's interesting is that Geoff Hinton and Radford Neal already wrote down the variational free energy for machine learning a long time ago. And there's also Carl Friston's work on the free energy principle and active inference. But now we've related it to this very new field in physics, which is called stochastic thermodynamics, or non-equilibrium thermodynamics, which has its own very interesting theorems, like fluctuation theorems, which we don't typically talk about but can learn a lot from. And I think it can now start to cross-fertilize.
When we see that these things are actually the same, we can, like we did for symmetries, look at this new theory that's out there, developed by these very smart physicists, and say: okay, what can we take from here that will make our algorithms better? At the same time, we can use our models to help the scientists do better science. And so it becomes a beautiful cross-fertilization between these two fields. The book is rather technical, I would say. It takes all sorts of things that have been done in stochastic thermodynamics, and all sorts of models from the machine learning literature, and it basically equates them to each other. And I think, hopefully, that sense of unification will be revealing to people.

[01:33:42:20 - 01:33:44:05] RJ: Wait, and when is it out?

[01:33:44:05 - 01:33:56:09] Max: Well, it depends on the publisher now. But I hope in April; I'm going to give a keynote at ICLR, and it would be very nice if I have this book in my hand. But you know, it's hard to control these kinds of timelines.

[01:33:56:09 - 01:33:58:19] RJ: Yeah, I'm looking forward to it. Great.

[01:33:58:19 - 01:33:59:25] Max: Thank you very much.
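Max's definition of equivariance in the conversation above can be made concrete with a tiny sketch (an illustrative toy, not CuspAI code): a permutation-equivariant layer of the kind used in graph neural networks, where relabeling the nodes before the layer gives exactly the same result as relabeling them after.

```python
import numpy as np

def equivariant_layer(x, w_self=0.7, w_mean=0.3):
    """A minimal permutation-equivariant layer: each node mixes its
    own feature with the mean over all nodes. Both operations commute
    with any relabeling (permutation) of the nodes."""
    return w_self * x + w_mean * x.mean(axis=0, keepdims=True)

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))   # 5 nodes, 3 features each
perm = rng.permutation(5)     # an arbitrary relabeling of the nodes

# Equivariance: permute-then-apply equals apply-then-permute,
# so the layer never has to re-learn relabeled inputs.
out_a = equivariant_layer(x[perm])
out_b = equivariant_layer(x)[perm]
assert np.allclose(out_a, out_b)
```

The alternative Max mentions, data augmentation, would instead train on many permuted copies of `x` and only approximate this property; the weight constraint makes it exact by construction.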
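The shared mathematics between generative AI and non-equilibrium statistical mechanics that Max describes can also be seen in miniature. Overdamped Langevin dynamics, dx = -∇U(x) dt + √2 dW, describes a molecular system relaxing to its Boltzmann distribution exp(-U(x)); with a learned score function in place of the known -∇U, the same update rule is the sampler inside diffusion models. A minimal sketch (illustrative, not taken from the book), assuming a quadratic energy U(x) = x²/2 so that the stationary distribution is a standard Gaussian:

```python
import numpy as np

def grad_U(x):
    # Energy U(x) = x^2 / 2, so -grad_U drifts samples toward 0
    # and the stationary (Boltzmann) density is N(0, 1).
    return x

rng = np.random.default_rng(0)
x = rng.normal(scale=5.0, size=50_000)  # start far from equilibrium
dt = 0.01
for _ in range(1_000):
    # Euler-Maruyama step of dx = -grad_U(x) dt + sqrt(2) dW
    x += -grad_U(x) * dt + np.sqrt(2 * dt) * rng.normal(size=x.size)

# After relaxing, the samples follow the stationary N(0, 1)
assert abs(float(x.mean())) < 0.05
assert abs(float(x.std()) - 1.0) < 0.05
```

A diffusion model replaces `grad_U` with a neural network trained to estimate the score of the data distribution, which is roughly the correspondence Max points to.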
Our analyst on duty already had his nose pressed against the shop window for a few Klarna shares when the stock was trading at $30. Like the boy in the Dutch rhyme who saw plums hanging, oh, as big as apples! But he can rub his hands now, because the bottom was still nowhere in sight for Klarna: after its September IPO, the Swedish fintech lost 70% of its market value. Deserved, or is Klarna the opportunity of the century? And Trump is plunging investors worldwide into uncertainty again with his tariff fuss. Good news for Asia, where the new tariffs come out lower than the previous ones; in the UK, by contrast, they will be less thrilled. What exactly happened this weekend, and what does it mean for you? Plenty to unpack there as well. Also in this show: Box-3; fodder for the Washington Post's editors-in-chief; the Netflix board member who, says Trump, must be fired IMMEDIATELY, right in the middle of the takeover battle around Warner Bros; McDonald's is the new gold; why the South Koreans love leverage; the VEB wants the AFM to investigate insider trading in InPost shares. Guest: Justin Blekemolen of online broker Lynx. BNR Beurs is a journalistically independent production, made possible in part by Saxo. About the makers: Jelle Maasbach is a presenter of BNR Beurs and a freelance financial journalist. His favorite stock to talk about is Disney, though he seems to be alone in that. He has been on board since the very first episode of BNR Beurs. Maxim van Mil is a presenter of BNR Beurs and a journalist at BNR, where he focuses on the financial markets and developments in the tech world. He gets most enthusiastic when he can talk about ASML, or quintessentially Dutch companies like Ahold or ABN Amro. Jorik Simonides is a presenter of BNR Beurs and an economics editor and reporter at BNR. He is happiest when, for once, the topic is not AI.
Milou Brand is a presenter of BNR Beurs, a freelance podcast maker and a columnist at the Financieele Dagblad. Jochem Visser is a presenter of BNR Beurs, makes Beursnerd XL and is an editor at BNR Zakendoen and the podcast Onder Curatoren. Ask him about obscure corners of the financial markets and he will tell you why it is actually even more fun than you thought. About the podcast: With BNR Beurs you always start the new trading day prepared. In a short 25 minutes we bring you up to speed on all the latest developments on the trading floor. We don't stop at the AEX or Wall Street, but also tell you where other opportunities lie. And we don't stick to the numbers: every day we also look for interpretation from sharp guests and experts. Whether you are an experienced investor or just taking your first steps on the stock market, the podcast offers valuable insights for your investment strategy. By focusing on both the short and the long term, BNR Beurs helps listeners separate the market's noise from what matters. From Musk to Microsoft and from Ahold to ASML: we tell you what is on investors' minds, who moves the markets, and what that means for your portfolio. See omnystudio.com/listener for privacy information.
AI development in Europe may be going better than expected. More and more money is being invested in the sector, such as ASML's recent one-billion-euro investment in the French Mistral. Max Welling's CuspAI also raised over 100 million dollars from, among others, Nvidia and Prosus. But is Europe moving fast enough? And is the momentum for AI development in Europe strong enough? Joining Ben van der Burg and Joe van Burik in this episode: Paul van der Boor, VP AI at Prosus, and Michiel Bakker, Assistant Professor at MIT and AI researcher at Google DeepMind. Questions, comments or suggestions? Email us at degrotetechshow@bnr.nl. De Grote Tech Show: Tech is changing our world; in De Grote Tech Show (DGTS) you hear how. Joe van Burik and Ben van der Burg speak with innovation leaders and analyze the tech world, from AI to cybersecurity and from social media to quantum computers. Tech podcast: De Grote Tech Show (DGTS) is the tech podcast (and radio show) for anyone who really wants to understand technology and innovation. About AI (artificial intelligence), chips, cloud, cybersecurity, social media, quantum and entertainment. Here you hear how technology is changing the world and what that means for companies, investors and everyone in society. At DGTS you get the analyses, insights and interviews that matter. With in-depth conversations and sharp analyses we map the most important technological developments. Innovations: Every week we speak with leading figures in the tech world: CEOs, professors, entrepreneurs and investors working on tomorrow's innovations. What do the latest AI models mean for work and creativity? How do European startups keep competing with the still-mighty Silicon Valley and opaque China? These are not superficial interviews, but in-depth conversations with the protagonists who really make an impact.
The technological revolution is in full swing and affects every aspect of our lives, from the way we work and communicate to geopolitical power relations. That is why we cover not only the technological side, but also its economic and societal implications. Besides the big innovations, we look at the companies shaping these developments. What is the strategy of big-tech companies such as Google, Apple, Microsoft and Meta? How is the rivalry between Nvidia, AMD and Intel changing the chip market? What do new laws and rules in Europe and the US mean for the future of technology? Analyses: In addition, every week on De Grote Tech Show, exclusively as an extra podcast, you hear Joe van Burik and Ben van der Burg review the week in tech. They analyze the latest news, put developments in perspective and offer sharp insights into what is really going on. From breakthroughs in AI and the rise of new social media platforms to the impact of geopolitical tensions on the semiconductor industry. A guest from their network regularly joins to bring extra expertise and deepen the debate. The combination of journalistic sharpness, technical knowledge and a critical eye produces a program that goes beyond the headlines and places technology in a broader context. AI: Whether it concerns the risks and opportunities of AI technology or Europe's position in the global technological race, De Grote Tech Show offers the background, nuance and insights needed to truly understand these developments. That makes the program indispensable for professionals in the tech sector, investors who want to make strategic decisions, and anyone who wants to know which innovations are shaping our future. With its combination of exclusive interviews, expert interpretation and a critical view of innovation, DGTS offers a unique mix of depth and topicality.
About the makers: Joe van Burik follows and analyzes the most important developments in tech, with sharpness, pace and humor. You can hear him daily on BNR Nieuwsradio with the most important news in the Tech Update, and he presents De Grote Tech Show. Joe has followed the world of video games for two decades, which he discusses with enthusiastic colleagues and guests in the podcast All in the Game. He previously worked as a motoring and motorsport journalist for various other media and wrote the book Formule 1 voor Dummies. Ben van der Burg is a tech entrepreneur and former top speed skater. Ben is obsessed with technology and gets excited about gadgets, electric cars, good business models and the future. Besides De Grote Tech Show, he can also be heard weekly as presenter of De Technoloog. He also regularly appears on Vandaag Inside, Goedemorgen Nederland and other talk shows to discuss the latest technology news. Daniël Mol is editor and compiler of De Grote Tech Show. He himself presents BNR's Cryptocast and also makes De Technoloog. He is also Ben's regular stand-in on De Grote Tech Show; in Joe's absence, Joe is replaced by Iwan Verrips, co-host and editor-in-chief of the Ochtendspits with Bas van Werven on BNR Nieuwsradio.
We wanted to know who CuspAI's customers are, what the platform looks like, and how it brings together AI and domain knowledge. Max also explains what his product will ultimately look like.
Artificial intelligence can be applied far more broadly than generating text and images or letting cars drive autonomously. Think of developing new proteins, or other materials, that should greatly help the broader sustainability effort and the fight against climate change. And CuspAI, the new company of Dutch AI scientist Max Welling, aims to do exactly that. Joe van Burik and Ben van der Burg dig into this subject in this episode. The market, at least, seems enthusiastic about Welling's idea: CuspAI raised 30 million dollars in its seed round. The company is advised by AI pioneer Geoffrey Hinton, and a collaboration with Meta has already been signed and sealed. In short, a promising future seems in store for CuspAI. But how does the company work? Which use cases are there, exactly? And isn't "solving" climate problems with AI contradictory, given the technology's high energy costs? We ask Max Welling, professor of machine learning at the University of Amsterdam and co-founder of CuspAI. Holography no longer sci-fi? Holography may make you think of Star Wars or other sci-fi productions, or perhaps of rapper Tupac's virtual performance at Coachella. The technology is now taking shape, partly thanks to a Dutch company. Holoconnects sees holography as a solution to the lack of human contact that comes with the rise of the smartphone, AI and all the other technology keeping us glued to a screen. And there turns out to be demand, from hospitals and hotels among others. Holoconnects founder and CEO Andre Smith explains what solution his technology brings. Robocab: The robot event of Elon Musk's Tesla kicked up a lot of dust, partly because of his spectacular vision of the future of autonomous driving (and the taxi industry), and partly because of a future that seems hard to achieve.
The robot waiters at the event, for instance, turned out to be operated by Tesla employees. And "Full Self-Driving"? Whether that will succeed is still an open question. And that is before we even get to regulation, in Europe for example. We discuss Musk's vision of the future with Marieke Martens, Professor of Automated Vehicles and Human Interaction at TU Eindhoven and Director of Science Mobility at TNO. More tech podcasts? Then listen to the Cryptocast, All in the Game, De Technoloog and the Tech Update.
Today's guest is high-energy physicist Johannes Brandstetter (PhD), who studied the newly discovered Higgs boson at the CMS experiment at CERN.
In this episode, Neil interviews Professor Max Welling, one of the foremost experts in machine learning, about AI4Science: the use of machine learning and AI to solve challenges in various scientific disciplines. They discuss and debate data-driven versus physics-driven approaches, the potential for foundational models, the importance of open-sourcing models and data, the challenges of data sharing in science, and the ethical considerations of releasing powerful models. The conversation covers the role of academia, industry, and startups in driving innovation, with a focus on the field of AI. Professor Welling discusses the advantages and limitations of each sector and shares his experience in academia, big tech companies, and startups. The conversation then shifts to Professor Welling's new company, CuspAI, which focuses on material discovery for carbon capture using metal-organic frameworks and machine learning. Prof. Welling provides insights into the potential applications of this technology and the importance of addressing sustainability challenges. The conversation concludes with a discussion of career advice and the future of AI for science.
Links:
CuspAI: https://www.cusp.ai
University website: https://staff.fnwi.uva.nl/m.welling/
Google Scholar: https://scholar.google.com/citations?user=8200InoAAAAJ&hl=en
AI4Science NeurIPS 2023 workshop: https://neurips.cc/virtual/2023/workshop/66548
AI4Science NeurIPS 2022 workshop: https://nips.cc/virtual/2022/workshop/50019
Aurora paper: https://arxiv.org/abs/2405.13063
Chapters:
00:00 Introduction to the Neil Ashton Podcast
00:39 Guest Introduction: Professor Max Welling
11:12 Data-Driven vs. Physics-Driven Approaches in Machine Learning for Science
17:00 Foundational Models for Science
23:08 Discussion around Open-Sourcing Models and Data
29:26 Ethical Considerations in Releasing Powerful Models for Public Use
33:14 Collaboration and Shared Resources in Addressing Global Challenges
34:07 The Role of Academia, Industry, and Startups
43:27 Material Discovery for Carbon Capture
52:02 Career Advice for Early-Stage Researchers
01:01:07 The Future of AI for Science and Sustainability
Keywords: AI for science, machine learning, data-driven approaches, physics-driven approaches, foundational models, open sourcing, data sharing, ethical considerations, blockchain technology, academia, industry, startups, material discovery, carbon capture, metal-organic frameworks, sustainability, career advice, future of AI for science
We can now chat quite decently, drive autonomously and generate images with artificial intelligence, but now physics, too, is getting help from AI. Computers can nowadays simulate the laws of nature. How that works, and above all what it yields, is what De Technoloog asks Max Welling, professor of machine learning at the UvA and head of the Microsoft Research Lab. Guest: Max Welling. Links: more Max Welling! Video: YouTube. Hosts: Herbert Blankesteijn & Ben van der Burg. Editor: Daniël Mol.
Artificial intelligence is becoming ever more important and shows up in more and more applications, but will this technology soon be able to solve the world's big problems? It just might, for instance by forcing important breakthroughs in science. How exactly does that work? You'll hear it from Max Welling, professor of Machine Learning at the UvA and researcher at Microsoft Research. A data vault as an answer to big tech: The NPO and all the other major media players in the Netherlands are joining forces and taking on big tech. The stake: keeping as much advertising money as possible within the Dutch market. How? By building a personal data vault for every Dutch citizen, using Solid. Martijn van Dam stepped down as an executive at the NPO and now focuses entirely on developing this new piece of software. Apple's "demolition robot" Daisy: Reporter and BNR's own Apple adept Martijn de Rijk visited Apple in Breda to see Daisy, the biggest Apple ever. Daisy takes used iPhones apart in no time, preparing the most important components for a new life. Martijn found out, among other things, why Apple built this robot. More tech podcasts? Listen to All in the Game, De Technoloog and the Tech Update.
Guests in BNR's Big Five of Artificial Intelligence: Gerrit Timmer, founder and CSO at Ortec; Haroon Sheikh, professor of Strategic Governance of Global Technologies at the VU and staff member of the WRR; Max Welling, professor of machine learning at the UvA; Carlo van de Weijer, General Manager of the Eindhoven AI Systems Institute; Hervé Huisman, CEO of Gradyent.
Machine learning often plays a key role in major AI breakthroughs. Which developments are the most promising? Our guest is Max Welling, professor of machine learning at the UvA and head of the Microsoft Research Lab.
Unlocking the challenge of molecular simulation has the potential to yield significant breakthroughs in how we tackle such societal issues as climate change, drug discovery, and the treatment of disease, and Microsoft is ramping up its efforts in the space. In this episode, Chris Bishop, Lab Director of Microsoft Research Cambridge, welcomes renowned machine learning researcher Max Welling to the Microsoft Research team as head of the new Amsterdam lab. Connecting over their shared physics background and vision for molecular simulation, Bishop and Welling explore several fascinating topics, including a future in which machine learning and quantum computing will be used in tandem to model molecules, the power of machine learning to provide “on demand” data in this space, and goals for the first year and beyond at the Amsterdam lab. https://www.microsoft.com/research
Today we had a fantastic conversation with Professor Max Welling, VP of Technology, Qualcomm Technologies Netherlands B.V. Max is a strong believer in the power of data and computation and its relevance to artificial intelligence. There is a fundamental blank-slate paradigm in machine learning: experience and data alone currently rule the roost. Max wants to build a house of domain knowledge on top of that blank slate. Max thinks there are no predictions without assumptions, no generalization without inductive bias. The bias-variance tradeoff tells us that we need to use additional human knowledge when data is insufficient. Max Welling has pioneered many of the most sophisticated inductive priors in DL models developed in recent years, allowing us to use deep learning with non-Euclidean data, i.e. on graphs/topology (a field we now call "geometric deep learning"), or allowing network architectures to recognise new symmetries in the data, for example gauge or SE(3) equivariance. Max has also brought many other concepts from his physics playbook into ML, for example quantum and even Bayesian approaches. This is not an episode to miss; it might be our best yet! Panel: Dr. Tim Scarfe, Yannic Kilcher, Alex Stenlake 00:00:00 Show introduction 00:04:37 Protein Fold from DeepMind -- did it use SE(3) transformer? 00:09:58 How has machine learning progressed 00:19:57 Quantum Deformed Neural Networks paper 00:22:54 Probabilistic Numeric Convolutional Neural Networks paper 00:27:04 Ilia Karmanov from Qualcomm interview mini segment 00:32:04 Main Show Intro 00:35:21 How is Max known in the community?
00:36:35 How Max nurtures talent; freedom and relationship are key 00:40:30 Selecting research directions and guidance 00:43:42 Priors vs experience (bias/variance trade-off) 00:48:47 Generative models and GPT-3 00:51:57 Bias/variance trade-off -- when do priors hurt us 00:54:48 Capsule networks 01:03:09 Which old ideas should we revive 01:04:36 Hardware lottery paper 01:07:50 Greatness can't be planned (Kenneth Stanley reference) 01:09:10 A new sort of peer review and originality 01:11:57 Quantum computing 01:14:25 Quantum deformed neural networks paper 01:21:57 Probabilistic numeric convolutional neural networks 01:26:35 Matrix exponential 01:28:44 Other ideas from physics, i.e. chaos, holography, renormalisation 01:34:25 Reddit 01:37:19 Open review system in ML 01:41:43 Outro
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Today we’re joined by Max Welling, Vice President of Technologies at Qualcomm Netherlands, and Professor at the University of Amsterdam. In case you missed it, Max joined us last year to discuss his work on Gauge Equivariant CNNs and Generative Models - the 2nd most popular episode of 2019. In this conversation, we explore the concept and Max’s work in neural augmentation, and how it’s being deployed for channel tracking and other applications. We also discuss their current work on federated learning and incorporating the technology on devices to give users more control over the privacy of their personal data. Max also shares his thoughts on quantum mechanics and the future of quantum neural networks for chip design. The complete show notes for this episode can be found at twimlai.com/talk/398. This episode is sponsored by Qualcomm Technologies.
Today we’re joined by Babak Ehteshami Bejnordi, a Research Scientist at Qualcomm. Babak works closely with former guest Max Welling and is currently focused on conditional computation, which is the main driver for today’s conversation. We dig into a few papers in great detail including one from this year’s CVPR conference, Conditional Channel Gated Networks for Task-Aware Continual Learning. We also discuss the paper TimeGate: Conditional Gating of Segments in Long-range Activities, and another paper from this year’s ICLR conference, Batch-Shaping for Learning Conditional Channel Gated Networks. We cover how gates are used to drive efficiency and accuracy, while decreasing model size, how this research manifests into actual products, and more! For more information on the episode, visit twimlai.com/talk/385. To follow along with the CVPR 2020 Series, visit twimlai.com/cvpr20. Thanks to Qualcomm for sponsoring today’s episode and the CVPR 2020 Series!
For the last decade, advances in machine learning have come from two things: improved compute power and better algorithms. These two areas have become somewhat siloed in most people’s thinking: we tend to imagine that there are people who build hardware, and people who make algorithms, and that there isn’t much overlap between the two. But this picture is wrong. Hardware constraints can and do inform algorithm design, and algorithms can be used to optimize hardware. Increasingly, compute and modelling are being optimized together, by people with expertise in both areas. My guest today is one of the world’s leading experts on hardware/software integration for machine learning applications. Max Welling is a former physicist and currently works as VP Technologies at Qualcomm, a world-leading chip manufacturer, in addition to which he’s also a machine learning researcher with affiliations at UC Irvine, CIFAR and the University of Amsterdam.
Prof. Max Welling shared insightful perspectives on distributed machine learning, edge computing, the differences in AI research between Europe and the US, and highlights from 6 research papers accepted at NeurIPS 2019. View full video and more inspiring AI talks at Robin.ly: http://bit.ly/2uJJqgQ Follow us for timely updates of new AI talks: Youtube: https://www.youtube.com/channel/UC3RT8-VstUCH5B9uI3IIXBQ LinkedIn: https://www.linkedin.com/company/robinly/ Twitter: https://twitter.com/JoinRobinly Newsletter: http://bit.ly/2TptMBy
Jarno Duursma is a speaker, trendwatcher and author. In this podcast interview we mainly talk about artificial intelligence. What are the opportunities of this technological development? What are the dangers, such as the impact of deepfakes on our politics? And how do you, as a human, deal with the speed and power of artificial intelligence? After the interview with Jarno, the podcast also features a conversation I had with Professor Max Welling (University of Amsterdam and Qualcomm) about artificial intelligence, including an explanation of its various subfields. The show notes can be found at https://biohackingimpact.nl
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Today we’re joined by Tijmen Blankevoort, a staff engineer at Qualcomm who leads their compression and quantization research teams. Tijmen is also co-founder of ML startup Scyfer, along with Qualcomm colleague Max Welling, whom we spoke with back on episode 267. In our conversation with Tijmen we discuss: • the ins and outs of compression and quantization of ML models, specifically neural networks, • how much models can actually be compressed, and the best ways to achieve compression, • a few recent papers, including “The Lottery Ticket Hypothesis.” Check out the full show notes at twimlai.com/talk/292.
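The episode above discusses quantizing neural network weights to fewer bits. As a minimal illustration of the general idea (a generic uniform affine scheme, not Tijmen's or Qualcomm's actual method), mapping a float32 weight tensor to 8-bit integers and back looks roughly like this:

```python
import numpy as np

def quantize_uint8(w):
    """Uniform affine post-training quantization of a float tensor to 8 bits."""
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / 255.0 or 1.0      # step size; guard constant tensors
    zero_point = int(np.round(-w_min / scale))  # integer code representing 0.0
    q = np.clip(np.round(w / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover an approximate float tensor from the integer codes."""
    return (q.astype(np.float32) - zero_point) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)
q, s, z = quantize_uint8(w)
w_hat = dequantize(q, s, z)
# each weight is recovered to within about one quantization step
assert float(np.abs(w - w_hat).max()) <= s + 1e-6
```

The payoff is a 4x reduction in memory and bandwidth (float32 to uint8), at the cost of rounding error bounded by the step size; real deployments layer calibration, per-channel scales, and quantization-aware training on top of this basic scheme.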
Today we’re joined by Jeff Gehlhaar, VP of Technology and Head of AI Software Platforms at Qualcomm. As we’ve explored in our conversations with both Gary Brotman and Max Welling, Qualcomm has a hand in tons of machine learning research and hardware, and our conversation with Jeff is no different. We discuss: • How the various training frameworks fit into the developer experience when working with their chipsets. • Examples of federated learning in the wild. • The role inference will play in data center devices and more. The complete show notes for this episode can be found at twimlai.com/talk/280. Register for TWIMLcon now at twimlcon.com. Thanks to Qualcomm for their sponsorship of today's episode! Check out what they're up to at twimlai.com/qualcomm.
Today we’re joined by Max Welling, research chair in machine learning at the University of Amsterdam, VP of Technologies at Qualcomm, and Fellow at the Canadian Institute for Advanced Research (CIFAR). In our conversation, we discuss: • Max’s research at Qualcomm AI Research and the University of Amsterdam, including his work on Bayesian deep learning, graph CNNs and gauge-equivariant CNNs, and power efficiency for AI via compression, quantization, and compilation. • Max’s thoughts on the future of the AI industry, in particular the relative importance of models, data, and compute. The complete show notes for this episode can be found at twimlai.com/talk/267. Thanks to Qualcomm for sponsoring today's episode! Check out what they're up to at twimlai.com/qualcomm.
In this episode, Byron and Max Welling of Qualcomm discuss the nature of intelligence and its relationship with intuition, evolution, and need. Episode 82: A Conversation with Max Welling
Three guests discuss this in this VNO-NCW podcast. Because what is AI, what is possible, and what are the ethical sensitivities? 1) Max Welling, professor of machine learning at the Universiteit van Amsterdam: 'People always change along with technology. When we had an axe, we chopped down trees and went to live in houses. Perhaps in the future we will carry chips in our bodies.' 2) Rina Joosten, co-founder of Seedlink, which matches employers and job applicants based on speech, 'because whether someone is, say, a team player is very hard to tell from a CV'. 3) Jeroen van den Hoven, professor of ethics at TU Delft, who works with scientists and engineers to identify ethical objections early on, and to address them at the drawing board. 'The Netherlands leads the way in this.' NB: This podcast was previously released in preparation for the 57th Bilderberg Conference. The interviews with Welling, Joosten, and Van den Hoven are the same. In that earlier version, VNO-NCW chairman Hans de Boer also gives his view on AI, at the beginning and end of the podcast. This podcast was made by Adinda Akkermans, Alfred Koster, and Tom Loois.
The future of technology and innovation: an opportunity or a threat? This theme is discussed in this first VNO-NCW podcast, released as an extra alongside VNO-NCW's Bilderberg Conference 2019, with Hans de Boer (chairman of VNO-NCW) and three guests: 1) Max Welling, professor of machine learning at the Universiteit van Amsterdam: 'People always change along with technology. When we had an axe, we chopped down trees and went to live in houses. Perhaps in the future we will carry chips in our bodies.' 2) Rina Joosten, co-founder of Seedlink, which matches employers and job applicants based on speech, 'because whether someone is, say, a team player is very hard to tell from a CV'. 3) Jeroen van den Hoven, professor of ethics at TU Delft, who works with scientists and engineers to identify ethical objections early on, and to address them at the drawing board. 'The Netherlands leads the way in this.' This podcast was made by Adinda Akkermans, Alfred Koster, and Tom Loois.
Max Welling has been professor of machine learning at the Faculty of Science (Faculteit der Natuurwetenschappen, Wiskunde en Informatica) of the Universiteit van Amsterdam since 2013. According to the UvA press office, his research includes 'learning systems and their application to the analysis of large-scale datasets, how insights from neuroscience and cognitive science about human learning can be applied to machine learning, and how we can design machines that keep on learning and automatically adapt the complexity of their internal model to new information.' In short, tonight Jelle Brandt Corstius talks artificial intelligence with Welling, whose Paradiso lecture 'De empathische machine. Wanneer zullen machines ons echt begrijpen?' ('The empathic machine. When will machines really understand us?') is scheduled for 12 June 2016.
In episode fifteen we talk with Max Welling of the University of Amsterdam and the University of California, Irvine, about his work with extremely large datasets, big business, and machine learning. Max was program co-chair of NIPS in 2013, when Mark Zuckerberg visited the conference, an event Max wrote about very thoughtfully. We also take a listener question about the relationship between machine learning and artificial intelligence. Plus, we get an introduction to change point detection. For more on change point detection, check out the work of Paul Fearnhead of Lancaster University. Ryan also has a paper on the topic from way back when.
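For listeners new to the change point detection topic mentioned above, here is a minimal sketch of the basic idea (a generic CUSUM-style statistic for a single mean shift, not Fearnhead's or Ryan's specific methods):

```python
import numpy as np

def cusum_changepoint(x):
    """Locate the most likely single mean-shift point in a 1-D series.

    Scores each candidate split k by how far the prefix sum deviates
    from its expectation under a constant mean, normalized so every
    split is on the same scale (a classic CUSUM statistic).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    total = x.sum()
    prefix = np.cumsum(x)[:-1]               # sums of x[:k] for k = 1..n-1
    k = np.arange(1, n)                      # candidate split points
    stat = np.abs(prefix - k * total / n) / np.sqrt(k * (n - k) / n)
    return int(np.argmax(stat)) + 1          # index where the new regime starts

rng = np.random.default_rng(1)
# 200 samples around mean 0, then 200 samples around mean 3
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(3.0, 1.0, 200)])
cp = cusum_changepoint(x)
# the detected change point should land near the true shift at index 200
assert abs(cp - 200) < 20
```

Real change point methods extend this in many directions: multiple change points, unknown variance, online detection, and Bayesian formulations; this sketch only conveys the core intuition of scanning splits and scoring each one.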
In the first episode of Talking Machines we meet our hosts, Katherine Gorman (nerd, journalist) and Ryan Adams (nerd, Harvard computer science professor), and explore some of the interviews you'll be able to hear this season. Today we hear some short clips on big issues; we'll get technical, but today is all about introductions. We start with Kevin Murphy of Google talking about his textbook, which has become a standard in the field. Then we turn to Hanna Wallach of Microsoft Research NYC and UMass Amherst and hear about the founding of WiML (Women in Machine Learning). Next we discuss academia's relationship with business with Max Welling of the University of Amsterdam, program co-chair of the 2013 NIPS conference (Neural Information Processing Systems). Finally, we sit down with three pillars of the field, Yann LeCun, Yoshua Bengio, and Geoff Hinton, to hear where the field has been and where it might be headed.