Podcasts about casp

  • 119 podcasts
  • 199 episodes
  • 45m avg duration
  • 1 episode every other week
  • Latest: Feb 25, 2026




Latest podcast episodes about casp

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Editor's note: CuspAI raised a $100m Series A in September and is rumored to have reached a unicorn valuation. They have all-star advisors from Geoff Hinton to Yann LeCun and a team of deep domain experts to tackle this next frontier in AI applications.

In this episode, Max Welling traces the thread connecting quantum gravity, equivariant neural networks, diffusion models, and climate-focused materials discovery (yes, there is one!).

We begin with a provocative framing: experiments as computation. Welling describes the idea of a "physics processing unit": a world in which digital models and physical experiments work together, with nature itself acting as a kind of processor. It's a grounded but ambitious vision of AI for science: not replacing chemists, but accelerating them.

Along the way, we discuss:

* Why symmetry and equivariance matter in deep learning
* The tradeoff between scale and inductive bias
* The deep mathematical links between diffusion models and stochastic thermodynamics
* Why materials, not software, may be the real bottleneck for AI and the energy transition
* What it actually takes to build an AI-driven materials platform

Max reflects on moving from curiosity-driven theoretical physics (including work with Gerard 't Hooft) toward impact-driven research in climate and energy.
The result is a conversation about convergence: physics and machine learning, digital models and laboratory experiments, long-term ambition and incremental progress.

Full Video Episode

Timestamps

* 00:00:00 – The Physics Processing Unit (PPU): Nature as the Ultimate Computer
  * Max introduces the idea of a Physics Processing Unit — using real-world experiments as computation.
* 00:00:44 – From Quantum Gravity to AI for Materials
  * Brandon frames Max's career arc: VAE pioneer → equivariant GNNs → materials startup founder.
* 00:01:34 – Curiosity vs Impact: How His Motivation Evolved
  * Max explains the shift from pure theoretical curiosity to climate-driven impact.
* 00:02:43 – Why CuspAI Exists: Technology as Climate Strategy
  * Politics struggles; technology scales. Why materials innovation became the focus.
* 00:03:39 – The Thread: Physics → Symmetry → Machine Learning
  * How gauge symmetry, group theory, and relativity informed equivariant neural networks.
* 00:06:52 – AI for Science Is Exploding (Not Emerging)
  * The funding surge and why AI-for-Science feels like a new industrial era.
* 00:07:53 – Why Now? The Two Catalysts Behind AI for Science
  * Protein folding, ML force fields, and the tipping-point moment.
* 00:10:12 – How Engineers Can Enter AI for Science
  * Practical pathways: curriculum, workshops, cross-disciplinary training.
* 00:11:28 – Why Materials Matter More Than Software
  * The argument that everything—LLMs included—rests on materials innovation.
* 00:13:02 – Materials as a Search Engine
  * The vision: automated exploration of chemical space, like querying Google.
* 00:14:48 – Inside CuspAI: The Platform Architecture
  * Generative models + multi-scale digital twin + experiment loop.
* 00:21:17 – Automating Chemistry: Human-in-the-Loop First
  * Start manual → modular tools → agents → increasing autonomy.
* 00:25:04 – Moonshots vs Incremental Wins
  * Balancing lighthouse materials with paid partnerships.
* 00:26:22 – Why Breakthroughs Will Still Require Humans
  * Automation is vertical-specific and iterative.
* 00:29:01 – What Is Equivariance (In Plain English)?
  * Symmetry in neural networks, explained with the bottle example.
* 00:30:01 – Why Not Just Use Data Augmentation?
  * The optimization trade-off between inductive bias and data scale.
* 00:31:55 – Generative AI Meets Stochastic Thermodynamics
  * His upcoming book and the unification of diffusion models and physics.
* 00:33:44 – When the Book Drops (ICLR?)

Transcript

Max: I want to think of it as what I would call a physics processing unit, a PPU, right? You have digital processing units, and then you have physics processing units. So it's basically nature doing computations for you. It's the fastest computer known, the fastest possible, even. It's a bit hard to program, because you have to do all these experiments; those are quite bulky, it's a very large thing you have to do. But in a way it is a computation, and that's the way I want to see it. You can do computations in a data center, and then you can ask nature to do some computations. Your interface with nature is a bit more complicated.
But then these things will have to seamlessly work together to get to a new material that you're interested in.

[01:00:44:14 - 01:01:34:08]
Brandon: Yeah, it's a pleasure to have Max Welling as a guest today. Max has done so much over his career that I've been so excited about. If you're in the deep learning community, you probably know Max for his work on variational autoencoders, which have literally stood the test of time. If you're a scientist, you probably know him for his pioneering work on graph neural networks and equivariance. And if you're in materials science, you probably know him for his new startup, CuspAI. Max has a long history of working on lots of cool problems. You started in quantum gravity, which I think is very different from all of these other things you worked on. The first question, for AI engineers and for scientists: what is the thread in how you think about problems? What is the thread in the type of things which excite you? And how do you decide what is the next big thing you want to work on?

[01:01:34:08 - 01:02:41:13]
Max: So it has actually evolved a lot. In my young days, let's say, I would just follow what I found super interesting. I have this kind of sensor, which I think many people have but maybe don't really use very much, which is that you get this feeling of being very excited about some problem. It could be: what's inside of a black hole, or what's at the boundary of the universe, or what is quantum mechanics actually all about? And I followed that basically throughout my career. But I have to say that as you get older, this changes a little bit, in the sense that a new dimension comes into it, which is impact. Going into two-dimensional quantum gravity, you're pretty much guaranteed there's going to be no impact from what you do; maybe a few papers, but not in this world, at this energy scale.
As I get closer to retirement, which is fortunately still 10 years away or so, I do want to make a positive impact in the world. And I got pretty worried about climate change.

[01:02:43:15 - 01:03:19:11]
Max: I think politics seems to have a hard time solving it, especially these days. And so I thought: better to work on it from the technology side. And that's why we started CuspAI. But there are also a lot of really interesting science problems in materials science. So it's kind of combining the impact you can make with the interesting science. It's these two dimensions: working on things where you feel there's something very deep going on, and on the other hand, trying to build tools that can actually make a real impact in the world.

[01:03:19:11 - 01:03:39:23]
RJ: So the thread: when I look back at the different things that you worked on, some of them seem pretty connected, like the physics to equivariance and graph neural networks, maybe. And that seems to be somewhat related to CuspAI. Do you have a thread through there?

[01:03:39:23 - 01:06:52:16]
Max: Yeah. So physics is the thread. Having spent a lot of time in theoretical physics, I think there are, first, very fundamental and exciting questions, like things that haven't actually been figured out in quantum gravity. That is really the frontier. There are also a lot of mathematical tools that you can use, right? In particle physics, for instance, but also in general relativity, symmetries play an enormously important role. And this goes all the way to gauge symmetries as well. And so applying these kinds of symmetries to machine learning was something I thought of as a very deep and interesting mathematical problem.
I did this with Taco Cohen, and Taco was the main driver behind it; it went all the way from simple rotational symmetries to gauge symmetries on spheres and things like that. And Maurice Weiler, who's also here, was a very good PhD student with me; he wrote an entire book, which I can really recommend, about the role of symmetries in AI and machine learning. So I find this a very deep and interesting problem. More recently, I've taken a somewhat different path, which is the relationship between diffusion models and a field called stochastic thermodynamics. This is basically thermodynamics, which is a theory of equilibrium, but then formulated for out-of-equilibrium systems. And it turns out that the mathematics we use for diffusion models, but also for reinforcement learning, for Schrödinger bridges, for MCMC sampling, is the same mathematics as this physical theory of non-equilibrium systems. And that got me very excited. I taught a course in Muizenberg, in South Africa, close to Cape Town, at the African Institute for Mathematical Sciences (AIMS), and I turned that into a book. Two years later, the book was finished; I've sent it to the publisher. It's about the deep relationship between free energy, diffusion models, basically generative AI, and stochastic thermodynamics. So it's always some kind of... I find physics very deep. I also think a lot about quantum mechanics, and it's a completely weird theory that actually nobody really understands. And there's a very interesting story, which is maybe good to tell, to connect my PhD back to where I am now. I did my PhD with a Nobel laureate, Gerard 't Hooft. He is the most brilliant man I've ever met; he was never wrong about anything as long as I've known him.
And now he says quantum mechanics is wrong, and he has a new theory of quantum mechanics. Nobody understands what he's saying, even though what he's writing down is not mathematically very complex, but he's trying to address this understandability, let's say, of quantum mechanics head on. I find it very courageous, and I'm completely fascinated by it. So I'm also trying to think about: okay, can I actually understand quantum mechanics in a more mundane way, without all the weird multiverses and collapses and things like that? So physics has always been the thread, and I'm trying to apply the physics to the machine learning to build better algorithms.

[01:06:52:16 - 01:07:05:15]
Brandon: You are still very involved in understanding physics and the world, and just applying it to machine learning or introducing new formalisms. That's really cool.

[01:07:05:15 - 01:07:18:02]
Max: Yes, I would say I'm not contributing much to physics, but I'm contributing to the interface between physics and AI. And that's called AI for science, or science for AI; it's actually a new discipline that's emerging.

[01:07:18:02 - 01:07:18:19]
Speaker 5: Yeah.

[01:07:18:19 - 01:07:45:14]
Max: And it's not just emerging, it's exploding, I would say. That's the better term, because investments have gone from the hundreds of millions into the billions now. There's now actually a startup by Jeff Bezos with a $6.2 billion seed round. Right. Insane. I guess it's the largest ever, I think. And that's in this field, AI for science. It tells you something, that we are creating a new bubble here.

[01:07:46:15 - 01:07:53:28]
Brandon: So why do you think that is? What has changed that has motivated people to start working on AI-for-science-type problems?

[01:07:53:28 - 01:08:49:17]
Max: So there are two reasons, actually.
One is that people have been applying the new tools from AI to the sciences, which is quite natural. And there are, I think, two big examples: protein folding is a big one, and the other one is machine learning force fields, or something called machine-learned interatomic potentials. Both of them have been very successful. Both also had something to do with symmetries, which is a little cool. And people in the AI sciences saw an opportunity to apply the tools they had developed beyond ad placement or multimedia applications, to something that could actually make a very positive impact on society, like health, drug development, materials for the energy transition, carbon capture. These are all really cool, impactful applications.

[01:08:50:19 - 01:09:42:14]
Max: Beyond that, the science itself is also very interesting. I would say the fact that these two fields are coming together, and that we're now at the point where we can actually model these things effectively and move the needle on some of these scientific methodologies, is also a very unique moment. People recognize that, okay, now we're at the cusp of something new, which is what the company is named after. We're at the cusp of something new. And of course that always creates a lot of energy. It's like a virgin field, a green field; nobody's been there. I can rush in and start harvesting there, right? And I think that's also what's causing a lot of enthusiasm in the field.

[01:09:42:14 - 01:10:12:18]
RJ: Basically, the people that listen to this podcast will be AI engineers in the field, and maybe they don't have a strong science background but are excited. I would say most AI practitioners, ML engineers, or scientists would consider themselves scientists, and they have some background: a little bit of physics, a little bit in college, maybe even graduate school, whether they've been working for a while or are starting out. How does somebody who is not a scientist on a day-to-day basis get involved?

[01:10:12:18 - 01:10:14:28]
Max: Well, they can read my book once it's out.

[01:10:16:07 - 01:11:05:24]
Max: More seriously, we should create curricula that are on this interface. I'm not sure there are many, but some universities already have actual courses you can take, maybe online courses. These workshops where we are now are actually very good as well. And we should probably have more tutorials before the workshop starts. I've actually proposed this at some point: maybe first have an hour of tutorial so that people new to the field can get in. There's a lot out there. Much of it is of course inaccessible, but I would say we will create many more books and other content that is more accessible, including this podcast. So I think it will come. And these days you can watch videos and things; there's a huge amount of content you can go and see.

[01:11:05:24 - 01:11:28:28]
Brandon: So maybe a follow-up to that: that's how people learn and get involved, but why should they get involved? A lot of our audience will be interested in AI engineering, but they may be looking for bigger impacts in the world. What opportunities does AI for science provide them to make an impact, to change the world, that working in the world of pure bits would not?

[01:11:28:28 - 01:11:40:06]
Max: So my view is that underlying almost everything is a material. We are focusing a lot on LLMs now, which is kind of the software layer.

[01:11:41:06 - 01:11:56:05]
Max: I would say, if you think very hard, underlying everything is a material.
So underlying an LLM is a GPU, and underlying a GPU is a wafer on which we have to deposit materials. Do we want to wait a little bit?

[01:12:02:25 - 01:12:11:06]
Max: Underlying everything is a material. So I was saying: there's the LLM; underlying the LLM is a GPU on which it runs. In order to make that GPU,

[01:12:12:08 - 01:12:43:20]
Max: you have to put materials down on a wafer and shine EUV light on it in order to etch the structures in. But that's now an actual materials problem, because more or less we've reached the limits of scaling things down, and now we are trying to improve further with new materials. So that's a fundamental materials problem. We need to get through the energy transition fast if we don't want to mess up this world. And so there are, for instance, batteries; that's a complete materials problem. There are fuel cells.

[01:12:44:23 - 01:13:01:16]
Max: There are solar panels. They can now make solar panels with new perovskite layers on top of the silicon layers that can capture, theoretically, up to 50% of the light, where now we're at, I don't know, maybe 22% or something. So these are huge changes, all by materials innovation.

[01:13:02:21 - 01:13:47:15]
Max: And yeah, I think wherever you go, I can probably dig deep enough and then tell you: well, actually, the very foundation of what you're doing is a materials problem. And so I think it's just very nice to work on this very foundation. And maybe this is also something that's happening now: we can start to search through this materials space. This has never been the case, right? The normal way of working for scientists is: you read papers, you come up with a hypothesis, you do an experiment, you learn, et cetera. That's a very slow process. Now we can treat this as a search engine.
Like we search the internet, we can now search the space of all possible molecules: not just the ones that people have made or that exist in the universe, but all of them.

[01:13:48:21 - 01:14:42:01]
Max: And we can make this fully automated. That's the hope, right? It becomes a tool where you type what you want, something starts spinning, some experiments get going, and then out comes a list of materials. And then you look at it and say, maybe not, and you refine your query a little bit. And you do research with this search engine, where a huge amount of computation and experimentation is happening somewhere far away, in some lab or some data center or something like this. I find this a very promising view of how we can build a much better materials layer underneath almost everything. And also more sustainable materials. Our plastics are polluting the planet. What if you come up with a plastic that destroys itself after, I don't know, a few weeks, and actually becomes a fertilizer? These are things that are not impossible at all. These things can be done, right? And we should do it.

[01:14:42:01 - 01:14:47:23]
RJ: Can you tell us a little bit, just generally, about CuspAI? And then I have a ton of questions.

[01:14:47:23 - 01:14:48:15]
Speaker 5: Yeah.

[01:14:48:15 - 01:17:49:10]
Max: So CuspAI started about 20 months ago, and it was because I was worried, and I'm still worried, about climate change. I realized that in order to stay within two degrees, let's say, we would not only have to reduce our emissions to zero by 2050, but then spend another half century, or even a century, removing carbon dioxide from the atmosphere: not by reducing emissions, but actually removing it, at a rate that's about half the rate at which we now emit it. And that is an unsolved problem. But if we don't solve it, two degrees is not going to happen, right?
It's going to be much more. And I don't think people quite understand how bad that can be; four degrees is very bad. So this technology needs to be developed. That was my and my co-founder Chad Edwards' motivation to start this startup. And also because we saw the technology was ready, which is also very good; the time is right to do it. In the meanwhile, we've grown to about 40 people. We've collected about 130 million of investment into the company, which for a European company is quite a lot. It's interesting that right after that, other startups got even more, which tells you how fast this is growing. We've built the platform, of course, but it's for a series of material classes, and it needs to be constantly expanded to new material classes. And it can be made more automated, because as we put LLMs in, the whole thing gets more and more automated. And now we're moving to high-throughput experimentation: connecting the actual platform, which is computational, to the experiments, so that you can also get fast feedback from experiments. I don't want to think of experiments as just something you do at the end, although that's what we've been doing so far. I want to think of it as what I would call a physics processing unit, a PPU: you have digital processing units, and then you have physics processing units. It's basically nature doing computations for you. It's the fastest computer known, the fastest possible, even. It's a bit hard to program, because you have to do all these experiments, and those are quite bulky; it's a very large thing you have to do. But in a way it is a computation, and that's the way I want to see it. You can do computations in a data center, and then you can ask nature to do some computations.
Your interface with nature is a bit more complicated. But then these things will have to seamlessly work together to get to a new material that you're interested in. And that's the vision we have. We don't say superintelligence, because I don't quite know what it means and I don't want to oversell it. But I do want to automate this process and put a very powerful tool in the hands of the chemists and the materials scientists.

[01:17:49:10 - 01:18:01:02]
Brandon: That actually brings up a question I wanted to ask you. First of all, can you talk about your platform, to whatever degree you can? Explain how it works, and what your thought process was in developing it?

[01:18:01:02 - 01:20:47:22]
Max: Yeah, I would say it's not rocket science, in the sense that the design I wrote down at the very beginning is still more or less the design, although you add things. I wasn't thinking very much about multi-scale models, and as the company iterated we found that multi-scale is actually very important. In the beginning I wasn't thinking very much about self-driving labs either, but now I think we're at the stage where we should be adding that. So there are bits and details that we're adding, but more or less it's what you see in the slide decks here as well: there is a generative component that you have to train to generate candidates, and then there is a digital twin, a multi-scale, multi-fidelity digital twin, where you walk through the steps of the ladder. You do the cheap things first and weed out everything that's obviously not useful, and then you go to more and more expensive things later. And so you narrow things down to a small number; those go into an experiment, you do the experiment, get feedback, et cetera. Now, what has also been added more recently are more agentic parts.
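The generate-then-filter ladder Welling describes can be sketched as a minimal loop. Everything below is a hypothetical stand-in, not CuspAI's actual models: the stage functions score candidates at increasing cost, and each rung keeps only a fraction of the pool so the expensive stages see a short list.

```python
import random

def generate_candidates(n):
    """Generative component: propose candidate materials (here, just IDs)."""
    return [f"candidate-{i}" for i in range(n)]

def cheap_heuristic(c):       # stand-in for e.g. composition filters / ML force field
    return random.random()

def expensive_simulation(c):  # stand-in for e.g. a DFT-level digital-twin stage
    return random.random()

def screen(candidates, stages, keep_fractions):
    """Walk the multi-fidelity ladder: score with each stage in order of cost,
    keep the top fraction, so later (expensive) stages see only a shortlist."""
    pool = candidates
    for score, frac in zip(stages, keep_fractions):
        ranked = sorted(pool, key=score, reverse=True)
        pool = ranked[: max(1, int(len(ranked) * frac))]
    return pool

random.seed(0)
shortlist = screen(generate_candidates(1000),
                   stages=[cheap_heuristic, expensive_simulation],
                   keep_fractions=[0.05, 0.1])
print(len(shortlist))  # 5 -- these few would go on to real experiments
```

The point of the structure is economic: the cheap stage runs 1000 times, the expensive stage only 50 times, and the lab only sees 5 candidates.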
We have agents that search the literature, actually the chemical literature, and come up with chemical suggestions for doing experiments. We have agents which autonomously orchestrate all of the computations and the experiments that need to be done. They're in various stages of maturity, and they can be continuously improved, I would say. So the design of that thing is not surprising; what is surprisingly hard is to actually build it. That's where the moat is: in the data that you can get your hands on, and in actually building the platform. And I would say there are two people in particular I want to call out: Felix Hunker, who is building the scientific part of the platform, and Sandra de Maria, who is building the stack, kind of the MLOps part of the platform. And recently we also added Aron Walsh to our team, who is a very accomplished scientist from Imperial College. We're very happy about that; he's going to be chief science officer. And we also have a partnerships team that seeks out the customers, because this is one thing I find very important: in principle, it's so complex to actually bring a material to the real world that you must do this in collaboration with the domain experts, which are typically the companies. So we only start to invest in a direction if we find a good industrial partner to go on that journey with us.

[01:20:47:22 - 01:20:55:12]
Brandon: Makes a lot of sense.
Over the evolution of the platform, did you find that human intervention...

[01:20:56:18 - 01:21:17:01]
Brandon: I guess you could imagine two directions when you start out. One is making everything purely automatic, automated, agentic, and so on, and then later on finding that you need more human input and feedback at different steps. Or maybe you start out having human feedback at lots of steps and then figure out ways to remove it?

[01:21:17:01 - 01:22:39:18]
Max: It's the second one. So you build tools. It's much more modular than you think. It's like: we need these tools for this application, and those tools for that one. So you build all these tools, and then, in the beginning, you go through a workflow manually: first this tool, then that tool, then the next. You put them in a workflow, and then you figure out: oh, actually, this porous material that we're trying to make collapses if you shake it a bit. Okay, then you add a new tool that tests for stability. Right. And so there are more and more tools. And then you build the agent, which could be a Bayesian optimizer, or it could be an actual LLM, maybe trained to be a good chemist, that will then start to use all these tools in the right way, in the right order. But in the beginning, it's you as a chemist putting the workflow together. And then you think about: okay, how am I going to automate this? One very easy question you can ask yourself: every time somebody who is not a super expert in DFT wants to do a calculation, they have to go to somebody who knows DFT.
And so could you start to automate that away? Make it so user-friendly that you actually do the right DFT for the right problem, for the right length of time, and can actually assess whether the outcome is good, et cetera. So you start to automate small pieces and bigger pieces, et cetera, and in the end the whole thing is automated.

[01:22:39:18 - 01:22:53:25]
Brandon: So your philosophy is that you want to provide a set of specific tools that make the scientists making decisions better informed, and less so to create a fully automated process.

[01:22:53:25 - 01:23:22:01]
Max: It's sort of the same as what you're saying, because, yes, we want to automate, but we don't see something very soon where the chemist, the domain expert, is out of the loop. But it is a retreat, right? It's like: okay, first you need an expert to tell you precisely how to set the parameters of the DFT calculation. Okay, maybe we can take that out; maybe we can automate that. And so increasingly more of these things are going to be removed.

[01:23:22:01 - 01:23:22:19]
Speaker 5: Yeah.

[01:23:22:19 - 01:24:33:25]
Max: In the end, the vision is that it will be a search engine where somebody, a chemist, will type things and get candidates, but the chemist will still decide what is a good material and what is not out of that list. The vision of a completely dark lab, where you can close the door and just say, "find something interesting," and it figures out what's interesting and comes back with, oh, I found this new material, blah, blah, blah: that's not the vision I have. Not for a long time. So for me, it's really about empowering the domain experts sitting in the companies and in universities to be much faster in developing their materials.
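The "automate the DFT expert away" step Welling describes can be sketched as a small preset-picker plus an outcome check. The functionals, cutoffs, and tolerances below are illustrative placeholders, not real defaults from any DFT code or from CuspAI's platform:

```python
# Illustrative presets only: a real system would be far richer and validated
# by domain experts, but the shape of the automation is a lookup plus a check.
PRESETS = {
    # coarse system kind -> hypothetical calculation settings
    "metal":     {"xc": "PBE",    "cutoff_ev": 520, "kspacing": 0.15},
    "insulator": {"xc": "HSE06",  "cutoff_ev": 520, "kspacing": 0.25},
    "porous":    {"xc": "PBE+D3", "cutoff_ev": 600, "kspacing": 0.30},
}

def choose_settings(system_kind: str) -> dict:
    """Map a coarse description of the material to calculation inputs,
    instead of routing every request through a human DFT expert."""
    if system_kind not in PRESETS:
        raise ValueError(f"no preset for {system_kind!r}; ask an expert")
    return PRESETS[system_kind]

def converged(energies, tol=1e-4):
    """Assess the outcome automatically: did the last steps stop moving?"""
    return len(energies) >= 2 and abs(energies[-1] - energies[-2]) < tol

settings = choose_settings("porous")
print(settings["xc"])                            # PBE+D3
print(converged([-12.31, -12.3402, -12.34021]))  # True
```

Each such wrapper removes one trip to the expert; stacking them is the "small pieces, then bigger pieces" progression he describes.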
And I should say, it's also good to be a little humble at times, because it is very complicated to make a material and bring it into the real world. And there are people that have been doing this for their entire lives. I wonder if they scratch their heads and say: well, how are you going to completely automate that away in the next five years? I don't think that's going to happen at all.

[01:24:35:01 - 01:24:39:24]
Max: So to me, it's an increasingly powerful tool in the hands of the chemists.

[01:24:39:24 - 01:25:04:02]
RJ: I have a question. You've talked before about getting people interested based on having a big breakthrough in materials versus incremental change. I'm curious what you think about the platform you have now and what you're stepping towards. Are you chasing the big change, or is this incremental? They're not mutually exclusive, obviously, but what do you think about that?

[01:25:04:02 - 01:26:04:27]
Max: We follow a mixed strategy. So we are definitely going after a big material. Again, we do this with a partner. I'm not going to disclose precisely what it is, but we have our own long-term goal. You could call it a lighthouse, or a moonshot, or whatever, but it is going to be a really impactful material that we want to develop as a proof point: that it can be done, that it will make it into the real world, and that AI was essential in actually making it happen. At the same time, we're also quite happy to work with companies that have more modest goals. I would say one mode is a very deep partnership, where you go on a journey with a company and that's a long-term commitment together. And the other one is somebody saying: I need a force field; can you help me train this force field and then maybe analyze this particular problem for me? And I'll pay you a bunch of money for that, and then maybe after that we'll see.
And that's fine too. But we prefer the deep partnerships, where we can really change something for the good.

[01:26:04:27 - 01:26:22:02]
RJ: Yeah. And do you feel like, from a platform standpoint, you're ready for that? Again, not asking you to disclose proprietary secret sauce, but generally speaking, what needs to happen to get from where we are to those big breakthroughs?

[01:26:22:02 - 01:28:40:01]
Max: What I find interesting about this field is that every time you build something, it's immediately useful. Unlike quantum computing or nuclear fusion, where you work for 20, 30, 40 years and there's nothing, nothing, nothing, and then it has to happen, and when it happens it's huge. It's quite different here, because every time you introduce something, you go to a customer and ask: what do you need? So we work, let's say, on a problem like water filtration; we want to remove PFAS from water. We do this with a company, Kemira. They are a deep partner for us, and we're on a journey together. I think the breakthrough will happen with a lot of human-in-the-loop, because the chemists have a whole lot more knowledge of their field, and it's us who will help them with training and new methods. And in that interface, those interactions, something beautiful will happen. And that will have to happen first before this field really takes off, I think, in the sense that it's not a bubble; let's put it that way. So that people see it as actually real, what's happening. So in the beginning it will be with a lot of humans in the loop, I would say, and I would hope we will have this breakthrough material before everything is completely automated, because that will take a while. And also, it is very vertical-specific.
So it's like, completely automating something for problem A, you know, you can probably achieve it, but then you'll sort of have to start over again for problem B, because, you know, your experimental setup looks very different, the machines that you characterize your materials with look very different. Even the models in your platform will have to be retrained and fine tuned to the new class. So every time, you know, you have a lot of learnings to transfer, but also, you know, the problems are actually different. And so, yes, I would want that breakthrough material before it's completely automated, which I think is kind of a long term vision. And I would say every time you move to something new, you'll have to start retraining and humans will have to come in again and say, okay, so what does this problem look like? And now sort of, you know, point the machine again in the new direction and then use it again.[01:28:40:01 - 01:28:47:17]RJ: For the non-scientists among us (me included, though I'm a bit of a scientist), there's a lot of terminology. You mentioned DFT,[01:28:49:00 - 01:29:01:11]RJ: and equivariance we've talked about. Can you explain, sort of in engineering terms, at an engineer's level of sophistication: what is equivariance?[01:29:01:11 - 01:29:55:01]Max: So equivariance is the infusion of symmetry in neural networks. So if I build a neural network, let's say, that needs to recognize this bottle, right, and then I rotate the bottle, it will then actually have to completely start again, because it has no idea that the input that represents a rotated bottle is actually a rotated bottle. It just doesn't understand that. Right. If you build equivariance in, then basically once you've trained it in one orientation, it will understand it in any other orientation. So that means you need a lot less data to train these models. And these are constraints on the weights of the model.
So basically you have to constrain the weights in such a way that the model understands it. And you can build it in, you can hard code it in. And yeah, the symmetry groups can be, you know, translations, rotations, but also permutations. In a graph neural network, they're permutations, and then physics, of course, has many more of these groups.[01:29:55:01 - 01:30:01:08]RJ: To play devil's advocate, why not just use data augmentation, where your bottle is in all the different orientations?[01:30:01:08 - 01:30:58:23]Max: It's an option, it's just not exact. It's like, why would you go through the work of doing all that? You would really need an infinite number of augmentations to get it completely right, where you can also just hard code it in. Now, I have to say, sometimes data augmentation actually works even better than hard coding the equivariance in. And this has something to do with the fact that if you constrain the weights before the optimization starts, the optimization surface or objective becomes more complicated. And so it's harder to find good minima. So there is also a complicated interplay, I think, between the optimization process and these constraints you put in your network. And so, yeah, you'll hear kind of contradicting claims in this field. Some people say, for certain applications, it just works better than not doing it. And sometimes you hear other people say, if you have a lot of data and you can do data augmentation, then it's actually easier to optimize and it actually works better than putting the equivariance in.[01:30:58:23 - 01:31:07:16]Brandon: Do you think there's kind of a bitter lesson for mathematically founded models and strategies for doing deep learning?[01:31:07:16 - 01:31:46:06]Max: Yeah, ultimately it's a trade-off between data and inductive bias. So if your inductive bias is not perfectly correct, you have to be careful, because you put a ceiling on what you can do.
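The contrast Max draws between built-in equivariance and data augmentation can be made concrete with a toy NumPy sketch (illustrative only, not CuspAI code): a model that reads its features from pairwise distances is rotation-invariant by construction, while one that reads raw coordinates is not.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rotation(rng):
    # QR decomposition of a random matrix yields a random orthogonal matrix
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    return q

def raw_features(points, w):
    # plain linear layer on flattened coordinates: NOT rotation-invariant
    return np.tanh(points.ravel() @ w)

def invariant_features(points, w):
    # features built from pairwise distances only: invariant by construction
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return np.tanh(np.sort(d[np.triu_indices(len(points), k=1)]) @ w)

points = rng.normal(size=(5, 3))   # a toy "molecule" of 5 atoms
rotated = points @ random_rotation(rng).T

w_raw = rng.normal(size=15)        # 5 atoms x 3 coords
w_inv = rng.normal(size=10)        # 5 choose 2 pairwise distances

# the invariant model gives the same output in any orientation...
assert np.allclose(invariant_features(points, w_inv),
                   invariant_features(rotated, w_inv))
# ...while the raw model generally does not
assert not np.allclose(raw_features(points, w_raw),
                       raw_features(rotated, w_raw))
```

Augmentation would shrink the raw model's gap only statistically, over many sampled rotations; the distance-based construction gets it exactly, which is why it needs less data.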
But if you know the symmetry is there, it's hard to imagine there isn't a way to actually leverage it. But yeah, so there is a bitter lesson. And one of the bitter lessons is you should always make sure your architecture can scale, unless you have a tiny data set, in which case it doesn't matter. But, you know, the same bitter lessons, or lessons that you can draw in LLM space, are eventually going to be true in this space as well, I think.[01:31:47:10 - 01:31:55:01]RJ: Can you talk a little bit about your upcoming book and tell the listeners, like, what's exciting about it? Yeah, I should read it.[01:31:55:01 - 01:33:42:20]Max: So this book is called Generative AI and Stochastic Thermodynamics. It basically lays bare the fact that the mathematics that goes into generative AI, which is the technology to generate images and videos, and the mathematics of this field of non-equilibrium statistical mechanics, which studies systems of molecules that are just moving around and relaxing to the ground state, or that you can control to be in a certain state, are actually identical. And so that's fascinating. And in fact, what's interesting is that Geoff Hinton and Radford Neal already wrote down the variational free energy for machine learning a long time ago. And there's also Carl Friston's work on the free energy principle and active inference. But now we've related it to this very new field in physics, which is called stochastic thermodynamics or non-equilibrium thermodynamics, which has its own very interesting theorems, like fluctuation theorems, which we don't typically talk about, but which we can learn a lot from. And I think it can sort of now start to cross-fertilize.
When we see that these things are actually the same, we can, like we did for symmetries, look at this new theory that's out there, developed by these very smart physicists, and say, okay, what can we take from here that will make our algorithms better? At the same time, we can use our models to now help the scientists do better science. And so it becomes a beautiful cross-fertilization between these two fields. The book is rather technical, I would say. It takes all sorts of things that have been done in stochastic thermodynamics, and all sorts of models that have been done in the machine learning literature, and it basically equates them to each other. And I think, hopefully, that sense of unification will be revealing to people.[01:33:42:20 - 01:33:44:05]RJ: Wait, and when is it out?[01:33:44:05 - 01:33:56:09]Max: Well, it depends on the publisher now. But I hope in April. I'm going to give a keynote at ICLR, and it would be very nice to have this book in my hand there. But you know, it's hard to control these kinds of timelines.[01:33:56:09 - 01:33:58:19]RJ: Yeah, I'm looking forward to it. Great.[01:33:58:19 - 01:33:59:25]Max: Thank you very much. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.latent.space/subscribe

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

This podcast features Gabriele Corso and Jeremy Wohlwend, co-founders of Boltz and authors of the Boltz Manifesto, discussing the rapid evolution of structural biology models from AlphaFold to their own open-source suite, Boltz-1 and Boltz-2. The central thesis is that while single-chain protein structure prediction is largely “solved” through evolutionary hints, the next frontier lies in modeling complex interactions (protein-ligand, protein-protein) and generative protein design, which Boltz aims to democratize via open-source foundations and scalable infrastructure.Full Video PodOn YouTube!Timestamps* 00:00 Introduction to Benchmarking and the “Solved” Protein Problem* 06:48 Evolutionary Hints and Co-evolution in Structure Prediction* 10:00 The Importance of Protein Function and Disease States* 15:31 Transitioning from AlphaFold 2 to AlphaFold 3 Capabilities* 19:48 Generative Modeling vs. Regression in Structural Biology* 25:00 The “Bitter Lesson” and Specialized AI Architectures* 29:14 Development Anecdotes: Training Boltz-1 on a Budget* 32:00 Validation Strategies and the Protein Data Bank (PDB)* 37:26 The Mission of Boltz: Democratizing Access and Open Source* 41:43 Building a Self-Sustaining Research Community* 44:40 Boltz-2 Advancements: Affinity Prediction and Design* 51:03 BoltzGen: Merging Structure and Sequence Prediction* 55:18 Large-Scale Wet Lab Validation Results* 01:02:44 Boltz Lab Product Launch: Agents and Infrastructure* 01:13:06 Future Directions: Developability and the “Virtual Cell”* 01:17:35 Interacting with Skeptical Medicinal ChemistsKey SummaryEvolution of Structure Prediction & Evolutionary Hints* Co-evolutionary Landscapes: The speakers explain that breakthrough progress in single-chain protein prediction relied on decoding evolutionary correlations where mutations in one position necessitate mutations in another to conserve 3D structure.* Structure vs.
Folding: They differentiate between structure prediction (getting the final answer) and folding (the kinetic process of reaching that state), noting that the field is still quite poor at modeling the latter.* Physics vs. Statistics: RJ posits that while models use evolutionary statistics to find the right “valley” in the energy landscape, they likely possess a “light understanding” of physics to refine the local minimum.The Shift to Generative Architectures* Generative Modeling: A key leap in AlphaFold 3 and Boltz-1 was moving from regression (predicting one static coordinate) to a generative diffusion approach that samples from a posterior distribution.* Handling Uncertainty: This shift allows models to represent multiple conformational states and avoid the “averaging” effect seen in regression models when the ground truth is ambiguous.* Specialized Architectures: Despite the “bitter lesson” of general-purpose transformers, the speakers argue that equivariant architectures remain vastly superior for biological data due to the inherent 3D geometric constraints of molecules.Boltz-2 and Generative Protein Design* Unified Encoding: Boltz-2 (and BoltzGen) treats structure and sequence prediction as a single task by encoding amino acid identities into the atomic composition of the predicted structure.* Design Specifics: Instead of a sequence, users feed the model blank tokens and a high-level “spec” (e.g., an antibody framework), and the model decodes both the 3D structure and the corresponding amino acids.* Affinity Prediction: While model confidence is a common metric, Boltz-2 focuses on affinity prediction—quantifying exactly how tightly a designed binder will stick to its target.Real-World Validation and Productization* Generalized Validation: To prove the model isn't just “regurgitating” known data, Boltz tested its designs on 9 targets with zero known interactions in the PDB, achieving nanomolar binders for two-thirds of them.* Boltz Lab Infrastructure: The newly 
launched Boltz Lab platform provides “agents” for protein and small molecule design, optimized to run 10x faster than open-source versions through proprietary GPU kernels.* Human-in-the-Loop: The platform is designed to convert skeptical medicinal chemists by allowing them to run parallel screens and use their intuition to filter model outputs.TranscriptRJ [00:05:35]: But the goal remains to, like, you know, really challenge the models, like, how well do these models generalize? And, you know, we've seen in some of the latest CASP competitions, like, while we've become really, really good at proteins, especially monomeric proteins, you know, other modalities still remain pretty difficult. So it's really essential, you know, in the field that there are, like, these efforts to gather, you know, benchmarks that are challenging. So it keeps us in line, you know, about what the models can do or not.Gabriel [00:06:26]: Yeah, it's interesting you say that, like, in some sense, CASP, you know, at CASP 14, a problem was solved and, like, pretty comprehensively, right? But at the same time, it was really only the beginning. So you can say, like, what was the specific problem you would argue was solved? And then, like, you know, what is remaining, which is probably quite open.RJ [00:06:48]: I think we'll steer away from the term solved, because we have many friends in the community who get pretty upset at that word. And I think, you know, fairly so. But the problem that was, you know, that a lot of progress was made on was the ability to predict the structure of single chain proteins. So proteins can, like, be composed of many chains. And single chain proteins are, you know, just a single sequence of amino acids. And one of the reasons that we've been able to make such progress is also because we take a lot of hints from evolution. So the way the models work is that, you know, they sort of decode a lot of hints. That comes from evolutionary landscapes. 
So if you have, like, you know, some protein in an animal, and you go find the similar protein across, like, you know, different organisms, you might find different mutations in them. And as it turns out, if you take a lot of the sequences together, and you analyze them, you see that some positions in the sequence tend to evolve at the same time as other positions in the sequence, sort of this, like, correlation between different positions. And it turns out that that is typically a hint that these two positions are close in three dimension. So part of the, you know, part of the breakthrough has been, like, our ability to also decode that very, very effectively. But what it implies also is that in absence of that co-evolutionary landscape, the models don't quite perform as well. And so, you know, I think when that information is available, maybe one could say, you know, the problem is, like, somewhat solved. From the perspective of structure prediction, when it isn't, it's much more challenging. And I think it's also worth also differentiating the, sometimes we confound a little bit, structure prediction and folding. Folding is the more complex process of actually understanding, like, how it goes from, like, this disordered state into, like, a structured, like, state. And that I don't think we've made that much progress on. But the idea of, like, yeah, going straight to the answer, we've become pretty good at.Brandon [00:08:49]: So there's this protein that is, like, just a long chain and it folds up. Yeah. And so we're good at getting from that long chain in whatever form it was originally to the thing. But we don't know how it necessarily gets to that state. And there might be intermediate states that it's in sometimes that we're not aware of.RJ [00:09:10]: That's right. And that relates also to, like, you know, our general ability to model, like, the different, you know, proteins are not static. They move, they take different shapes based on their energy states. 
And I think we are also not that good at understanding the different states that the protein can be in, and at what frequency, what probability. So I think the two problems are quite related in some ways. Still a lot to solve. But I think it was very surprising at the time, you know, that even with these evolutionary hints we were able to make such dramatic progress.Brandon [00:09:45]: So I want to ask, why do the intermediate states matter? But first, I kind of want to understand, why do we care what proteins are shaped like?Gabriel [00:09:54]: Yeah, I mean, proteins are kind of the machines of our body. You know, the way that all the processes that we have in our cells work is typically through proteins, sometimes other molecules, through sort of intermediate interactions. And through those interactions, we have all sorts of cell functions. And so when we try to understand, you know, a lot of biology, how our body works, how diseases work, we often try to boil it down to, okay, what is going right in the case of our normal biological function, and what is going wrong in the case of the disease state. And we boil it down to kind of, you know, proteins and other molecules and their interactions. And so when we try predicting the structure of proteins, it's critical to have an understanding of those interactions. It's a bit like the difference between having a list of parts that you would put into a car and seeing the car in its final form; seeing the car really helps you understand what it does. On the other hand, going to your question of why we care about how the protein folds, or how the car is made, to some extent it's that sometimes something goes wrong. There are, you know, cases of proteins misfolding.
In some diseases and so on, if we don't understand this folding process, we don't really know how to intervene.RJ [00:11:30]: There's this nice line in, I think it's in the AlphaFold 2 manuscript, where they sort of discuss why we're even hopeful that we can target the problem in the first place. And there's this notion that, well, for proteins that fold, the folding process is almost instantaneous, which is a strong, like, you know, signal that we might be able to predict this very constrained thing that the protein does so quickly. And of course that's not the case for all proteins, and there's a lot of really interesting mechanisms in the cells, but yeah, I remember reading that and thought, yeah, that's somewhat of an insightful point.Gabriel [00:12:10]: I think one of the interesting things about the protein folding problem, and part of the reason why people thought it was impossible, is that it used to be studied as kind of a classical example of an NP problem. Like, there are so many different types of shapes that these amino acids could take, and this grows combinatorially with the size of the sequence. And so there used to be a lot of more theoretical computer science thinking about and studying protein folding as an NP problem. And so it was very surprising, also from that perspective, seeing machine learning do this. So clearly there is some signal in those sequences, through evolution, but also through other things that us as humans are probably not really able to understand, but that these models have learned.Brandon [00:13:07]: And so Andrew White, we were talking to him a few weeks ago, and he said that he was following the development of this, and that there were actually ASICs that were developed just to solve this problem. So, again, there were many, many millions of computational hours spent trying to solve this problem before AlphaFold. And just to be clear, one thing that you mentioned was that there's this kind of co-evolution of mutations, and that you see this again and again in different species. So explain: why does that give us a good hint that they're close by to each other? Yeah.RJ [00:13:41]: Um, like, think of it this way: if I have, you know, some amino acid that mutates, it's going to impact everything around it. Right. In three dimensions. And so it's almost like the protein, through several, probably random, mutations and evolution, ends up sort of figuring out that this other amino acid needs to change as well for the structure to be conserved. The whole principle is that the structure is probably largely conserved, you know, because there's this function associated with it. And so it's really sort of different positions compensating for each other. I see.Brandon [00:14:17]: So those hints in aggregate give us a lot. Yeah. So you can start to get information about what is close to what, and then you can start to look at what kinds of folds are possible given the structure, and then what the end state is.RJ [00:14:30]: And therefore you can make a lot of inferences about what the actual total shape is. Yeah, that's right.
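The co-evolution signal described here can be estimated straight from a multiple sequence alignment. A toy sketch (not production code): build a synthetic MSA in which two positions carry compensating mutations, then score column pairs by mutual information; the coupled pair stands out.

```python
import numpy as np
from collections import Counter
from itertools import combinations
from math import log

# Toy multiple sequence alignment: 6 positions, many sequences.
# Positions 1 and 4 co-vary (a compensating-mutation pair);
# the other positions mutate independently.
rng = np.random.default_rng(1)
alphabet = list("ACDE")
msa = []
for _ in range(2000):
    seq = [rng.choice(alphabet) for _ in range(6)]
    seq[4] = {"A": "C", "C": "A", "D": "E", "E": "D"}[seq[1]]  # coupled pair
    msa.append(seq)

def mutual_information(col_i, col_j):
    """Plug-in mutual-information estimate between two alignment columns."""
    n = len(col_i)
    pi, pj = Counter(col_i), Counter(col_j)
    pij = Counter(zip(col_i, col_j))
    return sum(c / n * log((c / n) / (pi[a] / n * pj[b] / n))
               for (a, b), c in pij.items())

cols = list(zip(*msa))
scores = {(i, j): mutual_information(cols[i], cols[j])
          for i, j in combinations(range(6), 2)}
best_pair = max(scores, key=scores.get)
print(best_pair)  # the coupled columns score highest: (1, 4)
```

In practice the field uses better-calibrated statistics (for example average-product-corrected mutual information or direct coupling analysis), but the principle is the same: columns that co-vary hint at positions that are close in 3D.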
It's almost like, you know, you have this big three-dimensional valley where you're sort of trying to find these low energy states, and there's so much to search through that it's almost overwhelming. But these hints, they sort of maybe put you in an area of the space that's already kind of close to the solution, maybe not quite there yet. And there's always this question of how much physics these models are learning, you know, versus just pure statistics. And one of the things, at least I believe, is that once you're in that sort of approximate area of the solution space, then the models have some understanding of how to get you to the lower energy state. And so maybe they have some light understanding of physics, but maybe not quite enough, you know, to know how to navigate the whole space. Right. Okay.Brandon [00:15:25]: So we need to give it these hints to kind of get into the right valley, and then it finds the minimum or something. Yeah.Gabriel [00:15:31]: One interesting explanation of how AlphaFold works, which I think is quite insightful (of course, it doesn't cover the entirety of what AlphaFold does), is one I'll borrow from Sergey Ovchinnikov at MIT. The interesting thing about AlphaFold is that it's got this very peculiar architecture that we have seen, you know, used, and this architecture operates on this pairwise context between amino acids. And so the idea is that probably the MSA gives you this first hint about what potential amino acids are close to each other. MSA is multiple sequence alignment. Exactly. Yeah. Exactly. This evolutionary information. Yeah. And, you know, from this evolutionary information about potential contacts, it's almost as if the model is running some kind of Dijkstra-like algorithm, where it's sort of decoding: okay, these have to be close. Okay, then if these are close and this is connected to this, then this has to be somewhat close. And so you decode this, and that becomes basically a pairwise distance matrix. And then from this rough pairwise distance matrix, you decode the actual potential structure.Brandon [00:16:42]: Interesting. So there's kind of two different things going on, the coarse grained and then the fine grained optimizations. Interesting. Yeah. Very cool.Gabriel [00:16:53]: Yeah. You mentioned AlphaFold3. So maybe now is a good time to move on to that. So yeah, AlphaFold2 came out and it was, I think, fairly groundbreaking for this field. Everyone got very excited. A few years later, AlphaFold3 came out. Maybe for some more history, what were the advancements in AlphaFold3? And then I think after that we'll talk a bit about how it connects to Boltz. But anyway. Yeah. So after AlphaFold2 came out, you know, Jeremy and I got into the field, and with many others, the clear problem that was obvious after that was: okay, now we can do individual chains. Can we do interactions? Interactions between different proteins, proteins with small molecules, proteins with other molecules. And so why are interactions important? Interactions are important because, to some extent, that's the way that these machines, these proteins, have a function. You know, the function comes from the way that they interact with other proteins and other molecules. Actually, in the first place, the individual machines are often, as Jeremy was mentioning, not made of a single chain, but made of multiple chains. And then these multiple chains interact with other molecules to give them their function.
And on the other hand, you know, when we try to intervene on these interactions, think about a disease, think about a biosensor or many other cases, we are trying to design the molecules or proteins that interact in a particular way with what we would call a target protein, or target. This problem, after AlphaFold2, became clear as one of the biggest problems in the field to solve. Many groups, including ours and others, started making contributions to this problem of trying to model these interactions. And AlphaFold3 was a significant advancement on the problem of modeling interactions. And one of the interesting things they were able to do, while some of the rest of the field tried to model different interactions separately, you know, how a protein interacts with small molecules, how a protein interacts with other proteins, how RNA or DNA have their structure, is that they put everything together and trained very large models, with a lot of advances, including changing some of the key architectural choices, and managed to get a single model that was able to set a new state-of-the-art performance across all of these different modalities: protein with small molecules, which is critical to developing new drugs, protein with protein, and understanding interactions of proteins with RNA and DNA and so on.Brandon [00:19:39]: Just to satisfy the AI engineers in the audience, what were some of the key architectural and data changes that made that possible?Gabriel [00:19:48]: Yeah, so one critical one, which was not necessarily unique to AlphaFold3, as there were actually a few other teams, including ours, in the field that proposed this, was moving from, you know, modeling structure prediction as a regression problem.
That is, moving from a setting where there is a single answer and you're trying to shoot for that answer, to a generative modeling problem where you have a posterior distribution of possible structures and you're trying to sample from this distribution. And this achieves two things. One is it starts to allow us to model more dynamic systems. As we said, some of these systems can actually take multiple structures, and so you can now model that by modeling the entire distribution. And second, from a more core modeling perspective, when you move from a regression problem to a generative modeling problem, you are really tackling the way that you think about uncertainty in the model in a different way. So if the model is undecided between different answers, what's going to happen in a regression model is that it's going to try to make an average of those different answers it had in mind. When you have a generative model, what you're going to do is sample all these different answers, and then maybe use separate models to analyze those different answers and pick out the best. So that was one of the critical improvements. The other improvement is that they significantly simplified, to some extent, the architecture, especially of the final model that takes those pairwise representations and turns them into an actual structure. And that now looks a lot more like a traditional transformer than the very specialized equivariant architecture that it was in AlphaFold2.Brandon [00:21:41]: So this is a bitter lesson, a little bit.Gabriel [00:21:45]: There is some aspect of a bitter lesson, but the interesting thing is that it's very far from being, like, a simple transformer. This field is one of the, I argue, very few fields in applied machine learning where we still have architectures that are very specialized.
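The averaging failure mode described here is easy to see in miniature (a toy illustration, not from Boltz): when the same input can map to either of two valid conformations, the MSE-optimal regressor predicts their midpoint, a state that never actually occurs, while a sampler keeps both modes.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy ambiguity: the same input maps to one of two conformations,
# a coordinate at -1.0 or +1.0 (think: two valid folded states).
samples = rng.choice([-1.0, 1.0], size=10_000)

# A model trained with MSE converges to the conditional mean:
mse_optimal_prediction = samples.mean()   # ~0.0, which is neither state
assert abs(mse_optimal_prediction) < 0.05

# A generative model instead samples the posterior; every draw is a
# valid state, and both modes stay represented.
draws = rng.choice(samples, size=1_000)
assert set(np.unique(draws)) == {-1.0, 1.0}
```

A diffusion-style model trained on the same data behaves like the empirical sampler here: it places its mass on the two valid states rather than on their invalid midpoint.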
And, you know, there are many people that have tried to replace these architectures with simple transformers. And, you know, there is a lot of debate in the field, but I think most of the consensus is that the performance that we get from the specialized architecture is vastly superior to what we get through a simple transformer. Another interesting thing, staying on the modeling and machine learning side, which I think is somewhat counterintuitive coming from some of the other fields and applications, is that scaling hasn't really worked the same in this field. Now, you know, models like AlphaFold2 and AlphaFold3 are still very large models.RJ [00:29:14]: in a place, I think, where we had, you know, some experience working with the data and working with this type of model. And I think that put us already in a good place to produce it quickly. And, you know, I would even say, like, I think we could have done it quicker. The problem was, for a while, we didn't really have the compute. And so we couldn't really train the model. And actually, we only trained the big model once. That's how much compute we had. We could only train it once. And so while the model was training, we were finding bugs left and right. A lot of them that I wrote. And I remember I was sort of doing surgery in the middle, like stopping the run, making the fix, relaunching. And yeah, we never actually went back to the start. We just kept training it with the bug fixes along the way, which would be impossible to reproduce now. Yeah, yeah, no, that model has gone through such a curriculum that, you know, it learned some weird stuff.
But yeah, somehow by miracle, it worked out.Gabriel [00:30:13]: The other funny thing is that we were training most of that model through a cluster from the Department of Energy. But that's sort of a shared cluster that many groups use. And so we were basically training the model for two days, and then it would go back to the queue and stay a week in the queue. Oh, yeah. And so it was pretty painful. And so towards the end, with Evan, the CEO of Genesis, I was telling him a bit about the project and telling him about this frustration with the compute. And so luckily, you know, he offered to help. And so we got the help from Genesis to finish up the model. Otherwise, it probably would have taken a couple of extra weeks.Brandon [00:30:57]: Yeah, yeah.Brandon [00:31:02]: And then, and then there's some progression from there.Gabriel [00:31:06]: Yeah, so I would say that Boltz-1, but also these other sets of models that came around the same time, were a big leap from the previous open-source models, really approaching the level of AlphaFold3. But I would still say that, even to this day, there are some... specific instances where AlphaFold3 works better. I think one common example is antibody-antigen prediction, where AlphaFold3 still seems to have an edge in many situations. Obviously, these are somewhat different models. You run them, you obtain different results. So it's not always the case that one model is better than the other, but kind of in aggregate, we still... especially at the time.Brandon [00:32:00]: So AlphaFold3 is, you know, still having a bit of an edge.
We should talk about this more when we talk about BoltzGen, but how do you know one model is better than the other? Like, I make a prediction, you make a prediction, how do you know?Gabriel [00:32:11]: Yeah, so essentially, the great thing about structure prediction, and once we get into the design space of designing new small molecules and new proteins this becomes a lot more complex, but the great thing about structure prediction is that, a bit like, you know, CASP was doing, the way that you can evaluate models is that you train a model on the structures that were released across the field up until a certain time. And, you know, one of the things that we didn't talk about that was really critical in all this development is the PDB, which is the Protein Data Bank. It's this common resource, basically a common database, where every biologist publishes their structures. And so we can train on all the structures that were put in the PDB until a certain date. And then we basically look for recent structures, okay, which structures look pretty different from anything that was published before, because we really want to try to understand generalization.Brandon [00:33:13]: And then on these new structures, we evaluate all these different models. And so you just know when AlphaFold3 was trained, and you intentionally train to the same date or something like that. Exactly. Right. Yeah.Gabriel [00:33:24]: And so this is kind of the way that you can somewhat easily compare these models; obviously, that assumes that, you know, the training... You've always been very passionate about validation. I remember DiffDock, and then there was DiffDock-L and DockGen. You've thought very carefully about this in the past.
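The temporal-split protocol described above, in schematic form (toy IDs and dates, not a real pipeline; real evaluations also filter the test set for dissimilarity to anything released before the cutoff):

```python
from datetime import date

# Toy PDB-style records: (structure id, release date).
entries = [
    ("1abc", date(2019, 5, 1)),
    ("2def", date(2020, 8, 15)),
    ("3ghi", date(2021, 9, 30)),
    ("4jkl", date(2022, 3, 2)),
    ("5mno", date(2023, 1, 20)),
]

def temporal_split(entries, cutoff):
    """Train on everything released up to the cutoff, test on what came after."""
    train = [pdb_id for pdb_id, released in entries if released <= cutoff]
    test = [pdb_id for pdb_id, released in entries if released > cutoff]
    return train, test

# Match the baseline model's training cutoff so comparisons are apples-to-apples.
train_ids, test_ids = temporal_split(entries, cutoff=date(2021, 9, 30))
print(train_ids, test_ids)  # ['1abc', '2def', '3ghi'] ['4jkl', '5mno']
```

Gabriel's caveat still applies: the split is only meaningful if the held-out structures also look different from everything published before the cutoff, which is why generalization benchmarks add a similarity filter on top of the date filter.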
Like, actually, I think DockGen is a really funny story, I don't know if you want to talk about that. It's an interesting... Yeah, I think one of the amazing things about putting things open source is that we get a ton of feedback from the field. And sometimes we get great feedback from people who really like it. But honestly, most of the time, and that's also maybe the most useful feedback, it's people sharing where it doesn't work. At the end of the day, and this is true across other fields of machine learning, it's critical to set clear benchmarks to make progress. And as you start making progress on certain benchmarks, you need to improve the benchmarks and make them harder and harder. That's the progression of how the field operates. And so the DockGen example: we published this initial model called DiffDock in my first year of PhD, which was one of the early models to try to predict interactions between proteins and small molecules, that we put out a year after AlphaFold2 was published. Now, on the one hand, on the benchmarks we were using at the time, DiffDock was doing really well, outperforming some of the traditional physics-based methods. But on the other hand, when we started giving these tools to many biologists, one example was the group of Nick Polizzi at Harvard that we collaborated with, we started noticing this clear pattern where, for proteins that were very different from the ones the model was trained on, the model was struggling. And so it seemed clear that this is probably where we should put our focus.
And so we first developed, with Nick and his group, a new benchmark, and then went after it and said, okay, what can we change about the current architecture to improve this generalization? And this is the same thing we're still doing today: where does the model not work? And then, once we have that benchmark, let's try throwing at the problem everything we have, any ideas that we have.

RJ [00:36:15]: And there's a lot of healthy skepticism in the field, which I think is great. And it's very clear that there's a ton of things the models don't really work well on, but I think one thing that's probably undeniable is just the pace of progress, how much better we're getting every year. And so if you assume any constant rate of progress moving forward, I think things are going to look pretty cool at some point in the future.

Gabriel [00:36:42]: ChatGPT was only three years ago. Yeah, I mean, it's wild, right?

RJ [00:36:45]: Yeah, it's one of those things. Being in the field, you don't see it coming, you know? And hopefully we'll continue to have as much progress as we've had the past few years.

Brandon [00:36:55]: So this is maybe an aside, but I'm really curious. You get this great feedback from the community by being open source, right? My question is partly, okay, if you open source, everyone can copy what you did, but it's also maybe about balancing priorities, right? Like, all my users are saying, I want this, there are all these problems with the model. But my customers don't care, right? So how do you think about that?
Yeah.

Gabriel [00:37:26]: So I would say a couple of things. One is, part of our goal with Boltz, and this is also established as the mission of the public benefit company that we started, is to democratize access to these tools. But one of the reasons we realized Boltz needed to be a company, that it couldn't just be an academic project, is that putting a model on GitHub is definitely not enough to get chemists and biologists across academia, biotech, and pharma to use your model in their therapeutic programs. And so a lot of what we think about at Boltz, beyond just the models, is all the layers that come on top of the models to get from those models to something that can really enable scientists in the industry. That goes into building the right workflows, ones that take in, for example, the data and try to answer directly the questions that the chemists and biologists are asking, and then also building the infrastructure. This is to say that even with models fully open, we see a ton of potential for products in the space. And the critical part about a product is that, even with an open source model, running the model is not free. As we were saying, these are pretty expensive models, and especially, and maybe we'll get into this, these days we're seeing pretty dramatic inference-time scaling of these models, where the more you run them, the better the results are. But there you start getting to a point where compute and compute costs become a critical factor.
And so putting a lot of work into building the right infrastructure, building the optimizations and so on, really allows us to provide a much better service than just the open source models. That said, even though with a product we can provide a much better service, I do still think, and we will continue to put a lot of our models open source, because the critical role of open source models is helping the community progress on the research, from which we all benefit. So we'll continue, on the one hand, to put some of our base models open source so that the field can build on top of them, and, as we discussed earlier, we learn a ton from the way the field uses and builds on top of our models. But then we try to build a product that gives the best experience possible to scientists, so that a chemist or a biologist doesn't need to spin up a GPU and set up our open source model in a particular way. A bit like, even though I am a computer scientist, a machine learning scientist, I don't necessarily take an open source LLM and try to spin it up myself; I just open the ChatGPT app or Claude Code and use it as an amazing product. We want to give the same experience.

Brandon [00:40:40]: I heard a good analogy yesterday that a surgeon doesn't want the hospital to design a scalpel, right?

Brandon [00:40:48]: So just buy the scalpel.

RJ [00:40:50]: You wouldn't believe the number of people, even in my short time between AlphaFold3 coming out and the end of the PhD, the number of people that would reach out just for us to run AlphaFold3 for them, or things like that.
Just because, you know, Boltz in our case, it's just not that easy to do that if you're not a computational person. And I think part of the goal here is also that we obviously continue to build the interface for computational folks, but that the models are also accessible to a larger, broader audience. And that comes from good interfaces and things like that.

Gabriel [00:41:27]: I think one really interesting thing about Boltz is that with the release of it, you didn't just release a model, you created a community. Yeah. That community grew very quickly. Did that surprise you? And what has the evolution of that community been, and how has it fed into Boltz?

RJ [00:41:43]: If you look at its growth, it's very much like when we release a new model, there's a big, big jump. But yeah, it's been great. We have a Slack community that has thousands of people on it. And it's actually self-sustaining now, which is the really nice part, because it's almost overwhelming to try to answer everyone's questions and help. It's really difficult, you know, the few people that we were. But it ended up that people would answer each other's questions and help one another. And so the Slack has been kind of self-sustaining, and that's been really cool to see.

RJ [00:42:21]: And that's the Slack part, but then also obviously on GitHub we've had a nice community. I think we also aspire to be even more active on it than we've been in the past six months, which has been a bit challenging for us. But.
Yeah, the community has been really great, and there are a lot of papers that have come out with new evolutions on top of Boltz. It surprised us to some degree, because there are a lot of models out there, and people converging on ours was really cool. And I think it speaks to the importance of, when you put code out, putting a lot of emphasis on making it as easy to use as possible, which is something we thought a lot about when we released the code base. You know, it's far from perfect, but...

Brandon [00:43:07]: Do you think that was one of the factors that caused your community to grow, just the focus on being easy to use, making it accessible? I think so.

RJ [00:43:14]: Yeah. And we've heard it from a few people over the years now. And some people still think it should be a lot nicer, and they're right. But yeah, I think it was, at the time, maybe a little bit easier than other things.

Gabriel [00:43:29]: The other part that I think led to the community, and to some extent the trust in what we put out, is the fact that it's not really been just one model. Maybe we'll talk about it, but after Boltz-1 there were maybe another couple of models released, or open sourced, soon after. We continued that open source journey with Boltz-2, where we are not only improving structure prediction, but also starting to do affinity prediction: understanding the strength of the interactions between these different molecules, which is this critical property that you often want to optimize in discovery programs.
And then, more recently, also a protein design model. And so we've been building this suite of models that come together and interact with one another, where there is almost an expectation, something we take very much to heart, that across the entire suite of different tasks we have the best, or among the best, models out there, so that our open source tools can be the go-to models for everybody in the industry. I really want to talk about Boltz-2, but before that, one last question in this direction: was there anything about the community that surprised you? Was someone doing something where you thought, why would you do that, that's crazy? Or, that's actually genius, I never would have thought of that?

RJ [00:45:01]: I mean, we've had many contributions. I think some of the interesting ones... we had this one individual who wrote a complex GPU kernel for part of the architecture. The funny thing is that piece of the architecture had been there since AlphaFold2, and I don't know why it took Boltz for this person to decide to do it, but that was a really great contribution. We've had a bunch of others, people figuring out ways to hack the model to do things like cyclic peptides. I don't know if there are any other interesting ones that come to mind.

Gabriel [00:45:41]: One cool one, and this was initially proposed as a message in the Slack channel by Tim O'Donnell: there are some cases, for example the antibody-antigen interactions we discussed, where the models don't necessarily get the right answer.
What he noticed is that the models were somewhat stuck in how they predicted the antibodies. And in this model you can condition; you can give hints. So he basically gave hints to the model: okay, you should bind to this residue, you should bind to the first residue, or the 11th residue, or the 21st residue, basically every 10 residues, scanning the entire antigen.

Brandon [00:46:33]: Residues are the...

Gabriel [00:46:34]: The amino acids. The amino acids, yeah. So the first amino acid, the 11th amino acid, and so on. So it's like doing a scan, conditioning the model to predict all of them, then looking at the confidence of the model in each of those cases and taking the top. It's a very crude way of doing inference-time search. But surprisingly, for antibody-antigen prediction, it actually helped quite a bit. And so there are some interesting ideas where, obviously, as the developer of the model, you say, wow, why would the model be so dumb? But it's very interesting, and it leads you to start thinking about, okay, can I do this not with brute force, but in a smarter way?

RJ [00:47:22]: And so we've also done a lot of work in that direction. And that speaks to the power of scoring. We're seeing that a lot; I'm sure we'll talk about it more when we talk about BoltzGen. But our ability to take a structure and determine that that structure is good, you know, somewhat accurate, whether that's a single chain or an interaction, is a really powerful way of improving the models.
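Tim O'Donnell's residue-scanning trick, hint the model at every 10th antigen residue and keep the most confident prediction, can be sketched as a loop. Here `predict` is a placeholder for a conditioned structure-prediction call, not Boltz's actual API.

```python
def epitope_scan(predict, antigen_len, stride=10):
    """Run a conditioned prediction for every `stride`-th residue of the
    antigen and keep whichever hint the model is most confident about.

    predict: callable(hint_residue) -> (structure, confidence),
             standing in for a conditioned prediction call.
    """
    best = None
    for residue in range(0, antigen_len, stride):  # residues 0, 10, 20, ...
        structure, confidence = predict(residue)
        if best is None or confidence > best[2]:
            best = (residue, structure, confidence)
    return best  # (hint residue, structure, confidence)
```

This is exactly the "crude inference-time search" described above: brute-force the conditioning, then let the confidence score do the ranking.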
Like, if you can sample a ton, and you assume that if you sample enough you're likely to have the good structure in there, then it really just becomes a ranking problem. And part of the inference-time scaling Gabri was talking about is very much that: the more we sample, the more the ranking model ends up finding something it really likes. So I think our ability to get better at ranking is also what's going to enable the next big breakthroughs. Interesting.

Brandon [00:48:17]: But I guess, my understanding is there's a diffusion model, and you generate some stuff, and then, I guess it's just what you said, right? Then you rank it using a score, and then you finally... So can you talk about those different parts? Yeah.

Gabriel [00:48:34]: So, first of all, one of the critical beliefs we had when we started working on Boltz-1 was that structure prediction models are somewhat our field's version of foundation models, learning how proteins and other molecules interact. And then we can leverage that learning to do all sorts of other things. With Boltz-2, we leveraged that learning to do affinity prediction: understanding, if I give you this protein and this molecule, how tight is that interaction? For BoltzGen, what we did was take that foundation model and fine-tune it to generate entire new proteins. And the way that works is that, for the protein you're designing, instead of feeding in an actual sequence, you feed in a set of blank tokens, and you train the model to predict both the structure of that protein.
And also what the different amino acids of that protein are. So basically the way BoltzGen operates is that you feed in a target protein that you may want to bind to, or DNA, RNA. And then you feed in the high-level design specification of what you want your new protein to be. For example, it could be an antibody with a particular framework, it could be a peptide, it could be many other things. And that's with natural language, or? That's basically prompting; we have this spec that you specify. You feed the spec to the model, and the model translates it into a set of tokens, a set of conditioning for the model, a set of blank tokens. And then, as part of the diffusion process, it decodes a new structure and a new sequence for your protein. And then we take that, and as Jeremy was saying, we try to score it: how good of a binder is it to that original target?

Brandon [00:50:51]: You're using basically Boltz to predict the folding and the affinity to that molecule. And then that kind of gives you a score? Exactly.

Gabriel [00:51:03]: So you use this model to predict the folding, and then you do two things. One is that you predict the structure with something like Boltz-2, and then you compare that structure with what the design model predicted. This is what the field calls consistency: you want to make sure that the structure you're predicting is actually what you're trying to design. And that gives you much better confidence that it's a good design. So that's the first filtering.
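The consistency check described here, refold the designed sequence with a separate predictor and compare against the structure the design model intended, reduces to a filter. The `refold` callable, the RMSD function, and the 2 Å threshold below are illustrative placeholders, not the released pipeline.

```python
def consistency_filter(designs, refold, rmsd, max_rmsd=2.0):
    """Keep designs whose refolded structure matches the intended one.

    designs: list of (sequence, intended_structure) pairs
    refold:  callable(sequence) -> predicted structure
             (standing in for a Boltz-2-style structure predictor)
    rmsd:    callable(a, b) -> structural deviation between two structures
    """
    return [
        (seq, intended)
        for seq, intended in designs
        if rmsd(refold(seq), intended) <= max_rmsd
    ]
```

The point of the check is that the design model and the refolding model must independently agree before a candidate survives to the next stage.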
And the second filtering that we did as part of the BoltzGen pipeline that was released is that we look at the confidence the model has in the structure. Now, unfortunately, going back to your question about predicting affinity, confidence is not a very good predictor of affinity. And so one of the things where we've actually made a ton of progress since we released Boltz-2, and we have some new results that we're going to announce soon, is the ability to get much better hit rates when, instead of relying on the confidence of the model, we directly predict the affinity of that interaction.

Brandon [00:52:03]: Okay. Just backing up a minute. So your diffusion model actually predicts not only the protein sequence, but also the folding of it?

Gabriel [00:52:32]: Exactly. And actually, one of the big things we did differently compared to other models in the space, and there were some papers that had already done this before, but we really scaled it up, was basically merging structure prediction and sequence prediction into almost the same task. So the way BoltzGen works is that the only thing you're doing is predicting the structure. The only supervision we give is supervision on the structure, but because the structure is atomic, and the different amino acids have different atomic compositions, from the way the model places the atoms we recover not only the structure it wanted but also the identity of the amino acid the model believed was there. And so instead of having these two supervision signals, one discrete, one continuous, that somewhat don't interact well together...
We built an encoding of sequences in structures that allows us to use exactly the same supervision signal we were using for Boltz-2, largely similar to what AlphaFold3 proposed, which is very scalable. And we can use that to design new proteins. Oh, interesting.

RJ [00:53:58]: Maybe a quick shout-out to Hannes Stark on our team, who did all this work. Yeah.

Gabriel [00:54:04]: Yeah, that was a really cool idea. I mean, looking at the paper, there's this encoding where you just add a bunch of atoms, which can be anything, and then they get rearranged and basically plopped on top of each other, and that encodes what the amino acid is. And there's a unique way of doing this. That was such a cool, fun idea.

RJ [00:54:29]: I think that idea had existed before. Yeah, there were a couple of papers.

Gabriel [00:54:33]: Yeah, a couple of papers had proposed this, and Hannes really took it to the large scale.

Brandon [00:54:39]: A lot of the BoltzGen paper is dedicated to the validation of the model. In my opinion, all the people we talk to basically feel that wet-lab validation, or whatever the appropriate real-world validation is, is the whole problem, or not the whole problem, but a big giant part of it. So can you talk a little bit about the highlights from there? Because to me, the results are impressive, both from the perspective of the model and also just the effort that went into the validation by a large team.

Gabriel [00:55:18]: First of all, I should start by saying that both when we were at MIT, in Tommi Jaakkola and Regina Barzilay's lab, as well as at Boltz, we are not a biolab, and we are not a therapeutics company.
And so to some extent we were forced to look outside of our group, our team, to do the experimental validation. One of the things that Hannes and the team pioneered was the idea: okay, can we go not just to one specific group, find one specific system, maybe overfit a bit to that system, and try to validate, but instead test this model across a very wide variety of settings? Protein design is such a wide task, with all sorts of different applications, from therapeutics to biosensors and many others. So can we get a validation that goes across many different tasks? He basically put together something like 25 different academic and industry labs that committed to testing some of the designs from the model, with some of this testing still ongoing, and to giving results back to us in exchange for hopefully getting some great new sequences for their task. He was able to coordinate this very wide set of scientists, and already in the paper we shared results from, I think, eight to ten different labs: results from designing peptides targeting ordered proteins, peptides targeting disordered proteins, results of designing proteins that bind to small molecules, and results of designing nanobodies, across a wide variety of different targets. And so that gave the paper a lot of validation for the model, validation that was wide.

Brandon [00:57:39]: And so would those be therapeutics for those animals, or are they relevant to humans as well?
They're relevant to humans as well.

Gabriel [00:57:45]: Obviously, you need to do some work in, quote unquote, humanizing them, making sure they have the right characteristics so they're not toxic to humans, and so on.

RJ [00:57:57]: There are some approved medicines on the market that are nanobodies. There's a general pattern, I think, in trying to design things that are smaller: they're easier to manufacture. At the same time, that comes with potentially other challenges, maybe a little bit less selectivity than something that has more hands, you know. But there's this big desire to design mini proteins, nanobodies, small peptides, which are just great drug modalities.

Brandon [00:58:27]: Okay. I think where we left off, we were talking about validation in the lab. And I was very excited about seeing all the diverse validations that you've done. Can you go into some more detail about specific ones? Yeah.

RJ [00:58:43]: The nanobody one, I think we did, what was it, 15 targets? Is that correct? 14. 14 targets. So typically the way this works is we make a lot of designs, on the order of tens of thousands, and then we rank them and pick the top. In this case, n was 15 for each target, and then we measure the success rates: both how many targets we were able to get a binder for, and also, more generally, out of all of the binders we designed, how many actually proved to be good binders. Some of the other ones, yeah, we had a cool one where there was a small molecule and we designed a protein that binds to it. That has a lot of interesting applications, for example, like Gabri mentioned, biosensing and things like that, which is pretty cool.
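The design-rank-test loop RJ outlines, sample tens of thousands of candidates, keep the top n per target, then count experimental hits, reduces to a couple of list operations. The scoring function and hit criterion below are invented for illustration; they stand in for the model's ranking score and the wet-lab binding readout.

```python
def select_top_designs(candidates_by_target, score, n=15):
    """For each target, rank candidates by score and keep the top n."""
    return {
        target: sorted(cands, key=score, reverse=True)[:n]
        for target, cands in candidates_by_target.items()
    }

def hit_rates(tested, is_hit):
    """Per-target hit rate, plus the fraction of targets with >= 1 hit.

    tested: {target: [tested designs]}; is_hit: callable(design) -> bool,
    standing in for the experimental binding measurement.
    """
    per_target = {
        t: sum(map(is_hit, designs)) / len(designs)
        for t, designs in tested.items()
    }
    frac_targets_with_binder = (
        sum(r > 0 for r in per_target.values()) / len(per_target)
    )
    return per_target, frac_targets_with_binder
```

Both numbers RJ mentions fall out of `hit_rates`: the per-target success rate and how many targets yielded at least one binder.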
We had a disordered protein one too, I think you mentioned. And yeah, I think some of those were some of the highlights. Yeah.

Gabriel [00:59:44]: So I would say the way we structured some of those validations was, on the one hand, we had validations across a whole set of different problems that the biologists we were working with came to us with. For example, in some of the experiments we were trying to design peptides that would target RACC, which is a target involved in metabolism. And we had a number of other applications where we were trying to design peptides or other modalities against other therapeutically relevant targets. We designed some proteins to bind small molecules. And then some of the other testing we did was really trying to get a broader sense: how does the model work, especially when tested for generalization? One of the things we found with the field was that a lot of the validation, especially outside of the validation on specific problems, was done on targets that have a lot of known interactions in the training data. And so it's always a bit hard to understand how much these models are really just regurgitating or imitating what they've seen in the training data, versus really being able to design new proteins. So one of the experiments we did was to take nine targets from the PDB, filtering to things where there is no known interaction in the PDB. Basically, the model has never seen this particular protein, or a similar protein, bound to another protein. So there is no way the model, from its training set, can say, okay, I'm just going to tweak something and imitate this particular interaction. And so we took those nine proteins.
We worked with Adaptyv, a CRO, and basically tested 15 mini proteins and 15 nanobodies against each one of them. And the very cool thing we saw was that on two thirds of those targets, we were able, from those 15 designs, to get nanomolar binders. Nanomolar is, roughly speaking, just a measure of how strong the interaction is; roughly speaking, a nanomolar binder is approximately the binding strength you need for a therapeutic. Yeah. So maybe switching directions a bit. Boltz Lab was just announced this week, or was it last week? Yeah. This is, I guess, your first product, if you want to call it that. Can you talk about what Boltz Lab is, and what you hope people take away from it? Yeah.

RJ [01:02:44]: As we mentioned at the very beginning, the goal with the product has been to address what the models don't do on their own. And there are largely two categories there; actually, I'll split it into three. The first one: it's one thing to predict a single interaction, for example a single structure. It's another to very effectively search a design space to produce something of value. What we found building this product is that there are a lot of steps involved, and there's certainly a need to accompany the user through them. One of those steps, for example, is the creation of the target itself: how do we make sure the model has a good enough understanding of the target, so we can design something against it? And there are all sorts of tricks you can do to improve a particular structure prediction. So that's the first stage. And then there's the stage of designing and searching the space efficiently.
For something like BoltzGen, for example, you design many things and then you rank them. For small molecules, the process is a little more complicated: we also need to make sure the molecules are synthesizable. The way we do that is that we have a generative model that learns to use appropriate building blocks, such that it designs within a space we know is synthesizable. So there's this whole pipeline of different models involved in being able to design a molecule. And that's been the first thing. We call them agents: we have a protein design agent and a small molecule design agent. And that's really at the core of what powers the Boltz Lab platform.

Brandon [01:04:22]: So these agents, are they like a language model wrapper, or are they just your models and you're just calling them agents? The latter, yeah. Because they sort of perform a function on your behalf.

RJ [01:04:33]: They're more of a recipe, if you wish. And I think we use that term because of the complex pipelining and automation that goes into all this plumbing. So that's the first part of the product. The second part is the infrastructure. We need to be able to do this at very large scale for any one group that's doing a design campaign. Let's say you're designing, say, a hundred thousand possible candidates to find the good one. That is a very large amount of compute: for small molecules it's on the order of a few seconds per design, and for proteins it can be a bit longer. And so ideally you want to do that in parallel, otherwise it's going to take you weeks.
So we've put a lot of effort into our ability to run a GPU fleet that lets any one user do that kind of large parallel search.

Brandon [01:05:23]: So you're amortizing the cost over your users.

RJ [01:05:27]: Exactly. And to some degree, whether you use 10,000 GPUs for a minute or one GPU for a very long time, the cost is the same, so you might as well parallelize if you can. A lot of work has gone into that, making it robust enough that a lot of people can be on the platform doing that at the same time. The third part is the interface, and the interface comes in two shapes. One is an API, which is really suited for companies that want to integrate these pipelines, these agents.

RJ [01:06:01]: So we're already partnering with a few distributors that are going to integrate our API. The second part is the user interface, and we've put a lot of thought into that too; this is what I meant earlier about broadening the audience. We've built a lot of interesting features into it, for example for collaboration: when you potentially have multiple medicinal chemists going through the results and trying to pick out which molecules to go and test in the lab, it's powerful for them to each provide their own ranking and then do consensus building. So there are a lot of features around launching these large jobs, but also around collaborating on analyzing the results, that we try to solve with that part of the platform.
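The consensus building RJ mentions, where each chemist submits a ranking and the platform merges them, can be done with a simple Borda count. This is one standard rank-aggregation method, not necessarily what Boltz Lab uses; the molecule IDs are made up.

```python
from collections import defaultdict

def borda_consensus(rankings: list[list[str]]) -> list[str]:
    # Each reviewer submits an ordered list of molecule IDs (best first).
    # Borda count: a molecule ranked r-th out of n earns n - r points;
    # the consensus order sorts by total points across all reviewers.
    points: dict[str, int] = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for r, mol in enumerate(ranking):
            points[mol] += n - r
    return sorted(points, key=lambda m: points[m], reverse=True)

# Three reviewers rank four candidate molecules:
chemist_votes = [
    ["mol-7", "mol-2", "mol-9", "mol-4"],
    ["mol-2", "mol-7", "mol-4", "mol-9"],
    ["mol-7", "mol-9", "mol-2", "mol-4"],
]
print(borda_consensus(chemist_votes))  # mol-7 first: ranked #1 by two of three reviewers
```

Borda count is robust to one reviewer's outlier opinion, which is the point of consensus review before committing lab budget to a candidate.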
So Boltz Lab is a combination of these three objectives in one cohesive platform. Who is this accessible to? Everyone. You do need to request access today; we're still ramping up usage, but anyone can request access. If you're an academic in particular, we provide a fair amount of free credit so you can play with the platform. If you're a startup or biotech, you can also reach out, and we'll typically hop on a call just to understand what you're trying to do, and also provide a lot of free credit to get started. And with larger companies, we can deploy the platform in a more secure environment; those are more customized deals that we make with partners. That's the ethos of Boltz: this idea of serving everyone, not necessarily going after just the really large enterprises. It starts with the open source, but it's also a key design principle of the product itself.

Gabriel [01:07:48]: One thing I was thinking about with regard to infrastructure: in the LLM space, the cost of a token has gone down by a factor of a thousand or so over the last three years, right? Is it possible to exploit economies of scale in infrastructure, so that it's cheaper to run these things on your platform than for anyone to roll their own system?

RJ [01:08:08]: A hundred percent. I mean, we're already there. Running Boltz on our platform, especially for a large screen, is considerably cheaper than it would cost anyone to take the open-source model and run it themselves. And on top of the infrastructure, one of the things we've been working on is accelerating the models.
Our small-molecule screening pipeline is 10x faster on Boltz Lab than in the open source, and that's also part of building a product that scales really well. We really wanted to get to a point where we could keep prices low enough that using Boltz through our platform is a no-brainer.

Gabriel [01:08:52]: How do you think about validation of your agentic systems? Because, as you were saying earlier, AlphaFold-style models are really good at, let's say, monomeric proteins where you have co-evolution data. But now the whole point of this is to design something that doesn't have co-evolution data, something really novel. So you're basically leaving the domain that you know you're good at. How do you validate that?

RJ [01:09:22]: Yeah, there's obviously a ton of computational metrics that we rely on, but those only take you so far. You really have to go to the lab and test: with method A and method B, how much better are we? How much better is my hit rate? How much stronger are my binders? It's not just about hit rate; it's also about how good the binders are. There's really no way around that. We've really ramped up the amount of experimental validation that we do, so that we track progress as scientifically soundly as possible.

Gabriel [01:10:00]: Yeah. One thing that's unique about us, and maybe companies like us, is that we're not working on just a couple of therapeutic pipelines where our validation would be focused on those.
When we do an experimental validation, we try to test across tens of targets, so that we get a much more statistically significant result, and that really allows us to make progress on the methodological side without being steered by overfitting on any one particular system. And of course we choose, you know, w
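The point about testing across tens of targets is about statistical power: a hit rate estimated from a handful of targets has a very wide confidence interval. A minimal sketch using a Wilson score interval, with illustrative counts rather than real validation data:

```python
import math

def hit_rate_ci(hits: int, trials: int, z: float = 1.96) -> tuple[float, float]:
    # Wilson score interval for a binomial proportion: better behaved
    # than the naive normal interval when counts are small.
    p = hits / trials
    denom = 1 + z * z / trials
    centre = (p + z * z / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials + z * z / (4 * trials * trials))
    return centre - half, centre + half

# Same ~67% hit rate, very different certainty:
lo_small, hi_small = hit_rate_ci(hits=2, trials=3)    # 3 targets
lo_big, hi_big = hit_rate_ci(hits=40, trials=60)      # 60 targets
print(round(hi_small - lo_small, 2), round(hi_big - lo_big, 2))
```

With 3 targets the interval spans most of [0, 1]; with 60 it narrows enough to actually distinguish method A from method B.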

DACOM Digital
MiCA Masters: Inside a CASP Compliance Journey

Jan 17, 2026 · 55:59


Coinmotion's CCO Jani Ultamo discusses MiCA licensing, DORA requirements, Travel Rule challenges, and market abuse obligations. Jani shares practical tips for compliance officers, insights from Finland's fast-track licensing process, and why regulatory clarity is everything for the future of crypto in Europe.

California Tree Nut Report
The CASP Program Is Good for the Almond Industry

Dec 24, 2025


TuneFM
Words, Music, Time, and Place

Dec 11, 2025 · 25:07


UNE lecturers Dr Alana Blackburn and Associate Professor Sarah Lawrence are two of four performers set to present a new composition this weekend. Words, Music, Time, and Place takes the works of four local writers and wordsmiths and sets them to music composed for those works by local composer Steve Thorneycroft. The production and creation of this new musical work is supported by CASP and Regional Arts NSW. We caught up with Dr Alana Blackburn and A/Prof Sarah Lawrence to talk about their work, creative process, and experience with this performance. Support the show: https://buymeacoffee.com/tunefm See omnystudio.com/listener for privacy information.

Rebuilding Arizona Civics
How Participatory Budgeting Builds Civic Power In Arizona Schools

Dec 4, 2025 · 53:34


Hand students a real budget and a ballot, and watch a campus transform. We sit down with Tara Bartlett (ASU), KaRa Lyn Thrasher, and Sabrina Estrada (Center for the Future of Arizona) to unpack how school participatory budgeting turns student voice into visible change—without adding noise or partisanship. From the first Arizona pilot to 80+ schools statewide, the story is clear: when students lead, engagement grows, trust deepens, and communities benefit.We break down the complete PB cycle in plain language: forming an inclusive student steering committee, collecting ideas from the whole school, vetting costs and feasibility, building a transparent ballot, campaigning with civil discourse, and running a real vote day complete with booths and “I Voted” stickers. You'll hear vivid examples—water bottle refill stations and AEDs that solved urgent needs, therapy dogs that scaled district-wide, and a Watho shade structure built with tribal partners—that showcase how culture shifts when young people drive decisions.Beyond inspiring stories, we dig into outcomes you can measure. Using a CASP framework—civic knowledge, attitudes, skills, and practices—students report stronger public speaking, teamwork, project management, empathy, and confidence to act. We address common hurdles like educator time, funding myths, and adultism, and share practical solutions: integrate PB into coursework, set aside a budget slice, recruit “not the usual suspects,” and use bite-sized trainings and resource hubs to make facilitation easier.Curious to bring PB to your district or classroom? Explore the toolkit, try the short training videos, and start with a student-led committee and a real line item. 
If this conversation resonates, follow the show, share it with a colleague, and leave a review telling us what your students would put on the ballot. Check it out: https://www.arizonafuture.org/programs/education-programs/school-participatory-budgeting-in-arizona/ | The Arizona Constitution Project | Check out our free lessons on Arizona history and government! Follow us on: Twitter, LinkedIn, Instagram, Facebook, YouTube, Website. Interested in a Master's degree? Check out the School of Civic and Economic Leadership's Master's in Classical Liberal Education and Leadership.

Lave Radio: an Elite Dangerous podcast
Lave Radio Episode 560 - Casp In Front Of Things

Dec 2, 2025 · 127:16


The Caspian Explorer is out! We've had a good chance to fly it and have thoughts! Also, Ben blows and Colin plays with himself.
Development News: Caspian Explorer Update Patch Notes – https://www.elitedangerous.com/update-notes/4-3-0-0
Community Corner: Update on sniped systems returned to rightful owners – https://forums.frontier.co.uk/threads/a-number-of-previously-claimed-systems-were-made-available-again.643514/
“The Long Break (inspired by CMDR Sulu's Story)” by The High Wake – https://www.youtube.com/watch?v=_Ayb3CKhLjs
“This is EDGIS – The Fast Elite Dangerous Explorer Tool” by Elite Dangereuse – https://youtu.be/JKNaHRLdb6s
“Privateer – Elite: Dangerous (Flight Assist Off)” by Aitolu – https://youtu.be/MAmoB8di2VE
Stack Up Charity Stream – https://tiltify.com/+2025-aggressively-helpful/2025-aggressively-helpful

Government Of Saint Lucia
Equity Expands CASP Nationwide, With All 17 Districts To Benefit

Oct 22, 2025 · 3:54


Since its introduction in 2009, the Community After School Programme (CASP) has grown from just three pilot communities to a comprehensive national initiative. Now fully present in every district, this expansion marks a historic milestone in strengthening social inclusion and ensuring that children in every community have access to structured after-school learning and mentorship, along with academic, creative arts, music, theatre, sports, and life-skills support. In addition to the nationwide rollout, CASP centres have been relocated to new venues in Anse la Raye and Micoud to better serve the needs of those communities. The programme has also welcomed Bocage as the newest addition, bringing much-needed support to children and families in the area.

The PodCASP
The Ethics of Artificial Intelligence Use in ABA

Oct 18, 2025 · 32:33


Artificial intelligence is rapidly evolving, and it's increasingly being integrated into the administrative and clinical activities involved in delivering Applied Behavior Analysis. However, AI use has outpaced the development of laws, regulations, and guidelines intended to safeguard its use in healthcare. Rebecca Womack joins the show to discuss CASP's "Practice Parameters for the Ethical Use of Artificial Intelligence in ABA," which provide guidance on payer, regulatory, and ethical matters as well as organizational oversight of AI. Click here to read CASP's AI guidelines. This episode of the PodCASP is sponsored by Apploi.

DACOM Digital
Compliance Champions: South Africa Regulatory Framework

Oct 6, 2025 · 47:13


Diketso Mashigo of South Africa's FSCA joins Compliance Champions to break down their landmark crypto licensing regime: 470+ CASP applications, 279 licenses granted, and a clear regulatory framework under the FAIS Act.

Big Al & JoJo
09-15-25 Doc Aaron Casp with KOA Sports

Sep 15, 2025 · 3:19


Don't Change Much
From Breaking Point to Brotherhood in the Trades: Men Confront Suicide, Survival & Hope

Sep 10, 2025 · 41:15


For many men in the trades, work is more than a job; it’s an identity. But the weight of long hours, money stress, and pressure to “be strong” can become overwhelming, with devastating consequences. In this raw and deeply human episode, host Buzz Bishop sits down with Trevor Botkin (The FORGE), Andrew Perez (Canadian Association for Suicide Prevention), and two men, Mitch Orton and Rob Saloman, each of whom lost a brother or son who worked in the trades. Together, they break the silence around pressure, identity, and the weight of “being the tough guy.” What unfolds is a conversation about survival, brotherhood, and change. From lived experience to national advocacy, each voice shows that while the crisis is real, so is the possibility of hope. You’ll come away with:

California Tree Nut Report
CASP Program in Almonds Gathers Great Data

Sep 8, 2025


The Commercial Break
The Alligator Alley 500

Jul 30, 2025 · 66:00


TCB Merch Drop Happens August 8th, 2025: www.shopTCBpocast.com EP801: Bryan & Krissy are back from vacation. Krissy enjoyed some time off with her husband, relaxing by the pool. Bryan spent his week taking kids to urgent care and dodging wannabe NASCAR drivers on Alligator Alley in south Florida. Plus, Terry Bollea is dead. The Hulk has long been gone! Ozzy was the soft, satan-loving rockstar we all needed, and Hooped Earring passed?? Ok... Then, listener texts are discussed and merch lines are dropped! TCBits: A new CASP director is making (flat) waves! Watch EP #803 on YouTube! Text us or leave us a voicemail: +1 (212) 433-3TCB FOLLOW US: Instagram: @thecommercialbreak Youtube: youtube.com/thecommercialbreak TikTok: @tcbpodcast Website: www.tcbpodcast.com CREDITS: Hosts: Bryan Green & Krissy Hoadley Executive Producer: Bryan Green Producer: Astrid B. Green Voice Over: Rachel McGrath TCBits & TCB Tunes: Written, Voiced and Produced by Bryan Green. Rights Reserved. To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.


Noticentro
CDMX calls for participation in the Official Children's Soccer Cup

Jun 27, 2025 · 1:12


They demand that the case of Marco Antonio Suástegui be taken up by the FGR. In Cuajimalpa, a Russian woman is arrested for threatening her partner with a machete. In Argentina, teachers and students demonstrate in defense of the public university. More information in our podcast.

Nobel Prize Conversations
John Jumper: Nobel Prize Conversations

Jun 18, 2025 · 44:21


”I really love the notion of contributing something to physics.” Chemistry laureate John Jumper has always been passionate about science and understanding the world. With the AI tool AlphaFold, he and his co-laureate Demis Hassabis have made it possible to predict protein structures. In this podcast conversation, Jumper speaks about the excitement of seeing how AI can help us even more in the future. Jumper also shares his scientific journey and how he ended up working on AlphaFold. He describes a special memory from the 2018 CASP conference, where AlphaFold was presented for the first time. Another life-changing moment was the announcement of the Nobel Prize in Chemistry in October 2024; Jumper tells us how his life has changed since then. Through their lives and work, failures and successes, get to know the individuals who have been awarded the Nobel Prize on the Nobel Prize Conversations podcast. Find it on Acast, or wherever you listen to pods. https://linktr.ee/NobelPrizeConversations © Nobel Prize Outreach. Hosted on Acast. See acast.com/privacy for more information.

DACOM Digital
MiCA Masters: Technology and Compliance first

Jun 4, 2025 · 44:45


Patrick Aarikka of Kvarn Capital unpacks the compliance challenges of launching a meme coin under Title II, applying for a CASP license in Finland, and embedding compliance in a technology-first crypto startup. A must-listen for anyone facing MiCA head-on.

Grey Dynamics
Grey Dynamics Presents the OpSec Podcast: A Guide By Former USIC Cyber Contractor

May 30, 2025 · 67:14


Welcome back to Grey Dynamics. Today we are thrilled to announce the OpSec Podcast, a project from our cyber intelligence and operational security expert, which will be produced and edited in-house every couple of weeks. Allen, the show host, is a seasoned intelligence and defence professional with over twenty years of experience, including military service, government contracting, and the private sector, specialising in Intelligence, Surveillance, and Reconnaissance (ISR) collection operations. Allen holds a Master of Science in Cybersecurity and top-tier certifications including CISSP and CASP+. His career spans global assignments leading multinational teams and supporting mission-critical programs for the United States military and allied partners. Currently, he serves as a GEOINT advisor for the United States government and an OPSEC specialist on the Grey Dynamics team. Find Allen: LinkedIn Profile | OpSec Podcast | Intel Reports. Related links: Grey Dynamics Intelligence Capability Development and Training | Grey Dynamics Operational Support | Grey Dynamics Open Source Intelligence Services | Grey Dynamics Case Studies | Grey Dynamics Story. Advance your intelligence career today! We are the first fully online intelligence school helping professionals achieve their long-term goals. Our school, with tons of new material, is currently under construction and will be out there very soon. The Grey Dynamics Podcast is available on all major platforms! YouTube | Spotify | Apple Podcast | Google Podcast | Amazon Podcast. Hosted on Acast. See acast.com/privacy for more information.

The PodCASP
Why the CASP Conference is a Cut Above

Apr 30, 2025 · 26:35


In STEPPS Executive Director and CASP Board Member Yvonne Bruinsma joins the PodCASP to discuss the 2025 CASP Conference. She explains how the conference facilitates strong, lasting relationships; this year's featured sessions on AI ethics and antitrust law; and what goes into putting on a conference of this scope.

Farm City Newsday by AgNet West
AgNet News Hour Thursday, 04-03-25

Apr 3, 2025 · 36:42


The Ag Net News Hour's Lorrie Boyer and Nick Papagni, “The AgMeter,” started the show by discussing the latest agriculture news, focusing on weather and drought concerns. California is experiencing better rainfall and snowpack levels, but faces water storage issues. The Purdue University/CME Group Ag Economy Barometer fell 12 points to 140, with 43% of farmers citing trade policy as their top concern, surpassing interest rates. The farm capital investment index dropped to 54, while farmland value expectations remained cautiously optimistic. The survey revealed a shift in priorities post-election, with trade policy becoming more important. The hosts debated the impact of tariffs on agriculture, emphasizing the need for a level playing field and the potential long-term benefits despite current uncertainties. In the next segment, Nick and Lorrie focused on the Trump administration's federal layoffs and a lawsuit by California Attorney General Bonta, joined by 20 attorneys general, challenging the mass terminations of federal probationary employees. The lawsuit, supported by a temporary restraining order, aims to reinstate employees from 18 federal agencies, including the U.S. Department of Agriculture. The conversation also touched on a proposed bill in Congress, the Honor Farmers Contracts Act, which seeks to unfreeze USDA funding and ensure farmers are reimbursed for contracts. The bill addresses the impact of frozen funding on farmers' investments, particularly in specialty crops. In today's Almond Board of California feature, ABC's Taylor Hillman had an interview on the California Almond Stewardship Platform (CASP) and its new incentive linking to the NRCS Conservation Stewardship Program (CSP). Michael Roots, Manager of Field Outreach and Education at the Almond Board of California, explained that CSP offers per-acre payments for soil health practices like cover crops and dust protection.
The new CASP report simplifies the application process by translating farm practices into NRCS codes. CASP also benefits growers with tools like irrigation and nitrogen calculators, and data sharing with handlers. The segment also touched on the importance of prunes in California, noting that nearly 100% of U.S. prunes are grown there.

Under The Abbey Stand
The Preview Show: Crawley (A)

Feb 28, 2025 · 41:31


Casp and Walker look ahead to a tricky trip to the creepy Crawlers. It's the hope that kills us, and we're riding that wave as United head down south for another must-win game. We're delighted to be sponsored by King Street Cellar, a unique independent wine, beer and spirits merchant in the centre of Cambridge. Use the code UTAS10 to get 10% off, online and in store: https://kingstreetcellar.co.uk/ Subscribe below to never miss a pod or post, and get in touch with the pod here: Socials: @AbbeyStandPod and Under The Abbey Stand. Thanks for reading Under The Abbey Stand! Subscribe for free to receive new posts and support our work. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.undertheabbeystand.com

DACOM Digital
MiCA Masters: Insights from one of the first MiCA Licensed CASP - OKX Europe

Feb 10, 2025 · 46:56


Erald Ghoos, CEO of OKX Europe, talks about how OKX became one of the first MiCA-licensed crypto firms. He shares the challenges, regulatory trends, and what's next for crypto compliance in Europe.

The Snake Pit With Rattlesnake Roy
Chad Plunket | The Snake Pit Episode 299

Jan 29, 2025 · 81:55


Chad Plunket is a sculptor and professor who oversees long-term planning and development, as well as day-to-day operations, at CASP.
Subscribe to Patreon: https://patreon.com/snakepitstudios?utm_medium=clipboard_copy&utm_source=copyLink&utm_campaign=creatorshare_creator&utm_content=join_link
Follow The Snake Pit: https://www.instagram.com/thesnakepitwrattlesnakeroy/ and https://www.facebook.com/thesnakepitwithrattlesnakeroy
Follow Lords of Film: https://www.instagram.com/lordzoffilm/
Follow The Big Pink PP Show: https://www.instagram.com/thebigpinkppshow/
Follow Breaking Hyman with Morgan and Friends: https://www.instagram.com/breakinghymanpod/

The Behavioral View
The Behavioral View Episode 4.12: The Future of Behavior Analysis: AI, Technology and Training with Expert Panel

Dec 18, 2024 · 54:11


This panel discussion examines critical considerations as artificial intelligence and automation technologies become increasingly integrated into behavior analytic practice. Expert panelists explore the implications of AI for clinical decision-making, documentation, training, and equitable service delivery while emphasizing the importance of maintaining human clinical judgment and ethical practice. The discussion highlights both opportunities and risks, providing behavior analysts with key considerations for evaluating and implementing AI tools in their practice while maintaining professional and ethical standards. To earn CEUs for listening, click here, log in or sign up, pay the CEU fee, + take the attendance verification to generate your certificate! Don't forget to subscribe and follow and leave us a rating and review. Show Notes References:   Council of Autism Service Providers. (2024, October 24). CASP announces workgroup to develop artificial intelligence (AI) guidelines [Press release].  Council of Autism Service Providers. (2023). Applied behavior analysis practice guidelines for the treatment of autism spectrum disorder: Guidance for healthcare funders, regulatory bodies, service providers, and consumers (Version 3.0).  Council of Autism Service Providers. (2022). Practice parameters for telehealth-based applied behavior analysis: Special considerations and recommendations for practitioners, funders, and regulators.  Resources:   Behavioral Health Center of Excellence (BHCOE) - www.bhcoe.org  CentralReach - www.centralreach.com  CARI by CentralReach - https://centralreach.com/blog/meet-cari-the-generative-ai-solution-from-the-leader-in-autism-and-idd-care-software/  Council of Autism Service Providers (CASP) - www.casproviders.org  TransformVXR - www.transformvxr.com 

The PodCASP
Revolutionizing ABA Documentation: The Power of CASP's New Session Note Templates with Rebecca O'Shea and Rebecca Womack

Nov 15, 2024 · 51:40


In this episode, Dr. Heather O'Shea, PhD., BCBA-D and Rebecca Womack, M.S., BCBA, LBA discuss CASP's new session note templates. They explain how the templates were developed, how they're designed to align with AMA requirements while supporting medical necessity, and clinicians' role in maintaining high standards. CASP has made the templates available for free, encouraging broad adoption to increase the quality of ABA services. For more information, members and providers are encouraged to join the Documentation Special Interest Group or visit CASP's website (links below).  CASP Session Note Templates For comments and inquiries about the templates, email templates@casproviders.org.  The PodCASP is proudly hosted by The Council of Autism Service Providers.

Empower LEP Collaborative Podcast
Ep 41 | Running a Growing LEP Practice | Business Updates with Jana Parker

Oct 11, 2024 · 19:33


In this solo episode of the Empower LEP Podcast, Jana shares an inside look at the big changes in both her practice and personal life over the past month. From moving into a brand-new office space to handling staffing transitions, Jana gets real about the rollercoaster of running a growing business. She also reflects on the importance of trusting the process, taking risks, and staying committed to a vision—no matter how scary the leap might seem.Jana shares the personal growth she's experienced, from working remotely while away from the office to feeling the support of her incredible team. She talks about how important it is to embrace the uncertainty that comes with entrepreneurship and to trust that everything will work out as long as you stay committed to your vision.This is a great listen if you're curious about what it takes to grow a private practice, or you simply want an honest look at the life of a busy entrepreneur. Plus, Jana gives a sneak peek into her upcoming presentations at the CASP conference and shares a heartfelt tribute to a colleague she recently lost.Tune in for a real and honest conversation about the highs and lows of business ownership, and get inspired to keep pushing forward in your own journey.And don't forget! Right now, Empower LEP is offering more than 25% off of LEP Practice Essentials until the 22nd. This is the ultimate course for LEPs looking to learn everything they need to establish their own private practice. Now's the perfect time to jump in and get the guidance you need to take your career to the next level.Tune in to Episode 41 on Spotify, Apple Podcasts, or YouTube. 
Be sure to subscribe, rate, and leave a review!
Connect with Empower LEP: https://empowerlep.com
Instagram: https://www.instagram.com/empowerlep
Facebook: https://www.facebook.com/EmpowerLEP/ and the Empower LEP Facebook Group https://www.facebook.com/groups/583676341308649
The website for this show is https://empowerleppodcast.com/
If you enjoyed this episode, please leave a five-star rating and review on your favorite podcast platform. Your support helps us continue to bring you more inspiring stories for LEPs and supporting professionals.

The PodCASP
Introducing Mariel Fernandez!

Jul 31, 2024 · 44:17


Mariel is CASP's newest employee, serving as VP of Government Affairs. Join the PodCASP team as we hear all about her experience, her pulse on the ABA field, and her goals for the future!

Brain Biohacking with Kayla Barnes
Optimize Your Posture with Ashley Williams, PT, DPT, ATP/SMS, CASp

Jul 25, 2024 · 36:31


Today I am speaking with Ashley Williams, PT, DPT, ATP/SMS, CASp, about all things posture, office optimization, and what we can do to optimize our posture for better health. You can save on Anthros with my link and code KAYLA! Ashley is a Doctor of Physical Therapy and a representative of Anthros. I have had an Anthros desk chair for years now, and I would never go back to a normal desk chair; this is the only biohacking chair on the market.
About Ashley: Ashley, a Doctor of Physical Therapy, seating specialist, and chair assessment specialist, has worked in the wheelchair seating industry for 15 years. She led a pediatric seating clinic for five years before taking her expertise to the wheelchair manufacturing industry, excelling in sales and training newer therapists in seating principles.
About Anthros: A team of wheelchair seating experts (clinicians, product designers, and marketing executives) heard the same comments over and over at industry trade shows from the physical and occupational therapists, seating technicians, gamers, and assistive technology professionals who take care of the most vulnerable sitting population, wheelchair users: “I wish my office chair felt like this!” and “What about the rest of the world?” It got them thinking: what about the millions of people sitting at desks all day, suffering with aches, discomfort, headaches, and pain? What about gamers grinding for hours on end, constantly shifting positions due to discomfort? Don't they deserve an evidence-based sitting solution? Learn more about Anthros

Empower LEP Collaborative Podcast
Ep 27 | Amy Merenda | Licensed Educational Psychologist | Empower LEP Podcast

Empower LEP Collaborative Podcast

Play Episode Listen Later Jul 5, 2024 49:54


Welcome to Episode 27 of the Empower LEP Podcast! This week, host Jana Parker sits down with Dr. Amy Merenda, a passionate Licensed Educational Psychologist and neurodiversity-affirming advocate. With six years of experience as a public school psychologist and now fully dedicated to her private practice, Amy shares her story of leaving the school districts to go all in on her mission to serve families in her community independently. You'll hear her enthusiasm for this transition as she talks about the life changes and red tape that motivated her to move to private practice ownership. Building connections and creating services that speak to the market, Amy is finding her footing and building a business she is proud of.

In this episode, Amy shares her passion for neurodiversity-affirming practices and her journey of promoting them, particularly in school psychology, where she discovered a surprising lack of guidance. Fueled by this mission, she developed trainings and authored a paper published in CASP, all aimed at empowering school psychologists to conduct assessments that uplift neurodivergent students. Her work is not just about advocacy but about making a tangible difference in the lives of these students, inspiring us all to do the same. Her dedication to supporting students and families shines through as she explains how her private practice operates on the belief that all brains are beautiful and that it's essential for LEPs to help their clients recognize their unique strengths. This is a must-listen for anyone interested in a positive approach and the messages of the neurodiversity-affirming movement. Join us for an inspiring conversation that will leave you motivated and ready to make a difference in the lives of all students.

Connect with Dr. Amy Merenda:
Website: www.amymerenda.com
Email: amyburnsmerenda@gmail.com
Instagram: https://www.instagram.com/amymerendalep
Facebook: https://www.facebook.com/amyjeanburns

Connect with Empower LEP:
https://empowerlep.com
Instagram: https://www.instagram.com/empowerlep
Facebook: https://www.facebook.com/EmpowerLEP/ and the Empower LEP Facebook Group https://www.facebook.com/groups/583676341308649
The website for this show is https://empowerleppodcast.com/

If you enjoyed this episode, please leave a five-star rating and review on your favorite podcast platform. Your support helps us reach more LEPs and supporting professionals who can learn and grow from our content.

The Snake Pit With Rattlesnake Roy
Morgan Kirkpatrick| The Snake Pit Episode 279

The Snake Pit With Rattlesnake Roy

Play Episode Listen Later Jun 13, 2024 68:08


Morgan Kirkpatrick is an advocate for public education, students, and teachers. She is currently running for the Texas State Board of Education for District 15. Come out June 22nd to The CASP Studio C for Snake Pit Live!
Follow Morgan Kirkpatrick on Instagram: https://www.instagram.com/morgankirkpatrickfortxsboe15/
Subscribe to Patreon: https://patreon.com/snakepitstudios?utm_medium=clipboard_copy&utm_source=copyLink&utm_campaign=creatorshare_creator&utm_content=join_link
Follow The Snake Pit:
https://www.instagram.com/thesnakepitwrattlesnakeroy/
https://www.facebook.com/thesnakepitwithrattlesnakeroy
Follow Lords of Film: https://www.instagram.com/lordzoffilm/

Halsey Mennonite Church
CASP Report - Levi Baer 6-9-2024

Halsey Mennonite Church

Play Episode Listen Later Jun 10, 2024 13:39


Almond Journey
Episode 61: Almond Stewardship with Alicia Rockwell

Almond Journey

Play Episode Listen Later May 23, 2024 20:56


Alicia Rockwell, vice chair of the Almond Board of California and chief government affairs officer at Blue Diamond Growers, joined the Journey to share her perspective on stewardship and advocacy for the almond industry. Rockwell discusses lessons from her work at Blue Diamond and Save Mart Supermarkets, the critical role California Almond Stewardship Program (CASP) self-assessments have played in reputation management, how almonds are leading the way in terms of stewardship, and the importance of engagement and advocacy on behalf of the industry.

"The reason that this industry was able to protect its reputation and get out from under the water criticisms was because of those growers that in those early stages, went and did the CASP self-assessment." - Alicia Rockwell

In today's episode:
Meet Alicia Rockwell, vice chair of the Almond Board of California Board of Directors and the chief government affairs officer at Blue Diamond Growers
Explore Rockwell's journey from the urban Bay Area to becoming an advocate for producers and consumers alike
Discover the thoughtful approach the almond industry has taken and continues to utilize to stay current with consumer trends and policy change

The Almond Journey Podcast is brought to you by the Almond Board of California. This show explores how growers, handlers, and other stakeholders are making things work in their operations to drive the almond industry forward. Host Tim Hammerich visits with leaders throughout the Central Valley of California and beyond who are finding innovative ways to improve their operations, connect with their communities, and advance the almond industry.

ABC recognizes the diverse makeup of the California almond industry and values contributions offered by its growers, handlers, and allied industry members. However, the opinions, services, and products discussed in existing and future podcast episodes are by no means an endorsement or recommendation from ABC.
The Almond Journey podcast is not an appropriate venue to express opinions on national, state, local or industry politics. As a Federal Marketing Order, the Almond Board of California is prohibited from lobbying or advocating on legislative issues, as well as setting field and market prices.

Catalunya al dia
Catalunya al dia, 13:00 to 14:00 - 10/05/2024

Catalunya al dia

Play Episode Listen Later May 10, 2024 60:00


MoneywebNOW
WeBuyCars listing: Any value here?

MoneywebNOW

Play Episode Listen Later Apr 4, 2024 20:32


Jimmy Moyaha, an independent analyst, shares insights on Invicta's acquisition of the Nationwide Bearing Company in the UK, and confirms the upcoming listing of WeBuyCars scheduled for next Thursday. Gaurav Nair, co-founder of Jaltech, talks about FSCA crypto asset service provider (CASP) licences, as Jaltech is one of the first to obtain one. Guy Krige from ESCROWSURE discusses new IT risk regulations for SA's financial sector in 2024.

MoneywebNOW
[TOP STORY] FSCA now issuing crypto asset service provider [CASP] licences

MoneywebNOW

Play Episode Listen Later Apr 4, 2024 5:53


‘In the past...there weren't any checks and balances on any of these companies providing crypto products or crypto services': Jaltech co-founder Gaurav Nair.

Bitcoin Italia Podcast
S06E12 - La legge di potenza

Bitcoin Italia Podcast

Play Episode Listen Later Mar 28, 2024 70:14


In the Bitcoin world there is a lot of discussion about the Power Law, a mathematical formula applied in many scientific fields that is now, for the first time, being applied to Bitcoin as well. The theory comes from Professor Giovanni Santostasi, astrophysicist and bitcoiner, and we invited him to the BIP SHOW to have him explain it to us. Also: the European Union is NOT banning p2p wallets, and we walk you through the new regulation; plus, Wladimir Van der Laan may return to developing Bitcoin after Faketoshi's legal defeat. It's showtime!

The PodCASP
Let's Talk About Rate Negotiation!

The PodCASP

Play Episode Listen Later Feb 26, 2024 28:42


Our very own Jonathan Mueller spills all of his rate negotiation secrets on this episode of The PodCASP. This was a good one, but of course we are biased! The PodCASP is proudly hosted by CASP.

The Nonlinear Library
LW - AI's impact on biology research: Part I, today by octopocta

The Nonlinear Library

Play Episode Listen Later Dec 27, 2023 6:35


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI's impact on biology research: Part I, today, published by octopocta on December 27, 2023 on LessWrong.

I'm a biology PhD, and have been working in tech for a number of years. I want to show why I believe that biological research is the most near-term, high-value application of machine learning. This has profound implications for human health, industrial development, and the fate of the world. In this article I explain the current discoveries that machine learning has enabled in biology. In the next article I will consider what this implies will happen in the near term without major improvements in AI, along with my speculations about how the expectations that underlie our regulatory and business norms will fail. Finally, my last article will examine the longer-term possibilities for machine learning and biology, including crazy but plausible sci-fi speculation.

TL;DR: Biology is complex, and the potential space of biological solutions to chemical, environmental, and other challenges is incredibly large. Biological research generates huge, well-labeled datasets at low cost. This is a perfect fit with current machine learning approaches. Humans without computational assistance have very limited ability to understand biological systems enough to simulate, manipulate, and generate them. However, machine learning is giving us tools to do all of the above. This means things that have been constrained by human limits, such as drug discovery or protein structure, are suddenly unconstrained, turning a paucity of results into a superabundance in one step.

Biology and data: Biological research has been using technology to collect vast datasets since the bioinformatics revolution of the 1990s. DNA sequencing costs have dropped by 6 orders of magnitude in 20 years ($100,000,000 per human genome to $1,000 per genome)[1]. Microarrays allowed researchers to measure changes in mRNA expression in response to different experimental conditions across the entire genome of many species. High-throughput cell sorting, robotic multi-well assays, proteomics chips, automated microscopy, and many more technologies generate petabytes of data. As a result, biologists have been using computational tools to analyze and manipulate big datasets for over 30 years. Labs create, use, and share programs. Grad students are quick to adopt open source software, and lead researchers have been investing in powerful computational resources. There is a strong culture of adopting new technology, and this extends to machine learning.

Leading machine learning experts want to solve biology: Computer researchers have long been interested in applying computational resources to solve biological problems. Hedge fund billionaire David E. Shaw intentionally started a hedge fund so that he could fund computational biology research[2]. Demis Hassabis, DeepMind founder, is a PhD neuroscientist. Under his leadership DeepMind has made biological research a major priority, spinning off Isomorphic Labs[3], focused on drug discovery. The Chan Zuckerberg Initiative is devoted to enabling computational research in biology and medicine to "cure, prevent, or manage all diseases by the end of this century"[4]. This shows that the highest level of machine learning research is being devoted to biological problems.

What have we discovered so far? In 2020, DeepMind showed accuracy equal to the best physical methods of protein structure measurement at the CASP 14 protein folding prediction contest with their AlphaFold2 program.[5] This result "solved the protein folding problem"[6] for the large majority of proteins, showing that they could generate a high-quality, biologically accurate 3D protein structure given the DNA sequence that encodes the protein. DeepMind then used AlphaFold2 to generate structures for all proteins kn...

The Quoc Khanh Show
Đầu tư Zero Trust có mang lại ROI hấp dẫn? | Philip Hùng Cao, Chiến lược gia | #TQKS 62

The Quoc Khanh Show

Play Episode Listen Later Dec 24, 2023 73:51


Today's guest is Philip Hùng Cao, a strategist and Zero Trust evangelist. Philip (#tekfarmer) is a strategic planner who previously served as a solutions architect at Palo Alto Networks, and is the founder and advisor of the Cloud Security Alliance, Vietnam (#CSAVietnam). He holds many prestigious certifications, including ZTX-S, CCISO, CISM, CCSP, CCSK, CASP, GICSP, and PCNSE, and has 19 years of experience in ICT/cybersecurity across a range of sectors. According to a Forrester report, information security is not just a risk-management solution but also a competitive advantage that creates business value for enterprises. In today's TQKS episode, Philip Hùng Cao shares the reasons business leaders still hesitate about information security, and the business opportunities that await enterprises that approach information security and Zero Trust properly.

In this episode:
00:00 - Introducing guest Philip Hùng Cao
02:21 - What is Zero Trust?
04:14 - Why do leaders need information security & Zero Trust?
10:32 - An "action strategy" for information security
12:36 - The strategic impact of Zero Trust on enterprises
16:43 - How do information security & Zero Trust affect business performance?
24:04 - Coming up
24:28 - Optimizing investment in information security
26:37 - Assessing an enterprise's cybersecurity needs
32:05 - Is investing in Zero Trust worth the money?
40:46 - Inheriting and developing existing security platforms
44:53 - Coming up
45:16 - Why do business leaders hesitate to invest in information security?
55:24 - Where should SMEs start with information security?
58:25 - Does Zero Trust mean "zero risk"?
01:00:47 - The downsides of Zero Trust
01:06:40 - Zero Trust culture in the enterprise
01:12:40 - Closing

Credits:
Host | Quốc Khánh
Scriptwriting | Quốc Khánh
Editor | Atlan Nguyễn
Social | Ngọc Anh, Cẩm Vân
Producer | Anneliese Mai Nguyen
Producer Assistant | Ngọc Huân
Cameramen | Khanh Trần, Thanh Quang, Nhật Trường, Hải Long
Sound | Khanh Trần
Post Production | Thanh Quang
Design | Nghi Nghi
Makeup Artist | Ngọc Nga
#Vietsuccess #TQKS #CyberSecurity #ZeroTrust

Autism Weekly
Profound Autism: Advocacy, Treatment, and a Closer Look at ASF's Impact| with Judith Ursitti #150

Autism Weekly

Play Episode Listen Later Dec 1, 2023 35:45


This week, we are joined by Judith Ursitti, Vice President of Government Affairs for CASP, the Council of Autism Service Providers. Formerly a CPA, Judith transitioned into autism advocacy when her son, Jack, was diagnosed at age 2. With over a decade of experience, she's been a driving force behind autism-related legislation in numerous states. Judith's accolades include the Margaret Bauman, MD Award and the Autism Advocacy in Action Award. Today, we'll discuss profound autism, advocacy, and treatment considerations, spotlighting the impactful work of the Autism Science Foundation. Download the episode to learn more!

Resources:
Council of Autism Service Providers | CASP (casproviders.org)
Judith Ursitti, CPA | LinkedIn
https://twitter.com/CASProviders
Council of Autism Service Providers - CASP | Boston MA | Facebook
The Council of Autism Service Providers (@casproviders) • Instagram photos and videos
................................................................
Autism Weekly is now found on all of the major listening apps, including Apple Podcasts, Google Podcasts, Stitcher, Spotify, Amazon Music, and more. Subscribe to be notified when we post a new podcast. Autism Weekly is produced by ABS Kids. ABS Kids is proud to provide diagnostic assessments and ABA therapy to children with developmental delays like Autism Spectrum Disorder. You can learn more about ABS Kids and the Autism Weekly podcast by visiting abskids.com.

The REAL Triathlon Podcast
Dr. Aaron Casp | Your Hips Don't Lie - The Surgery That is Becoming More and More Common

The REAL Triathlon Podcast

Play Episode Listen Later Sep 25, 2023 49:19


Today we sit down and speak with Dr. Aaron Casp about FAI (femoroacetabular impingement). FAI seems to be all the rage these days in elite and amateur athletes. But what is FAI, and how do we treat it? Dr. Casp is a sports medicine specialist with advanced training in the treatment of knee, shoulder, and hip injuries. His primary focus is working with active patients of all ages to help get them back to doing what they love. He completed his sports medicine fellowship at the world-renowned Steadman Clinic and Steadman Philippon Research Institute in Vail, Colorado. During this fellowship, he treated elite athletes from the high school level to the NFL, NHL, NBA, and MLS. He traveled to Austria to provide medical coverage to U.S. Ski Team athletes, and was also responsible for the care of athletes from all over the world at the FIS Snowboard and Freeski World Championships in Aspen, CO.

Check out the Real Triathlon Squad online store here for all the best products we use, or the RTS Club Store for RTS-branded clothing! If you want to go above and beyond, consider supporting us over on Patreon by clicking here! Follow us on Instagram at @realtrisquad for updates on new episodes.

Individual Instagram handles:
Garrick Loewen - @loeweng
Nicholas Chase - @race_chase
Jackson Laundry - @jacksonlaundrytri

Autism Outreach
#139: Autism and Insurance Coverage- A Discussion with Lorri Unumb

Autism Outreach

Play Episode Listen Later Aug 29, 2023 22:57


My guest today, Lorri Unumb, needs no introduction. Lorri is a mother of three, an autism mom, a lawyer, an autism advocate, and an absolute dynamo in the field. Be sure to check out her long list of achievements in the guest bio for this episode! We are talking about something so important and impactful for families everywhere who have an autism diagnosis: insurance. In 2003, when Lorri's youngest, Ryan, was diagnosed, ABA was not a covered treatment even though it was evidence-based and crucial to his opportunity to reach his highest potential. Full-time ABA therapy for Ryan's needs would run $70,000+ annually. And they weren't the only family dealing with this. While Lorri's family could make it work, paying full price for therapy was not ideal, and for some families this would simply not be possible. So in 2005 she got to work writing a bill that would require insurance coverage for all evidence-based autism treatments, including ABA. And after a two-year journey, what became known as "Ryan's Law" was passed in 2007. Autism Speaks reached out to employ Lorri, and she then spent the next decade replicating this law across the country, finally passing it in the 50th state in 2019. Lorri also shares about her role in the Council of Autism Service Providers, a collaborative organization of providers from across the country working and learning together. And as an autism mom herself, she has some great advice for parents facing a new diagnosis: "It gets better."

#autism #speechtherapy

What's Inside:
How insurance for autism treatment has changed in the last 20 years.
What is Ryan's Law?
The impact of high-cost and uncovered autism treatment on families.
What is the Council of Autism Service Providers?
Advice for autism parents.

Mentioned In This Episode:
Council of Autism Service Providers
The Autism Law Summit
Learn more about the ABA SPEECH Connection CEU Membership and Joint Attention on September 12th at 8-9pm eastern (https://us02web.zoom.us/j/835049703570) and September 13th at 8-9pm eastern (https://us02web.zoom.us/j/86200908099).

The Behavioral Observations Podcast with Matt Cicoria
The Evidence Base for ABA Interventions: Session 235 with Jane Howard and Gina Green

The Behavioral Observations Podcast with Matt Cicoria

Play Episode Listen Later Aug 11, 2023 94:35


Drs. Jane Howard and Gina Green join me today in a podcast that could've spanned several hours. In the time we had, we did manage to cover quite a bit of territory, including the following: What Gina has been up to since retiring from the Association for Professional Behavior Analysts (spoiler alert: she's not hanging out at the beach reading mystery novels). We talk about Jane's career in behavior analysis, including how she got into the field, some of her many, many accomplishments (including being recently honored as a Fellow of the Association for Behavior Analysis International), and what she is working on these days. The basics of research design, including why some experimental questions lend themselves to certain designs over others. In this segment, we also cover group or between-subjects designs and meta analyses, which are relevant to understand when looking at the ABA outcome literature.  The distinction of criterion vs. norm referenced assessments. We discussed a number of initiatives and resources in the general realm of ABA treatment, including the current state of licensure, The ABA Coding Coalition, The Autism Commission on Quality, & CASP. We talked at length about critical thinking, healthy skepticism, and epistemology in Behavior Analysis.  In addition to these topics, we probably spent the most time talking about the empirical support for ABA interventions for individuals with Autism. In doing so, we discussed the large research projects that Jane and Gina led, what to make of some of the criticisms of this literature that is starting to gain some notoriety, and what research questions we still need answers to.  Jane and Gina mentioned numerous studies and resources, and I've done my best to catalog them below: Session 21 (my first interview with Gina in 2017). Howard, J., Sparkman, C., Cohen, H., Green, G, & Stanislaw, H. (2005). A comparison of intensive behavior analytic and eclectic treatments for young children with autism. 
doi.org/10.1016/j.ridd.2004.09.005 Howard, J. S., Stanislaw, H., Green, G., Sparkman, C. R., & Cohen, H. G. (2014). Comparison of behavior analytic and eclectic early interventions for young children with autism after three years.  Stanislaw, H., Howard, J., & Martin, C. (2020). Helping parents choose treatments for young children with autism: A comparison of applied behavior analysis and eclectic treatments.  Eldevik, S., Hastings, R. P., Hughes, J. C., Jahr, E., Eikeseth, S., & Cross, S. (2010). Using participant data to extend the evidence base for intensive behavioral intervention for children with autism.  Klintwall, L., Eldevik, S., & Eikeseth, S. (2015). Narrowing the gap: Effects of intervention on developmental trajectories in autism.  Padilla, K.L., Weston, R., Morgan, G.B., Lively, P., & O'Guinn, N. (2023). Validity and reliability evidence for assessments based in applied behavior analysis: A systematic review.  Steinbrenner, J. R., Hume, K., Odom, S. L., Morin, K. L., Nowell, S. W., Tomaszewski, B., Szendrey, S., McIntyre, N. S., Yücesoy-Özkan, S., & Savage, M. N. (2020). Evidence-Based Practices for Children, Youth, and Young Adults with Autism. The University of North Carolina at Chapel Hill, Frank Porter Graham Child Development Institute, National Clearinghouse on Autism Evidence and Practice Review Team.  ABA Coding Coalition (2022). Model Coverage Policy for Adaptive Behavior Services. https://abacodes.org/wp-content/uploads/2022/01/Model-Coverage-Policy-for-ABA-01.25.2022.pdf American Educational Research Association, American Psychological Association, & National Council on Measurement in Education (2014) Standards for Educational and Psychological Testing. Washington, DC: American Educational Research Association. https://www.testingstandards.net/uploads/7/6/6/4/76643089/standards_2014edition.pdf Behavior Analyst Certification Board & Association of Professional Behavior Analysts (February 2019). 
Clarifications Regarding Applied Behavior Analysis Treatment of Autism Spectrum Disorder: Practice Guidelines for Healthcare Funders and Managers (2nd ed.). https://cdn.ymaws.com/www.apbahome.net/resource/collection/1FDDBDD2-5CAF-4B2A-AB3F-DAE5E72111BF/Clarifications.ASDPracticeGuidelines.pdf Johnston, Pennypacker, and Green (2019). Strategies and Tactics for Behavioral Research and Practice. This podcast is brought to you by: The Michigan Autism Conference, which is taking place on October 11-13 in Kalamazoo, and online as well. We'll hear more about this event later on in the show, but if you're impatient like me, go to michiganautismconference.org and use the code MAC10 to save $10 at checkout. The Stone Soup Conference, which is taking place on October 20th. Use code PODCAST to save on your registration as well. HRIC Recruiting. Barb Voss has been placing BCBAs in permanent positions throughout the US for just about a decade, and has been in the business more generally for 30 years. When you work with HRIC, you work directly with Barb, thereby accessing highly personalized service. So if you're about to graduate, you're looking for a change of pace, or you just want to know if the grass really is greener on the other side, head over to HRIColorado.com to schedule a confidential chat right away. ACE Approved CEUs from .... Behavioral Observations. That's right, get your CEUs while driving, walking your dog, doing the dishes, or whatever else you might have going on, all while learning from your favorite podcast guests!

The PodCASP
CASP Conference Breakout- Jonathan, Judith, Lacey and Kristin

The PodCASP

Play Episode Listen Later Jun 6, 2023 20:37


PodCASP hosts Jonathan and Judith are joined by Lacey Selle (Pediatric Therapy Clinic, Inc.) and Kristin Hanson (Axis Therapy Centers) to discuss the following:
Important updates to the Practice Guidelines and their impact on payor requirements
The burden of requiring re-evals/updated diagnostic reports
The impact of CASP/community within Lacey and Kristin's organizations
Rate re-negotiation advice for providers

Lacey - Pediatric Therapy Clinic, Inc.: https://www.ptcbillings.com/
Kristin - Axis Therapy Centers: https://axistherapycenters.com
Reach out to Judith for anything advocacy-related at advocacy@casproviders.org!

The PodCASP
CASP Member Feature- Courtney Wright, M.Ed., JD, BCBA, LBA

The PodCASP

Play Episode Listen Later May 30, 2023 33:44


On this week's episode, The PodCASP team is joined by fellow CASP member, Courtney Wright.  Courtney currently serves as CEO and General Counsel at Children's Autism Center. Courtney came to CAC in December 2012 after leaving her position as partner at a law firm in Dallas. After doing a great deal of volunteer work for autism awareness in Dallas and being inspired by her mother, Phyllis, Courtney decided to make a career change to work with children with Autism. She went back to school and became a Board-Certified Behavior Analyst in 2014. Courtney has experience with children of all ages diagnosed with ASD. Prior to working at CAC, Courtney had extensive legal experience in the insurance industry, something which she now uses to help families navigate the insurance process. Courtney discusses her journey to CASP, the value that collaboration brings her organization and her perspective as a member over the years.  Thank you to Courtney for joining us for this episode!  And a special thank you to BlueSprig for their sponsorship of this episode! To learn more visit: https://www.bluesprigautism.com/ The PodCASP is edited by Ike Ndolo and brought to you by The Council of Autism Service Providers.  

The Behavioral Observations Podcast with Matt Cicoria
Value-Based Care in ABA: Session 196 with Amanda Ralston

The Behavioral Observations Podcast with Matt Cicoria

Play Episode Listen Later Aug 15, 2022 50:12


If you care about the future of ABA, it's important to understand not only its strengths, but also the myriad challenges the field faces. And to that end, I can't think of a more difficult challenge the field of Applied Behavior Analysis has right now than figuring out how to adequately measure outcome quality, and how this relates to funding ABA services. My guest for Session 196 is Amanda Ralston, and she's been thinking a lot about these issues for quite some time, and she was kind enough to spend an hour with me to share her thoughts. As you'll learn in this episode, Mandy has been in the ABA field for over 20 years, and has experience founding and operating a large, statewide ABA provider, consulting with large multi-state ABA organizations, and much more. Mandy most recently founded NonBinary Solutions, which she talks about briefly. We discuss the current model of insurance reimbursement, and contrast that with what's referred to as Value-Based Pay or Value-Based Care. These payment models differ considerably from the current Fee-For-Service arrangements that most listeners are likely familiar with. While Behavioral Observations is not a health-care policy podcast, I was encouraged to explore this topic by some friends and confidants, largely because this treatment model may be coming our way at some point. Given that Behavior Analysis is not a mature field as of 2022 - especially when it comes to funding our services - I thought it would be a good idea to explore the topic. If you experience this conversation the same way I did, I think you'll come to the realization that there are more questions than answers when it comes to Value-Based Care (many of which are articulated in this short video), so I may return to this topic from time to time as things develop. Here are some links to resources we discussed: NonBinary Solutions website, Instagram. Mandy's personal website, with links to related topics of interest.
Council of Autism Service Providers (CASP). Behavioral Health Center of Excellence, plus Session 163 with Sara Litvak and Dr. Ellie Kazemi (discussion of standards, etc... in ABA Practice). The International Consortium for Health Outcomes Measurement (ICHOM). A series of Forbes articles on VBC. Session 196 is brought to you by the following: The 2022 Stone Soup Conference! Great speakers, great cause, all for a great price. October 22nd (or later if you're busy that day). Come hear from Kirk Kirby, Drs. Camille Kolu, Nasiah Cirincione-Ulezi, Merrill Winston, Holly Gover, Tom Higbee, and Florence DiGennaro-Reed. HRIC Recruiting. Barb Voss has been placing BCBAs in permanent positions throughout the US for just about a decade, and has been in the business more generally for 30 years. When you work with HRIC, you work directly with Barb, thereby accessing highly personalized service. So if you're about to graduate, you're looking for a change of pace, or you just want to know if the grass really is greener on the other side, head over to HRIColorado.com to schedule a confidential chat right away. Behavior University. Their mission is to provide university quality professional development for the busy Behavior Analyst. Learn about their CEU offerings, including their brand new 8-hour Supervision Course, as well as their RBT offerings over at behavioruniversity.com/observations. The University of Cincinnati Online. UC Online designed a Master of Education in Behavior Analysis program that is 100% online and asynchronous, meaning you log on when it works for you. Want to learn more? Go to online.uc.edu and click the “request info” button.