This podcast features Gabriele Corso and Jeremy Wohlwend, co-founders of Boltz and authors of the Boltz Manifesto, discussing the rapid evolution of structural biology models from AlphaFold to their own open-source suite, Boltz-1 and Boltz-2. The central thesis is that while single-chain protein structure prediction is largely "solved" through evolutionary hints, the next frontier lies in modeling complex interactions (protein-ligand, protein-protein) and generative protein design, which Boltz aims to democratize via open-source foundations and scalable infrastructure.

Full Video Pod on YouTube!

Timestamps

* 00:00 Introduction to Benchmarking and the "Solved" Protein Problem
* 06:48 Evolutionary Hints and Co-evolution in Structure Prediction
* 10:00 The Importance of Protein Function and Disease States
* 15:31 Transitioning from AlphaFold 2 to AlphaFold 3 Capabilities
* 19:48 Generative Modeling vs. Regression in Structural Biology
* 25:00 The "Bitter Lesson" and Specialized AI Architectures
* 29:14 Development Anecdotes: Training Boltz-1 on a Budget
* 32:00 Validation Strategies and the Protein Data Bank (PDB)
* 37:26 The Mission of Boltz: Democratizing Access and Open Source
* 41:43 Building a Self-Sustaining Research Community
* 44:40 Boltz-2 Advancements: Affinity Prediction and Design
* 51:03 BoltzGen: Merging Structure and Sequence Prediction
* 55:18 Large-Scale Wet Lab Validation Results
* 01:02:44 Boltz Lab Product Launch: Agents and Infrastructure
* 01:13:06 Future Directions: Developability and the "Virtual Cell"
* 01:17:35 Interacting with Skeptical Medicinal Chemists

Key Summary

Evolution of Structure Prediction & Evolutionary Hints

* Co-evolutionary Landscapes: The speakers explain that breakthrough progress in single-chain protein prediction relied on decoding evolutionary correlations, where mutations in one position necessitate mutations in another to conserve 3D structure.
* Structure vs. Folding: They differentiate between structure prediction (getting the final answer) and folding (the kinetic process of reaching that state), noting that the field is still quite poor at modeling the latter.
* Physics vs. Statistics: RJ posits that while models use evolutionary statistics to find the right "valley" in the energy landscape, they likely possess a "light understanding" of physics to refine the local minimum.

The Shift to Generative Architectures

* Generative Modeling: A key leap in AlphaFold 3 and Boltz-1 was moving from regression (predicting one static set of coordinates) to a generative diffusion approach that samples from a posterior distribution.
* Handling Uncertainty: This shift allows models to represent multiple conformational states and avoid the "averaging" effect seen in regression models when the ground truth is ambiguous.
* Specialized Architectures: Despite the "bitter lesson" of general-purpose transformers, the speakers argue that equivariant architectures remain vastly superior for biological data due to the inherent 3D geometric constraints of molecules.

Boltz-2 and Generative Protein Design

* Unified Encoding: Boltz-2 (and BoltzGen) treats structure and sequence prediction as a single task by encoding amino acid identities into the atomic composition of the predicted structure.
* Design Specifics: Instead of a sequence, users feed the model blank tokens and a high-level "spec" (e.g., an antibody framework), and the model decodes both the 3D structure and the corresponding amino acids.
* Affinity Prediction: While model confidence is a common metric, Boltz-2 focuses on affinity prediction, quantifying exactly how tightly a designed binder will stick to its target.

Real-World Validation and Productization

* Generalized Validation: To prove the model isn't just "regurgitating" known data, Boltz tested its designs on 9 targets with zero known interactions in the PDB, achieving nanomolar binders for two-thirds of them.
* Boltz Lab Infrastructure: The newly launched Boltz Lab platform provides "agents" for protein and small molecule design, optimized to run 10x faster than open-source versions through proprietary GPU kernels.
* Human-in-the-Loop: The platform is designed to convert skeptical medicinal chemists by allowing them to run parallel screens and use their intuition to filter model outputs.

Transcript

RJ [00:05:35]: But the goal remains to really challenge the models: how well do these models generalize? And we've seen in some of the latest CASP competitions that while we've become really, really good at proteins, especially monomeric proteins, other modalities still remain pretty difficult. So it's really essential in the field that there are these efforts to gather benchmarks that are challenging. It keeps us in line about what the models can do or not.

Gabriel [00:06:26]: Yeah, it's interesting you say that. In some sense, at CASP 14, a problem was solved, and pretty comprehensively, right? But at the same time, it was really only the beginning. So can you say what the specific problem was that you would argue was solved? And then what is remaining, which is probably quite open.

RJ [00:06:48]: I think we'll steer away from the term "solved," because we have many friends in the community who get pretty upset at that word, and I think fairly so. But the problem that a lot of progress was made on was the ability to predict the structure of single-chain proteins. Proteins can be composed of many chains, and single-chain proteins are just a single sequence of amino acids. One of the reasons we've been able to make such progress is also because we take a lot of hints from evolution. The way the models work is that they sort of decode a lot of hints that come from evolutionary landscapes.
So if you have some protein in an animal, and you go find the similar protein across different organisms, you might find different mutations in them. And as it turns out, if you take a lot of the sequences together and you analyze them, you see that some positions in the sequence tend to evolve at the same time as other positions in the sequence, sort of a correlation between different positions. And it turns out that that is typically a hint that these two positions are close in three dimensions. So part of the breakthrough has been our ability to decode that very, very effectively. But what it also implies is that in the absence of that co-evolutionary landscape, the models don't quite perform as well. And so when that information is available, maybe one could say the problem is somewhat solved from the perspective of structure prediction; when it isn't, it's much more challenging. And I think it's also worth differentiating, because we sometimes confound them a little bit, structure prediction and folding. Folding is the more complex process of actually understanding how the protein goes from this disordered state into a structured state. And that, I don't think we've made that much progress on. But the idea of going straight to the answer, we've become pretty good at.

Brandon [00:08:49]: So there's this protein that is just a long chain and it folds up. And so we're good at getting from that long chain, in whatever form it was originally, to the final thing. But we don't necessarily know how it gets to that state, and there might be intermediate states that it's in sometimes that we're not aware of.

RJ [00:09:10]: That's right. And that relates also to our general ability to model the different states: proteins are not static. They move, they take different shapes based on their energy states. And I think we are also not that good at understanding the different states that a protein can be in, and at what frequency, what probability. So I think the two problems are quite related in some ways. Still a lot to solve. But I think it was very surprising at the time that even with these evolutionary hints we were able to make such dramatic progress.

Brandon [00:09:45]: So I want to ask why the intermediate states matter. But first, I kind of want to understand: why do we care what proteins are shaped like?

Gabriel [00:09:54]: Yeah. Proteins are kind of the machines of our body. The way that all the processes in our cells work is typically through proteins, sometimes other molecules, through intermediate interactions. And through those interactions we have all sorts of cell functions. So when we try to understand a lot of biology, how our body works, how diseases work, we often try to boil it down to: okay, what is going right in the case of normal biological function, and what is going wrong in the disease state? And we boil that down to proteins and other molecules and their interactions. So when we try predicting the structure of proteins, it's critical to have an understanding of those interactions. It's a bit like the difference between having a list of parts you would put in a car and seeing the car in its final form; seeing the car really helps you understand what it does. On the other hand, going to your question of why we care about how the protein folds, or how the car is made, to some extent: sometimes something goes wrong, and there are cases of proteins misfolding.
In some diseases and so on, if we don't understand this folding process, we don't really know how to intervene.

RJ [00:11:30]: There's this nice line, I think in the AlphaFold 2 manuscript, where they discuss why we were even hopeful that we could tackle the problem in the first place. There's this notion that, well, for proteins that fold, the folding process is almost instantaneous, which is a strong signal that we might be able to predict this very constrained thing that the protein does so quickly. And of course that's not the case for all proteins, and there are a lot of really interesting mechanisms in the cell, but I remember reading that and thinking, yeah, that's somewhat of an insightful point.

Gabriel [00:12:10]: I think one of the interesting things about the protein folding problem, and part of the reason why people thought it was impossible, is that it used to be studied as a classical example of an NP problem. There are so many different shapes that these amino acids could take, and this grows combinatorially with the size of the sequence. So there used to be a lot of more theoretical computer science thinking about and studying protein folding as an NP problem. And so it was very surprising, also from that perspective, to see machine learning make it so clear: there is some signal in those sequences, through evolution, but also through other things that we as humans are probably not really able to understand, but that these models have learned.

Brandon [00:13:07]: And so Andrew White, who we were talking to a few weeks ago, said that he was following the development of this and that there were actually ASICs developed just to solve this problem. So, again, there were many, many millions of computational hours spent trying to solve this problem before AlphaFold. And just to be clear, one thing that you mentioned was this kind of co-evolution of mutations, which you see again and again in different species. So explain: why does that give us a good hint that they're close to each other?

RJ [00:13:41]: Think of it this way: if I have some amino acid that mutates, it's going to impact everything around it, right? In three dimensions. And so it's almost like the protein, through several probably random mutations and evolution, ends up figuring out that this other amino acid needs to change as well for the structure to be conserved. The whole principle is that the structure is probably largely conserved because there's this function associated with it. And so it's really different positions compensating for each other.

Brandon [00:14:17]: I see. And those hints in aggregate give us a lot. So you can start to look at what is close to each other, and then you can start to look at what kinds of folds are possible given the structure, and then what the end state is. And therefore you can make a lot of inferences about what the actual total shape is.

RJ [00:14:30]: Yeah, that's right.
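The co-evolution signal described above can be sketched numerically. This is a minimal toy of my own, not how AlphaFold or Boltz actually score contacts (real pipelines use corrections such as APC, or full Potts-model methods like DCA): compute mutual information between columns of a tiny fabricated multiple sequence alignment, and the pair of columns that mutate together scores highest, flagging them as a candidate 3D contact.

```python
# Toy sketch of the co-evolution signal: in a multiple sequence alignment
# (MSA), columns that mutate together are candidate 3D contacts.
# Fabricated data; real methods add corrections this sketch omits.
from collections import Counter
from math import log2

def column(msa, i):
    return [seq[i] for seq in msa]

def mutual_information(msa, i, j):
    """Mutual information (bits) between alignment columns i and j."""
    n = len(msa)
    pi = Counter(column(msa, i))
    pj = Counter(column(msa, j))
    pij = Counter(zip(column(msa, i), column(msa, j)))
    mi = 0.0
    for (a, b), c in pij.items():
        p_ab = c / n
        mi += p_ab * log2(p_ab / ((pi[a] / n) * (pj[b] / n)))
    return mi

# Tiny fabricated alignment: columns 0 and 3 co-vary (A<->V tracks T<->S),
# column 1 is fully conserved, column 2 mutates independently.
msa = ["AGCT", "AGAT", "VGCS", "VGAS", "AGGT", "VGTS"]
scores = {(i, j): mutual_information(msa, i, j)
          for i in range(4) for j in range(i + 1, 4)}
best_pair = max(scores, key=scores.get)
print(best_pair)  # (0, 3): the co-varying pair scores highest
```

In this toy, columns 0 and 3 are perfectly correlated, so their mutual information equals the column entropy (1 bit), while the conserved and independent columns score at or near zero.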
It's almost like you have this big three-dimensional valley where you're trying to find these low-energy states, and there's so much to search through that it's almost overwhelming. But these hints maybe put you in an area of the space that's already kind of close to the solution, though maybe not quite there yet. And there's always this question of how much physics these models are learning versus just pure statistics. One thing that at least I believe is that once you're in that approximate area of the solution space, the models have some understanding of how to get you to the lower-energy state. So maybe they have some light understanding of physics, but maybe not quite enough to navigate the whole space.

Brandon [00:15:25]: Right. So we need to give it these hints to get into the right valley, and then it finds the minimum or something.

Gabriel [00:15:31]: One interesting explanation of how AlphaFold works, which I think is quite insightful, although of course it doesn't cover the entirety of what AlphaFold does, is one I'm going to borrow from Sergey Ovchinnikov at MIT. The interesting thing about AlphaFold is this very peculiar architecture, which operates on this pairwise context between amino acids. The idea is that the MSA gives you a first hint about which amino acids are potentially close to each other.

Brandon: MSA being multiple sequence alignment?

Gabriel: Exactly, this evolutionary information. And from this evolutionary information about potential contacts, it's almost as if the model is running some kind of Dijkstra algorithm, where it's decoding: okay, these have to be close. Then if these are close, and this one is connected to this one, then this has to be somewhat close too. You decode this into what is basically a pairwise distance matrix, and then from this rough pairwise distance matrix you decode the actual potential structure.

Brandon [00:16:42]: Interesting. So there are kind of two different things going on, the coarse-grained and then the fine-grained optimization. Very cool. You mentioned AlphaFold3, so maybe this is a good time to move on to that. AlphaFold2 came out and it was, I think, fairly groundbreaking for this field; everyone got very excited. A few years later, AlphaFold3 came out. So maybe for some more history, what were the advancements in AlphaFold3? And after that we'll talk a bit about how it connects to Boltz.

Gabriel [00:16:53]: Yeah. So after AlphaFold2 came out, Jeremy and I got into the field, and with many others. The clear problem that was obvious after that was: okay, now we can do individual chains. Can we do interactions? Interactions between different proteins, proteins with small molecules, proteins with other molecules.

Brandon: So why are interactions important?

Gabriel: Interactions are important because, to some extent, that's how these machines, these proteins, have a function. The function comes from the way they interact with other proteins and other molecules. Actually, in the first place, the individual machines are often, as Jeremy was mentioning, not made of a single chain but of multiple chains. And then these multiple chains interact with other molecules to give them their function.
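The "Dijkstra-like" decoding described a moment ago can be sketched as a shortest-path computation. This is my own toy illustration of the intuition, not AlphaFold's actual mechanism, and it uses Floyd-Warshall (all-pairs shortest paths) in place of Dijkstra: chain adjacency plus a single co-evolution contact become graph edges, and path lengths turn those sparse hints into upper bounds on every pairwise distance.

```python
# Sketch of the shortest-path intuition (fabricated toy, not AlphaFold code):
# treat predicted contacts plus chain adjacency as weighted graph edges;
# all-pairs shortest paths then give an upper bound on every pairwise
# distance, densifying sparse contact hints into a full distance matrix.
INF = float("inf")

def distance_bounds(n, edges):
    """Floyd-Warshall over contact edges -> upper-bound distance matrix."""
    d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for i, j, length in edges:
        d[i][j] = d[j][i] = min(d[i][j], length)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# 5 residues: consecutive residues ~3.8 A apart along the chain, plus one
# co-evolution contact saying residues 0 and 4 are within 6 A.
n = 5
chain = [(i, i + 1, 3.8) for i in range(n - 1)]
contacts = [(0, 4, 6.0)]
d = distance_bounds(n, chain + contacts)
print(round(d[1][4], 1))  # 9.8: bounded via the 0-4 contact (3.8 + 6.0)
```

Without the contact, the bound on residues 1 and 4 would be the chain path (3 * 3.8 = 11.4); the single hint tightens it, which is the "if these are close, then this has to be somewhat close" propagation in miniature.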
And on the other hand, when we try to intervene on these interactions, think about a disease, think about a biosensor, or many other cases, we are trying to design molecules or proteins that interact in a particular way with what we would call a target protein, or target. After AlphaFold2, this became clearly one of the biggest problems in the field to solve. Many groups, including ours and others, started making contributions to this problem of trying to model these interactions. And AlphaFold3 was a significant advancement on it. One of the interesting things they were able to do, while much of the rest of the field had tried to model different interactions separately (how a protein interacts with small molecules, how a protein interacts with other proteins, how RNA or DNA take their structure), was to put everything together and train very large models, with a lot of advances including changes to some of the key architectural choices. They managed to get a single model that set new state-of-the-art performance across all of these different modalities: protein with small molecules, which is critical to developing new drugs; protein with protein; interactions of proteins with RNA and DNA; and so on.

Brandon [00:19:39]: Just to satisfy the AI engineers in the audience, what were some of the key architectural and data changes that made that possible?

Gabriel [00:19:48]: Yeah. One critical one, not necessarily unique to AlphaFold3 (a few other teams in the field, including ours, proposed this as well), was moving from modeling structure prediction as a regression problem, where there is a single answer and you're trying to shoot for that answer, to a generative modeling problem, where you have a posterior distribution of possible structures and you're trying to sample from that distribution. This achieves two things. One is that it starts to allow us to model more dynamic systems. As we said, some of these proteins can actually take multiple structures, and you can now capture that by modeling the entire distribution. And second, from a more core modeling perspective, when you move from a regression problem to a generative modeling problem, you change the way the model handles uncertainty. If the model is undecided between different answers, a regression model will try to output an average of those different answers. A generative model will instead sample all the different answers, and then you can use separate models to analyze those samples and pick out the best. So that was one of the critical improvements. The other improvement is that they significantly simplified, to some extent, the architecture, especially of the final model that takes those pairwise representations and turns them into an actual structure. That now looks a lot more like a traditional transformer than the very specialized equivariant architecture that it was in AlphaFold2.

Brandon [00:21:41]: So this is the bitter lesson, a little bit.

Gabriel [00:21:45]: There is some aspect of the bitter lesson, but the interesting thing is that it's very far from being a simple transformer. This field is one of, I would argue, very few fields in applied machine learning where we still have architectures that are very specialized.
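Stepping back to the regression-versus-generative point above, here is a toy numeric sketch of my own (purely illustrative, not Boltz or AlphaFold code): when the "ground truth" is ambiguous, say a coordinate that is equally often at -1 or +1, a squared-error regressor converges to the mean, a value that never actually occurs, while a generative sampler keeps both modes.

```python
# Toy illustration of why regression "averages" ambiguous ground truth
# while a generative model can represent both conformational states.
# Fabricated 1-D data standing in for a coordinate with two conformations.
import random

random.seed(0)
# Ambiguous ground truth: two states, equally likely.
observations = [random.choice([-1.0, 1.0]) for _ in range(10_000)]

# Regression answer: the L2-optimal point estimate is the mean.
regression_answer = sum(observations) / len(observations)

# Generative answer: sample from the (empirical) distribution instead.
samples = [random.choice(observations) for _ in range(1_000)]

print(round(regression_answer, 2))  # ~0.0, a state that never occurs
print(sorted(set(samples)))         # both real states are represented
```

The "separate models to analyze those samples and pick out the best" step in the conversation corresponds to scoring each sample here, e.g. with a confidence model, rather than trusting a single averaged output.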
And there are many people who have tried to replace these architectures with simple transformers. There is a lot of debate in the field, but I think most of the consensus is that the performance we get from the specialized architectures is vastly superior to what we get from a simple transformer. Another interesting thing, staying on the modeling and machine learning side, which I think is somewhat counterintuitive coming from other fields and applications, is that scaling hasn't really worked the same way in this field. Now, models like AlphaFold2 and AlphaFold3 are still very large models.

RJ [00:29:14]: We were in a place, I think, where we had some experience working with the data and with this type of model, and I think that already put us in a good position to produce it quickly. I would even say we could have done it quicker; the problem was that for a while we didn't really have the compute, so we couldn't really train the model. And actually, we only trained the big model once. That's how much compute we had. We could only train it once. So while the model was training, we were finding bugs left and right, a lot of them that I wrote. I remember doing surgery in the middle: stopping the run, making the fix, relaunching. We never actually went back to the start; we just kept training it with the bug fixes along the way, which would be impossible to reproduce now. That model has gone through such a curriculum that it learned some weird stuff.
But yeah, somehow by miracle, it worked out.

Gabriel [00:30:13]: The other funny thing is that we trained most of that model on a cluster from the Department of Energy. But that's a shared cluster that many groups use, so we were basically training the model for two days, and then it would go back into the queue and sit there for a week. It was pretty painful. Towards the end, I was talking with Evan, the CEO of Genesis, telling him a bit about the project and about this frustration with the compute. Luckily, he offered to help, and so we got help from Genesis to finish up the model. Otherwise it probably would have taken a couple of extra weeks.

Brandon [00:31:02]: And then there's some progression from there.

Gabriel [00:31:06]: Yeah. I would say that Boltz-1, but also the other set of models that came out around the same time, were a big leap from the previous open-source models, really approaching the level of AlphaFold3. But I would still say that, even to this day, there are some specific instances where AlphaFold3 works better. One common example is antibody-antigen prediction, where AlphaFold3 still seems to have an edge in many situations. Obviously these are somewhat different models: you run them, you obtain different results. So it's not always the case that one model is better than the other, but in aggregate, especially at the time, AlphaFold3 still had an edge.

Brandon [00:32:00]: So AlphaFold3 still has a bit of an edge. We should talk about this more when we get to BoltzGen, but how do you know one model is better than the other? I make a prediction, you make a prediction; how do you know?

Gabriel [00:32:11]: The great thing about structure prediction (once we get into the design space of designing new small molecules and new proteins, this becomes a lot more complex), a bit like CASP was doing, is that you can evaluate models by training on the structures that were released across the field up until a certain time. One of the things we haven't talked about that was really critical in all this development is the PDB, the Protein Data Bank. It's this common resource, basically a common database where every structural biologist publishes their structures. So we can train on all the structures that were put in the PDB until a certain date, and then we look for recent structures: which structures look pretty different from anything published before? Because we really want to understand generalization. And then on these new structures, we evaluate all the different models.

Brandon [00:33:13]: So you just know when AlphaFold3 was trained, and you intentionally train to the same date or something like that?

Gabriel [00:33:24]: Exactly. And so this is how you can somewhat easily compare these models. Obviously, that assumes that the training...

Brandon: You've always been very passionate about validation. I remember DiffDock, and then there was DiffDock-L and DockGen. You've thought very carefully about this in the past.
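The time-split evaluation described above can be sketched as a simple filter over release dates. This is a hedged toy with fabricated entries and thresholds (a real evaluation would query the PDB and also filter by structural, not just sequence, similarity): train on everything released before a cutoff, and test only on later structures that are dissimilar to anything seen in training.

```python
# Minimal sketch of a PDB time-split benchmark (field convention, not
# Boltz's exact pipeline; all entries and thresholds are fabricated).
from datetime import date

# (pdb_id, release_date, max sequence identity to any pre-cutoff entry)
entries = [
    ("1ABC", date(2019, 5, 1), 1.00),
    ("7XYZ", date(2022, 3, 9), 0.95),   # recent, but a near-duplicate
    ("8QRS", date(2023, 7, 2), 0.22),   # recent and novel
    ("8TUV", date(2023, 9, 18), 0.30),  # recent and novel
]

CUTOFF = date(2021, 9, 30)   # hypothetical training-date cutoff
MAX_IDENTITY = 0.40          # drop near-duplicates of training data

train = [e for e in entries if e[1] <= CUTOFF]
test = [e for e in entries if e[1] > CUTOFF and e[2] < MAX_IDENTITY]

print([e[0] for e in train])  # ['1ABC']
print([e[0] for e in test])   # ['8QRS', '8TUV']
```

The similarity filter is the key step for the generalization question raised in the conversation: a post-cutoff structure that is 95% identical to a training structure (like the second entry here) tells you little about whether the model is "regurgitating."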
Actually, I think DockGen is a really funny story; I don't know if you want to talk about that.

Gabriel: Yeah. One of the amazing things about putting things open source is that we get a ton of feedback from the field. Sometimes we get great feedback from people who really like it, but honestly, most of the time the most useful feedback is people sharing where it doesn't work. At the end of the day, and this holds across other fields of machine learning, it's critical to set clear benchmarks in order to make progress. And as you start making progress on certain benchmarks, you need to improve the benchmarks and make them harder and harder. That's the progression of how the field operates. So the example of DockGen: we published this initial model called DiffDock in my first year of PhD, one of the early models trying to predict interactions between proteins and small molecules, which we put out about a year after AlphaFold2 was published. On the one hand, on the benchmarks we were using at the time, DiffDock was doing really well, outperforming some of the traditional physics-based methods. But on the other hand, when we started giving these tools to many biologists (one example was the group of Nick Polizzi at Harvard, who we collaborated with), we started noticing a clear pattern: for proteins that were very different from the ones it was trained on, the model was struggling. It seemed clear that this was probably where we should put our focus. So we first developed, with Nick and his group, a new benchmark, and then went after it: okay, what can we change about the current architecture to improve this pattern of generalization? And this is the same thing we're still doing today: where does the model not work? And then, once we have that benchmark, let's try everything, any ideas that we have, on the problem.

RJ [00:36:15]: And there's a lot of healthy skepticism in the field, which I think is great. It's very clear that there are a ton of things the models don't work well on. But one thing that's probably undeniable is the pace of progress, how much better we're getting every year. So if you assume any constant rate of progress moving forward, I think things are going to look pretty cool at some point in the future.

Gabriel [00:36:42]: ChatGPT was only three years ago.

RJ [00:36:45]: Yeah, I mean, it's wild, right? It's one of those things: being in the field, you don't see it coming. And hopefully we'll continue to have as much progress as we've had the past few years.

Brandon [00:36:55]: So this is maybe an aside, but I'm really curious. You get this great feedback from the community by being open source, right? My question is partly, okay, if you open source, everyone can copy what you did. But it's also maybe about balancing priorities: the community is saying "I want this, there are all these problems with the model," but maybe your customers don't care, right? So how do you think about that?
Yeah.Gabriel [00:37:26]: So I would say a couple of things. One is, you know, part of our goal with Bolts and, you know, this is also kind of established as kind of the mission of the public benefit company that we started is to democratize the access to these tools. But one of the reasons why we realized that Bolts needed to be a company, it couldn't just be an academic project is that putting a model on GitHub is definitely not enough to get, you know, chemists and biologists, you know, across, you know, both academia, biotech and pharma to use your model to, in their therapeutic programs. And so a lot of what we think about, you know, at Bolts beyond kind of the, just the models is thinking about all the layers. The layers that come on top of the models to get, you know, from, you know, those models to something that can really enable scientists in the industry. And so that goes, you know, into building kind of the right kind of workflows that take in kind of, for example, the data and try to answer kind of directly that those problems that, you know, the chemists and the biologists are asking, and then also kind of building the infrastructure. And so this to say that, you know, even with models fully open. You know, we see a ton of potential for, you know, products in the space and the critical part about a product is that even, you know, for example, with an open source model, you know, running the model is not free, you know, as we were saying, these are pretty expensive model and especially, and maybe we'll get into this, you know, these days we're seeing kind of pretty dramatic inference time scaling of these models where, you know, the more you run them, the better the results are. But there, you know, you see. You start getting into a point that compute and compute costs becomes a critical factor. 
And so putting a lot of work into building the right infrastructure and the right optimizations allows us to provide a much better service than the raw open-source models. That said, even though with a product we can provide a much better service, I do still think, and we will continue, to release a lot of our models as open source, because the critical role of open-source models is helping the community make progress on the research, from which we all benefit. So on the one hand we'll continue to open-source some of our base models so the field can build on top of them, and as we discussed earlier, we learn a ton from the way the field uses and builds on our models. On the other hand, we try to build a product that gives the best possible experience to scientists, so that a chemist or a biologist doesn't need to spin up a GPU and set up our open-source model in a particular way. It's a bit like how I, even though I'm a machine learning scientist, don't necessarily take an open-source LLM and spin it up myself; I just open the ChatGPT app or Claude Code and use it as a great product. We want to give the same experience on this front.

Brandon [00:40:40]: I heard a good analogy yesterday: a surgeon doesn't want the hospital to design a scalpel, right?

Brandon [00:40:48]: Just buy the scalpel.

RJ [00:40:50]: You wouldn't believe the number of people, even in my short time between AlphaFold3 coming out and the end of my PhD, who would reach out just for us to run AlphaFold3 for them, or things like that.
Just because, with Boltz in our case, it's not that easy to do if you're not a computational person. And part of the goal here is that while we continue to build the interface for computational folks, the models are also accessible to a larger, broader audience. And that comes from good interfaces and things like that.

Gabriel [00:41:27]: I think one really interesting thing about Boltz is that with the release, you didn't just release a model, you created a community. And that community grew very quickly. Did that surprise you? What has the evolution of that community been, and how has it fed back into Boltz?

RJ [00:41:43]: If you look at its growth, it's very much that when we release a new model, there's a big jump. But it's been great. We have a Slack community with thousands of people on it, and it's actually self-sustaining now, which is the really nice part, because it's almost overwhelming to answer everyone's questions and help with the few people that we were. But it ended up that people would answer each other's questions and help one another. So the Slack has been self-sustaining, and that's been really cool to see.

RJ [00:42:21]: That's the Slack part, but we've also had a nice community on GitHub. I think we aspire to be even more active there than we've been in the past six months, which has been a bit challenging for us. But.
Yeah, the community has been really great, and a lot of papers have come out with new evolutions on top of Boltz. It surprised us to some degree, because there are a lot of models out there, and people converging on ours was really cool. I think it also speaks to the importance, when you put code out, of putting a lot of emphasis on making it as easy to use as possible, which is something we thought a lot about when we released the code base. It's far from perfect, but, you know.

Brandon [00:43:07]: Do you think that was one of the factors that caused your community to grow, the focus on being easy to use and accessible?

RJ [00:43:14]: I think so, yeah. We've heard it from a few people over the years now. And some people still think it should be a lot nicer, and they're right. But I think it was, at the time, maybe a little bit easier than other things.

Gabriel [00:43:29]: The other part that I think led to the community, and to some extent to the trust in what we put out, is the fact that it hasn't really been just one model. Maybe we'll talk about it: after Boltz-1 there were another couple of models released or open-sourced soon after, and we continued that open-source journey with Boltz-2, where we are not only improving structure prediction but also starting to do affinity prediction, understanding the strength of the interactions between molecules, which is a critical property that you often want to optimize in discovery programs.
And then more recently also a protein design model. So we've been building this suite of models that come together and interact with one another, where there's an expectation, one we take very much to heart, of always having, across the entire suite of tasks, the best or among the best models out there, so that our open-source tools can be the go-to models for everybody in the industry.

I really want to talk about Boltz-2, but before that, one last question in this direction: was there anything about the community that surprised you? Was someone doing something where you thought, why would you do that, that's crazy? Or, that's actually genius, I never would have thought of that?

RJ [00:45:01]: We've had many contributions. One of the interesting ones: one individual wrote a complex GPU kernel for part of the architecture. The funny thing is that piece of the architecture had been there since AlphaFold2, and I don't know why it took Boltz for this person to decide to do it, but that was a really great contribution. We've had a bunch of others, people figuring out ways to hack the model to do something, like cyclic peptides. I don't know if any other interesting ones come to mind.

Gabriel [00:45:41]: One cool one, which was initially proposed as a message in the Slack channel by Tim O'Donnell: there are some cases, for example the antibody-antigen interactions we discussed, where the models don't necessarily get the right answer.
What he noticed is that the models were somewhat stuck in how they predicted the antibodies. So he ran an experiment: in this model you can condition, basically give hints. He gave hints to the model in a sweep: you should bind to the first residue, or you should bind to the 11th residue, or the 21st residue, basically every 10 residues, scanning the entire antigen.

Brandon [00:46:33]: Residues are the...

Gabriel [00:46:34]: The amino acids, yes. The first amino acid, the 11th amino acid, and so on. So it's doing a scan, conditioning the model to predict all of them, then looking at the confidence of the model in each case and taking the top one. It's a very crude way of doing inference-time search, but surprisingly, for antibody-antigen prediction, it actually helped quite a bit. There are interesting ideas like this where, as the developer of the model, you say, wow, why would the model be so dumb? But it's very interesting, and it leads you to start thinking: okay, can I do this not with brute force, but in a smarter way?

RJ [00:47:22]: And so we've also done a lot of work in that direction. It speaks to the power of scoring; we're seeing that a lot, and I'm sure we'll talk about it more when we get to BoltzGen. Our ability to take a structure and determine that the structure is good, somewhat accurate, whether that's a single chain or an interaction, is a really powerful way of improving the models.
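The residue-scanning trick described above can be sketched roughly as follows. This is a minimal illustration, not Boltz's actual API: `predict_complex`, its keyword arguments, and the returned `confidence` field are hypothetical stand-ins for a model that supports contact-hint conditioning.

```python
# Sketch of confidence-ranked epitope scanning: condition the model on
# a binding hint at every `stride`-th antigen residue, then keep the
# prediction the model itself is most confident in.
# `predict_complex` is a hypothetical stand-in, not the real Boltz API.

def epitope_scan(predict_complex, antibody_seq, antigen_seq, stride=10):
    best = None
    for pos in range(0, len(antigen_seq), stride):
        # Hint: "the antibody should contact antigen residue `pos`."
        result = predict_complex(
            antibody=antibody_seq,
            antigen=antigen_seq,
            contact_hint=pos,
        )
        if best is None or result["confidence"] > best["confidence"]:
            best = result
    return best  # highest-confidence pose across all hinted runs
```

This is exactly the "crude inference-time search" the speakers describe: the model's own confidence head acts as the ranking function over conditioned samples.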
Like, if you can sample a ton, and you assume that if you sample enough you're likely to have the good structure in there, then it really just becomes a ranking problem. And part of the inference-time scaling Gabri was talking about is very much that: the more we sample, the more the ranking model ends up finding something it really likes. So I think our ability to get better at ranking is also what's going to enable the next big breakthroughs.

Brandon [00:48:17]: Interesting. But I guess, my understanding is, there's a diffusion model and you generate some stuff, and then, I guess it's what you just said, you rank it using a score, and then you finally... Can you talk about those different parts?

Gabriel [00:48:34]: Yeah. First of all, one of the critical beliefs we had when we started working on Boltz-1 was that structure prediction models are somewhat our field's version of foundation models, learning how proteins and other molecules interact, and we can leverage that learning to do all sorts of other things. With Boltz-2, we leveraged that learning to do affinity prediction: understanding, if I give you this protein and this molecule, how tight is that interaction? For BoltzGen, what we did was take that foundation model and fine-tune it to predict entirely new proteins. The way that works is that, for the protein you're designing, instead of feeding in an actual sequence, you feed in a set of blank tokens, and you train the model to predict both the structure of that protein
and also what the different amino acids of that protein are. So the way BoltzGen operates is that you feed in a target you may want to bind, a protein, or DNA, or RNA, and then you feed in a high-level design specification of what you want your new protein to be. For example, it could be an antibody with a particular framework, it could be a peptide, it could be many other things. And that's with natural language, or? That's basically prompting: we have a spec that you specify, you feed this spec to the model, and the model translates it into a set of conditioning tokens plus a set of blank tokens. Then, as part of the diffusion model, it decodes a new structure and a new sequence for your protein. And then we take that and, as Jeremy was saying, try to score it: how good a binder is it to the original target?

Brandon [00:50:51]: So you're basically using Boltz to predict the folding and the affinity to that molecule, and that gives you a score?

Gabriel [00:51:03]: Exactly. You use this model to predict the folding, and then you do two things. One is that you predict the structure of the designed sequence with something like Boltz-2, and then compare that structure with what the design model predicted. In the field this is called consistency: you want to make sure that the structure you're predicting is actually what you were trying to design, which gives you much better confidence that it's a good design. So that's the first filter.
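The consistency filter described here, refold the designed sequence with a structure predictor and check that it matches what the design model drew, can be sketched like this. The `fold` and `rmsd` callables and the thresholds are hypothetical illustrations, not Boltz's actual interfaces; a confidence cutoff is included as an assumed second check.

```python
# Rough sketch of design filtering by consistency:
# 1) take candidate (sequence, designed_structure) pairs,
# 2) refold each sequence with a structure predictor,
# 3) keep designs whose refolded structure matches the designed one
#    (low RMSD) and whose refold confidence is high.
# `fold`, `rmsd`, and the thresholds are hypothetical stand-ins.

def filter_designs(designs, fold, rmsd, max_rmsd=2.0, min_confidence=0.8):
    """designs: list of (sequence, designed_structure) pairs.
    fold(sequence) -> (refolded_structure, confidence)."""
    kept = []
    for seq, designed in designs:
        refolded, confidence = fold(seq)
        if rmsd(refolded, designed) <= max_rmsd and confidence >= min_confidence:
            kept.append((seq, designed, confidence))
    # Rank surviving designs by refold confidence, best first.
    return sorted(kept, key=lambda t: t[2], reverse=True)
```

The design choice here mirrors the conversation: the structure predictor doubles as an oracle, so a design only survives if an independent refold reproduces it.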
And the second filter we used as part of the BoltzGen pipeline that was released is the confidence the model has in the structure. Now, unfortunately, going to your question about affinity: confidence is not a very good predictor of affinity. So one of the things where we've made a ton of progress since the release, and we have some new results we're going to announce soon, is the ability to get much better hit rates when, instead of relying on the model's confidence, we directly try to predict the affinity of the interaction.

Brandon [00:52:03]: Okay. Just backing up a minute. So your diffusion model predicts not only the protein sequence but also the folding of it?

Gabriel [00:52:32]: Exactly. One of the big things we did differently from other models in the space, and there were some papers that had done this before, but we really scaled it up, was merging structure prediction and sequence prediction into almost the same task. The way BoltzGen works, the only thing you're doing is predicting structure; the only supervision we give is supervision on the structure. But because the structure is atomic, and the different amino acids have different atomic compositions, from the way the model places the atoms we recover not only the structure but also the identity of the amino acid the model believed was there. So instead of having two supervision signals, one discrete and one continuous, which don't interact well together,
we built an encoding of sequences in structures that lets us use exactly the same supervision signal we were using for Boltz-2, largely similar to what AlphaFold3 proposed, which is very scalable. And we can use that to design new proteins. Oh, interesting.

RJ [00:53:58]: Maybe a quick shout-out to Hannes Stark on our team, who did all this work.

Gabriel [00:54:04]: Yeah, that was a really cool idea.

Looking at the paper, for this encoding you just add a bunch of atoms, which can be anything, and they get rearranged and basically plopped on top of each other, and that encodes what the amino acid is. There's a unique way of doing this. It was such a cool, fun idea.

RJ [00:54:29]: I think that idea had existed before. Yeah, there were a couple of papers.

Gabriel [00:54:33]: Yeah, papers had proposed this, and Hannes really took it to large scale.

Brandon [00:54:39]: A lot of the BoltzGen paper is dedicated to the validation of the model. In my opinion, everyone we talk to feels that wet-lab, real-world validation is the whole problem, or not the whole problem but a big, giant part of it. So can you talk about the highlights from that? Because to me the results are impressive, both from the perspective of the model and of the effort that went into the validation by a large team.

Gabriel [00:55:18]: First of all, I should start by saying that both when we were at MIT, in Tommi Jaakkola's and Regina Barzilay's labs, and now at Boltz, we are not a bio lab, and we are not a therapeutics company.
And so to some extent we were forced to look outside our group and our team for the experimental validation. One of the things Hannes pioneered with the team was the idea: can we go not just to one specific group with one specific system, where we might overfit a bit to that system, but test this model across a very wide variety of settings? Protein design is such a wide task, with all sorts of applications from therapeutics to biosensors and many others, so can we get validation that spans many different tasks? He basically put together something like 25 different academic and industry labs that committed to testing some of the designs from the model, with some of this testing still ongoing, and to giving the results back to us in exchange for hopefully getting some great new sequences for their task. He was able to coordinate this very wide set of scientists, and already in the paper we shared results from, I think, eight to ten different labs: designing peptides targeting ordered proteins, peptides targeting disordered proteins, proteins that bind to small molecules, and nanobodies, across a wide variety of targets. That gave the paper a lot of validation of the model, and validation that was broad.

Brandon [00:57:39]: And would those be therapeutics for those animals, or are they relevant to humans as well?
Gabriel [00:57:45]: They're relevant to humans as well. Obviously you need to do some work to, quote-unquote, humanize them, making sure they have the right characteristics so they're not toxic to humans, and so on.

RJ [00:57:57]: There are some approved medicines on the market that are nanobodies. There's a general pattern in trying to design things that are smaller: they're easier to manufacture, though that comes with other challenges, maybe a little less selectivity than something that has more hands. But yes, there's a big desire to design mini proteins, nanobodies, small peptides, modalities that just make great drugs.

Brandon [00:58:27]: Okay. I think we left off talking about validation in the lab, and I was very excited to see all the diverse validations you've done. Can you go into more detail about some specific ones?

RJ [00:58:43]: The nanobody one, I think we did, what was it, 15 targets? 14. 14 targets. The way this typically works is that we make a lot of designs, on the order of tens of thousands. Then we rank them and pick the top, in this case 15 for each target, and then we measure the success rates: both how many targets we were able to get a binder for, and also, more generally, out of all the binders we designed, how many actually proved to be good binders. Some of the other ones: we had a cool one where there was a small molecule and we designed a protein that binds to it. That has a lot of interesting applications, for example, as Gabri mentioned, biosensing, which is pretty cool.
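The two success metrics described above, per-target success and overall design hit rate, can be computed as in the sketch below. The data layout is a hypothetical illustration of the workflow (top-N designs tested per target), not Boltz's actual reporting format.

```python
# Sketch of the campaign metrics described in the conversation:
# (a) target_success_rate: fraction of targets with at least one binder,
# (b) design_hit_rate: fraction of all tested designs that bound.
# The input layout is a hypothetical illustration.

def campaign_metrics(results):
    """results: dict mapping target name -> list of bools,
    one bool per tested design (True = confirmed binder)."""
    targets_hit = sum(1 for hits in results.values() if any(hits))
    tested = sum(len(hits) for hits in results.values())
    binders = sum(sum(hits) for hits in results.values())
    return {
        "target_success_rate": targets_hit / len(results),
        "design_hit_rate": binders / tested,
    }
```

Reporting both numbers matters, as RJ notes: a campaign can hit most targets (metric a) while still wasting most of its lab budget on non-binders (metric b).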
We had a disordered-protein one too, I think you mentioned. Those were some of the highlights, yeah.

Gabriel [00:59:44]: I would say we structured those validations in two ways. On one end, validations across a whole set of problems that the biologists we were working with brought to us. For example, in some of the experiments we designed peptides targeting RACC, a target involved in metabolism, and we had a number of other applications designing peptides or other modalities against other therapeutically relevant targets, plus some proteins designed to bind small molecules. The other testing we did was to get a broader sense of how the model performs, especially on generalization. One thing we found across the field is that a lot of validation, outside of validation on specific problems, was done on targets that have many known interactions in the training data. So it's always hard to tell how much these models are just regurgitating or imitating what they've seen in training versus really being able to design new proteins. So one of our experiments was to take nine targets from the PDB, filtered so that there is no known interaction in the PDB: the model has never seen this particular protein, or a similar protein, bound to another protein. There is no way the model can, from its training set, just tweak and imitate a particular known interaction. And so we took those nine proteins.
We worked with a CRO, Adaptive, and tested 15 mini proteins and 15 nanobodies against each one of them. And the very cool thing we saw was that on two-thirds of those targets, from those 15 designs, we got nanomolar binders. Nanomolar is, roughly speaking, a measure of how strong the interaction is; roughly, a nanomolar binder has approximately the binding strength you need for a therapeutic.

So maybe switching directions a bit: Boltz Lab was just announced this week, or was it last week? This is, I guess, your first product, if you want to call it that. Can you talk about what Boltz Lab is and what you hope people take away from it?

RJ [01:02:44]: As we mentioned at the very beginning, the goal with the product has been to address what the models don't do on their own. And there are largely two categories there. Actually, I'll split it into three. The first: it's one thing to predict a single interaction, a single structure; it's another to very effectively search a design space to produce something of value. What we found while building this product is that there are a lot of steps involved, and a real need to accompany the user through them. One of those steps, for example, is the creation of the target itself: how do we make sure the model has a good enough understanding of the target, so we can design something against it? There are all sorts of tricks you can use to improve a particular structure prediction. So that's the first stage. Then there's the stage of designing and searching the space efficiently.
For something like BoltzGen, you design many things and then rank them. For small molecules the process is a little more complicated: we also need to make sure the molecules are synthesizable. The way we do that is with a generative model that learns to use appropriate building blocks, so that it designs within a space we know is synthesizable. So there's really a whole pipeline of different models involved in designing a molecule. That's the first thing; we call them agents. We have a protein design agent and a small molecule design agent, and they're really at the core of what powers the Boltz Lab platform.

Brandon [01:04:22]: So these agents, are they a language-model wrapper, or are they just your models and you're calling them agents?

RJ [01:04:33]: They're more of a recipe, if you wish; they perform a function on your behalf. I think we use the term because of the complex pipelining and automation that goes into all this plumbing. So that's the first part of the product. The second part is the infrastructure. We need to be able to do this at very large scale for any one group doing a design campaign. Say you're designing a hundred thousand possible candidates to find the good one: that is a very large amount of compute. For small molecules it's on the order of a few seconds per design; for proteins it can be a bit longer. Ideally you want to do that in parallel, otherwise it's going to take you weeks.
And so we've put a lot of effort into having a GPU fleet that allows any one user to do this kind of large parallel search.

Brandon [01:05:23]: So you're amortizing the cost over your users.

RJ [01:05:27]: Exactly. And to some degree, whether you use 10,000 GPUs for a minute or one GPU for God knows how long, it's the same cost. So you might as well parallelize if you can. A lot of work has gone into that, making it very robust so that a lot of people can be on the platform doing this at the same time. The third part is the interface, and the interface comes in two shapes. One is an API, really suited for companies that want to integrate these pipelines, these agents.

RJ [01:06:01]: We're already partnering with a few distributors that are going to integrate our API. The second shape is the user interface, which we've put a lot of thought into as well. This is what I meant earlier about broadening the audience. We've built a lot of interesting features into it, for example for collaboration: when you have multiple medicinal chemists going through the results and trying to pick which molecules to go test in the lab, it's powerful for each of them to provide their own ranking and then do consensus building. So there are a lot of features around launching these large jobs, but also around collaborating on analyzing the results, that we try to solve with that part of the platform.
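The "same cost, much less wall-clock" point can be made concrete with a back-of-the-envelope sketch. The per-design time, GPU count, and price below are illustrative assumptions, not actual Boltz Lab figures.

```python
# Cost is roughly GPU-hours, which is invariant to how many GPUs the
# work is spread across; only wall-clock time changes. All numbers
# below are illustrative assumptions, not Boltz Lab's real figures.

def screening_plan(n_designs, secs_per_design, n_gpus, usd_per_gpu_hour):
    gpu_hours = n_designs * secs_per_design / 3600
    return {
        "gpu_hours": gpu_hours,
        "wall_clock_hours": gpu_hours / n_gpus,
        "cost_usd": gpu_hours * usd_per_gpu_hour,
    }

# 100k candidates at ~3 s each, one GPU vs. a 1,000-GPU fleet:
serial = screening_plan(100_000, 3, n_gpus=1, usd_per_gpu_hour=2.0)
fleet = screening_plan(100_000, 3, n_gpus=1000, usd_per_gpu_hour=2.0)
# Same cost either way; wall clock drops from ~3.5 days to ~5 minutes.
```

This is the amortization argument in miniature: the fleet doesn't make the campaign cheaper, it makes it fast enough to fit inside a design-test iteration loop.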
So Boltz Lab is a combination of these three objectives in one cohesive platform. Who is this accessible to? Everyone. You do need to request access today; we're still ramping up usage, but anyone can request access. If you're an academic in particular, we provide a fair amount of free credit so you can play with the platform. If you're a startup or a biotech, you can also reach out, and we'll typically hop on a call to understand what you're trying to do, and also provide a lot of free credit to get started. And with larger companies, we can deploy the platform in a more secure environment; those are more customized deals we make with partners. That's the ethos of Boltz: the idea of serving everyone, not just going after the really large enterprises. That starts with the open source, but it's also a key design principle of the product itself.

Gabriel [01:07:48]: One thing I was thinking about with regard to infrastructure: in the LLM space, the cost of a token has gone down by, I think, a factor of a thousand or so over the last three years, right? Is it possible to exploit economies of scale in infrastructure, so that it's cheaper to run these things on your platform than for any one person to roll their own system?

RJ [01:08:08]: A hundred percent. We're already there: running Boltz on our platform, especially at large scale, is considerably cheaper than it would take anyone to stand up the open-source model and run it themselves. And on top of the infrastructure, one of the things we've been working on is accelerating the models.
Our small-molecule screening pipeline is 10x faster on Boltz Lab than in the open source, and that's also part of building a product, something that scales really well. We really wanted to get to a point where we could keep prices very low, in a way that makes it a no-brainer to use Boltz through our platform.

Gabriel [01:08:52]: How do you think about validation of your agentic systems? Because, as you were saying earlier, AlphaFold-style models are really good at, let's say, monomeric proteins where you have co-evolution data. But now the whole point is to design something that doesn't have co-evolution data, something really novel. So you're leaving the domain that you know you're good at. How do you validate that?

RJ [01:09:22]: Yeah, there are obviously a ton of computational metrics that we rely on, but those only take you so far. You really have to go to the lab and test: okay, with method A versus method B, how much better are we? How much better is my hit rate? How much stronger are my binders? It's not just about hit rate; it's also about how good the binders are. And there's really no way around that. We've really ramped up the amount of experimental validation we do so that we can track progress as scientifically soundly as possible.

Gabriel [01:10:00]: Yeah, one thing that is unique about us, and maybe companies like us, is that because we're not working on just a couple of therapeutic pipelines, where our validation would be focused on those,
Instead, when we do an experimental validation, we try to test across tens of targets. On the one hand, that gives us a much more statistically significant result; on the other, it lets us make progress on the methodological side without being steered by overfitting to any one particular system. And of course we choose…
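The multi-target validation idea described above can be sketched in a few lines: pool wet-lab hits across many targets before comparing two methods, rather than judging on any single system. This is a hypothetical illustration only; the target names, hit counts, and pooling scheme are invented, not numbers from the episode.

```python
# Hypothetical sketch: compare two binder-design methods by pooling
# wet-lab results across many targets (all numbers invented).

def hit_rate(hits, tested):
    """Fraction of designed binders that actually bind in the lab."""
    return hits / tested

# (target, method_A_hits, method_B_hits, designs_tested_per_method)
screens = [
    ("target_01", 3, 7, 50),
    ("target_02", 1, 4, 50),
    ("target_03", 5, 6, 50),
]

def pooled_hit_rate(hit_index):
    # Pooling across targets keeps one easy system from dominating
    # the comparison, which is the overfitting risk mentioned above.
    total_hits = sum(row[hit_index] for row in screens)
    total_tested = sum(row[3] for row in screens)
    return hit_rate(total_hits, total_tested)

rate_a = pooled_hit_rate(1)
rate_b = pooled_hit_rate(2)
print(f"method A: {rate_a:.2%}, method B: {rate_b:.2%}")
```

With more targets, the same pooled comparison could feed a significance test; the point of the sketch is only that the unit of evaluation is the whole panel, not one target.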
Dave, Josh, and Mario are live, inducting the Hall of Fame class of 2026 for the Bolt Crew Podcast. Plus, the can't-miss guys in the 2026 NFL Draft class.
Is Angus Taylor the right person to lead the Liberal Party? Barnaby Joyce and political experts discuss the ongoing Coalition chaos. Plus, conservative parties across Europe are surging in polls.See omnystudio.com/listener for privacy information.
Angus Taylor quits frontbench ahead of potential leadership spill, One Nation is creeping up on Labor, and an activist doubles down on her pro-Palestinian protest chant.See omnystudio.com/listener for privacy information.
Ok, I don't like ready-made, bolt-on necks. I may be wrong, but it's my opinion and I can't be silenced :) I welcome your feedback, friends. This week we all get together and talk about CBGs, which is what we love doing most. Thank you to CBGitty and KILLER STRINGS for supporting the podcast and YouTube channel. Thank you also to all those builders who have been supporting the show by telling their friends and using the CBGitty affiliate link, which helps us out immensely and allows me to keep it going! You can use the attached affiliate link to receive 10% off the price of your first 3 orders with CBGitty. https://www.cbgitty.com/?ref=birdwood Darren 'Grumpy' McDonald and Joe Oltean from CLUTCH CREATIONS can be contacted via the Facebook Group, and Jesse Thomas from HUMMINGBIRD GUITARS can be found at www.hummingbirdguitarsbyjessethomas.myshopify.com You can order KILLER STRINGS in Australia and see what I've been building at www.birdwoodguitars.com and www.killerstrings.com.au Thanks for listening! Adam Harrison
Pro-Palestinian protesters clash violently with police. Plus, should a famous Australian of the Year be stripped of her title following a disgusting chant at yesterday's ugly protests? See omnystudio.com/listener for privacy information.
Music Matters host Darrell Craig Harris catches up with viral outlaw country artist Travis Bolt at his home in East Texas to talk about his viral country hit "Never Tried Cocaine" and his journey dealing with Tourette's as a busy recording and touring artist! About Travis Bolt: East Texas-born singer/songwriter Travis Bolt's outlaw country sound isn't just a genre, it's his lifestyle. His music is the soundtrack of nights spent around the classic Harley-Davidson motorcycles he loves to work on and tear up back roads with. "I write real songs for real people." 'Blues At My Funeral' - Out Now! 'Burning Bridges' - Out March 6th! www.linktr.ee/travisboltmusic About Music Matters with Darrell Craig Harris: The Music Matters Podcast is hosted by Darrell Craig Harris, a globally published music journalist, professional musician, and Getty Images photographer. Music Matters is now available on Spotify, iTunes, Podbean, and more. Each week, Darrell interviews renowned artists, musicians, music journalists, and insiders from the music industry. Visit us at: www.MusicMattersPodcast.com Follow us on Twitter: www.Twitter.com/musicmattersdh For inquiries, contact: musicmatterspodcastshow@gmail.com Support our mission via PayPal: www.paypal.me/payDarrell Voice-over intro by Nigel J. Farmer
This week, Alex is joined by the "Mr. Worldwide" of Marvel Snap: Dara! They kick things off with a life update, discussing Dara's move from NYC to Australia and now Thailand, living the $3 Bolt ride life while navigating a 12-hour time difference. They dive straight into the Star-Lord Season Pass review. While Alex thinks it's a solid 4-Star card, Dara drops a massive hot take: Star-Lord might be better than Shou-Lao due to his insane synergy with Fin Fang Foom and Grandmaster. Then, they roast the Super Premium card, Magus, agreeing it is "Toxic Doxy" levels of bad and a hard skip (1 Star). They also review Moon Dragon, deciding it's a "Doom 2099"-dependent card that falls flat if not played on Turn 2. On the flip side, Dara claims Drax (Avatar of Life) is the best non-Season Pass card of the month, praising its ability to counter Ramp decks. Finally, they open the Mailbag to discuss a wholesome community letter and debate a spicy game design question: should Marvel Snap introduce a 5-Turn Game Mode to create a true Aggro meta? Plus, a heavy dose of nostalgia as they reminisce about Warcraft III tower rushes and the "Golden Era" of Blizzard. Join Alex Coccia and special guest Dara as they chat about this and more on this episode of The Snap Chat—and catch Cozy and Alex every week as they discuss all things Marvel Snap. Have a question or comment for Cozy and Alex? Send them a Text Message. You've been listening to The Snap Chat. Keep the conversation going on x.com/ACozyGamer and x.com/AlexanderCoccia. Until next time, happy snapping!
What if the best product decision is saying “no” to what everyone else is building?

In this episode of Supra Insider, Marc Baselga and Ben Erez sit down with Alexander Danilowicz, founder and CEO of Magic Patterns, to unpack why his AI prototyping tool is the only one refusing to add backend features—even when competitors like Lovable, Bolt, and v0 are racing in that direction. Alex explains how focusing exclusively on front-end code leads to higher-quality prototyping, why many use cases don't actually need a database, and how product teams at large companies can't risk connecting production data to prototyping tools anyway.

They explore what it takes to maintain conviction when investors, customers, and the entire market seem to be moving the opposite way. Alex shares how using your own product daily keeps you honest about what's actually broken, why real user feedback looks different from “fake” feature requests (like “add dark mode”), and how a strong co-founding relationship helps you resist temptation when external pressure mounts.

If you're a product leader wrestling with feature requests that don't align with your vision, trying to figure out when to follow the market versus when to trust your gut, or building tools in the AI coding space, this episode is for you.

All episodes of the podcast are also available on Spotify, Apple, and YouTube. New to the pod? Subscribe below to get the next episode in your inbox
Thousands of protesters meet in Sydney amid the Israeli President's visit, a Liberal MP delves into the Coalition trainwreck as Pauline Hanson surges in the polls, and the Super Bowl performance divides the US. See omnystudio.com/listener for privacy information.
What if mental clarity, emotional regulation, and better sleep weren't about adding another practice—but undoing a hidden one?

In this conversation, Patrick McKeown reveals how chronic over-breathing quietly drives anxiety, rumination, poor sleep, and brain fog. Drawing from decades of research and lived experience, he explains why breathing less (not more) can improve oxygen delivery, blood flow to the brain, and nervous system balance.

This episode challenges modern breathwork myths and offers practical, science-backed ways to retrain your breathing for everyday life.

Show Partners: Get your MENTAL FITNESS BLUEPRINT here! A special thanks to our mental fitness + sweat partner Sip Saunas
Personal Socrates: Better Question, Better Life
Connect with Marc: https://konect.to/marcchampagne

Timestamps:
00:00 — The question that opens every interview: “Who are you?”
01:20 — Living out of the head vs. living life
03:10 — How stress, sleep, and breathing patterns intersect
05:00 — Discovering breath as a path to presence
07:40 — Why The Power of Now actually worked
10:15 — Walking away from the corporate world
12:30 — The origins of the Buteyko Method
14:40 — Why breathing more air can reduce oxygen delivery
17:10 — Nasal breathing and brain function
19:50 — Rumination, CO₂, and cerebral blood flow
22:30 — Why slow breathing isn't always good breathing
25:10 — Everyday breathing vs. breathwork sessions
28:00 — Practical exercise: calming the nervous system
32:10 — Clearing a blocked nose naturally
36:40 — Breathing for performance and public speaking
41:30 — How to retrain your breath throughout the day
46:00 — Measuring progress: the BOLT score & breath mastery
50:10 — Final reflections on calm, clarity, and control
Kid dives in with myofunctional therapist Alex Clinton, the founder of Building Healthy Faces, on how nasal breathing and tongue posture shape facial development at every age. They unpack mewing, nasal hygiene routines, the BOLT score, and how her Building Healthy Faces approach helps parents start with breathing first, then structure, for better sleep, focus, and smiles.
Connect with Alex Clinton:
Website → functionaloralhealth.ca
Website → buildinghealthyfaces.com
Be featured on The Kid Carson Show: Collapse time on your growth NOW. Step into a premium interview experience, and create content for your business with Kid Carson. Learn more:
John Howard blasts One Nation and Barnaby Joyce hits back, a report to the US Congress asks if Australia can be trusted with the new nuclear submarines it's building for us. Plus, new technology to design your own IVF baby.See omnystudio.com/listener for privacy information.
We're continuing our AI Tools series with Marcos Polanco, engineering leader, founder, and ecosystem builder from the Bay Area, who joins Matt and Moshe to introduce CLEAR, his method for using AI to build real software, not just demos. Drawing on decades in software development and his recent research into how AI is reshaping the way teams ship products, Marcos shares how CLEAR gives both technical and non-technical builders a production-oriented way to work with vibe coding tools.

Instead of treating AI like a magical black box, Marcos frames it as an “idiot savant”: incredibly capable and eager, but with no judgment. CLEAR wraps that raw power in structure, guardrails, and engineering discipline, so founders and PMs can go from prototype to production while keeping humans in control of the last, hardest 20%.

Join Matt, Moshe, and Marcos as they explore:
Marcos's journey through engineering, founding, and AI research, and why he created CLEAR
Why AI tools like Bolt, Cursor, Claude, and Gemini are fabulous for prototypes but risky for production without a method
CLEAR in detail:
C – Context: onboarding AI like a new hire, using stories and behavior-driven design (BDD) to articulate requirements
L – Layout: breaking work into focused, scoped pieces and choosing a tech stack so AI isn't overwhelmed
E – Execute: applying test-driven development (TDD), writing tests first, then having AI write code to pass them
A – Assess: using a second, independent LLM as a QA agent, plus a human-run 5 Whys to fix root causes upstream
R – Run: shipping to users, gathering new data, and feeding it back into the next iteration of context
How CLEAR lowers cognitive load for both humans and AIs and reduces regressions and hallucinations
Why Markdown (with diagrams like Mermaid) is becoming Marcos's standard format for shared human–AI documentation
How CLEAR changes the coordination layer of software development while keeping engineers central to quality and judgment
Practical advice for PMs and
founders who want to move from “just vibes” to predictable, production-grade AI development
And much more!

Want to go deeper on CLEAR or connect with Marcos?
CLEAR on GitHub: https://github.com/marcospolanco/ai-native-organizations/blob/main/CLEAR.md
CLEAR slides: https://docs.google.com/presentation/d/1mwwDtr7cCP5jLUyNVgGR5Aj-MBq8xsMlhSc0pvSQDks/edit?usp=sharing
LinkedIn: https://www.linkedin.com/in/marcospolanco

You can also connect with us and find more episodes:
Product for Product Podcast: http://linkedin.com/company/product-for-product-podcast
Matt Green: https://www.linkedin.com/in/mattgreenproduct/
Moshe Mikanovsky: http://www.linkedin.com/in/mikanovsky

Note: Any views mentioned in the podcast are the sole views of our hosts and guests, and do not represent the products mentioned in any way.
Please leave us a review and feedback ⭐️⭐️⭐️⭐️⭐️
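The Execute step of CLEAR, writing tests first and then asking the AI for code that passes them, can be sketched in a few lines. This is a minimal illustration, not code from the episode; the `slugify` function and its behavior are invented for the example.

```python
# Test-first sketch of CLEAR's Execute step: the human writes the
# test (the spec) before the AI is asked for an implementation.
import re

def test_slugify():
    # Spec, written first: lowercase, drop punctuation, hyphenate words.
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  CLEAR  Method ") == "clear-method"

# The implementation the AI would iterate on until the test passes.
def slugify(text):
    """Lowercase the text, keep alphanumeric runs, join with hyphens."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)

test_slugify()  # the spec is satisfied
```

The point is the ordering: the test encodes the requirement up front, so the AI's output is judged against something fixed rather than "vibes."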
US Ambassador to Israel Mike Huckabee talks about whether Australia should join Trump's 'Board of Peace', the deep split between the Nationals and Liberals, and celebrity reputations continue to tarnish following the release of the Epstein files. See omnystudio.com/listener for privacy information.
Topics covered in this episode: django-bolt (faster than FastAPI, but with Django ORM, Django Admin, and Django packages), pyleak, More Django (three articles), Datastar, Extras, Joke

Watch on YouTube

About the show
Sponsored by us! Support our work through: our courses at Talk Python Training, The Complete pytest Course, Patreon Supporters
Connect with the hosts
Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)
Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 11am PT. Older video versions available there too.
Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list; we'll never share it.

Brian #1: django-bolt: Faster than FastAPI, but with Django ORM, Django Admin, and Django packages
By Farhan Ali Raza
A high-performance, fully typed API framework for Django
Inspired by DRF, FastAPI, Litestar, and Robyn
Django-Bolt docs
Interview with Farhan on Django Chat Podcast
And a walkthrough video

Michael #2: pyleak
Detect leaked asyncio tasks, threads, and event-loop blocking, with stack traces, in Python. Inspired by goleak.
Has patterns for context managers and decorators
Checks for unawaited asyncio tasks, threads, and blocking of an asyncio loop
Includes a pytest plugin so you can do @pytest.mark.no_leaks

Brian #3: More Django (three articles)
Migrating From Celery to Django Tasks, by Paul Taylor: a nice intro to how easy it is to get started with Django Tasks
Some notes on starting to use Django, by Julia Evans: a handful of reasons why Django is a great choice for a web framework: less magic than Rails, a built-in admin, a nice ORM, automatic migrations, nice docs, you can use sqlite in production, built-in email
The definitive guide to using Django with SQLite in production: I'm gonna have to study this a bit.
The conclusion states one of the benefits is “reduced complexity,” but it still seems like quite a bit to me.

Michael #4: Datastar
Sent to us by Forrest Lanier. Lots of work by Chris May. Out on Talk Python soon.
Official Datastar Python SDK
Datastar is a little like HTMX, but the single source of truth is your server, and events can be sent from the server automatically (using SSE), e.g. yield SSE.patch_elements( f"""{(#HTML#)}{datetime.now().isoformat()}""" )
Why I switched from HTMX to Datastar article

Extras
Brian: Django Chat: Inverting the Testing Pyramid - Brian Okken (quite a fun interview); PEP 686 – Make UTF-8 mode default, now with status “Final” and slated for Python 3.15
Michael: Prayson Daniel's Paper tracker; Ice Cubes (open-source Mastodon client for macOS); Rumdl for PyCharm, et al.; cURL Gets Rid of Its Bug Bounty Program Over AI Slop Overrun; Python Developers Survey 2026

Joke: Pushed to prod
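The kind of leak pyleak is built to flag can be reproduced with the standard library alone. The sketch below uses only asyncio (it does not use pyleak's actual API; the task name and structure are invented) to show an un-awaited task being detected and cleaned up:

```python
# Stdlib-only sketch of what a leaked-asyncio-task check looks for.
import asyncio

async def forgotten_worker():
    # Simulates a background job that nobody ever awaits or cancels.
    await asyncio.sleep(3600)

async def main():
    # Bug under test: a task is created and then forgotten.
    asyncio.create_task(forgotten_worker(), name="forgotten")
    await asyncio.sleep(0)  # yield control so the task actually starts
    # Leak check: which tasks are still pending besides ourselves?
    leaked = [t for t in asyncio.all_tasks() if t is not asyncio.current_task()]
    for t in leaked:
        t.cancel()  # clean up so the loop can shut down quietly
    return [t.get_name() for t in leaked]

print(asyncio.run(main()))  # prints ['forgotten']
```

pyleak packages this style of check (plus thread and event-loop-blocking detection, and stack traces) behind context managers, decorators, and the `@pytest.mark.no_leaks` marker mentioned above.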
Kyle Crooks sits down with Nebraska head baseball coach Will Bolt to preview the upcoming 2026 season. Opening night for the Huskers will be on February 13th in Arizona.
Pauline Hanson joins the program to talk about One Nation's star candidate, Matthew Canavan discusses Labor's budget blowouts, and an expert provides insight into China's plans for Antarctica. See omnystudio.com/listener for privacy information.
Celebrities caught out in damning new Epstein file releases, an economist discusses Australia's inflation woes, and the price of one Teal independent's rental comes under question.See omnystudio.com/listener for privacy information.
Isaiah 40:1-8
Highlights of this news bulletin: 1. NVIDIA to set up a second headquarters in Taiwan? Jensen Huang: there's a very good chance. 2. A driver on Provincial Highway 64 dumped a passenger, who was then run over and killed; ride-hailing platform Bolt issued a statement and apology. 3. Eggs from the avian-flu-hit Fengkang farm were sold to seven vendors in Taichung and also reached Taipei City and Miaoli. 4. The UK and China resume commercial ties; Trump warns it is "very dangerous"... and more.
Barnaby Joyce joins the program to discuss the leadership woes of his old party, other favourite guests talk about all things Trump and Iran, and Australia's immigration laws come under the microscope once again. See omnystudio.com/listener for privacy information.
Affiliates: Use promo code BOLTBROS on Sleeper and get 100% match up to $100! https://Sleeper.com/promo/BOLTBROS. Terms and conditions apply. #Sleeper

The Los Angeles Chargers have made a major splash by hiring Chris O'Leary as their new Defensive Coordinator for the 2026 season! After a strong 2025 campaign under Jim Harbaugh, the Bolts are doubling down on defensive excellence by bringing in the highly respected coach known for his aggressive, attacking schemes and player development. O'Leary, previously the linebackers coach and co-defensive coordinator at Michigan during Harbaugh's tenure, brings championship pedigree and a proven track record of turning units into elite forces. We break down what this hire means for stars like Khalil Mack, Tuli Tuipulotu, Derwin James Jr., and the entire Chargers defense—could this be the missing piece for a legitimate Super Bowl run in 2026? Get the full story, background on O'Leary's philosophy, key highlights from his Michigan days, and expert analysis on how this move elevates the Bolts in the loaded AFC West!

AFC West Roundtable: https://www.youtube.com/@AFCWestRoundtable
Links: https://www.Beacons.ai/boltbros https://www.riverslake.org/
Merch! https://nflshop.k77v.net/Ry9ymX https://www.boltbros.live/merch

#lachargers #chargers #nfl #boltup #shorts #memes #meme #justinherbert #jimharbaugh #nflfootball #Chargers #ChrisOLeary #DefensiveCoordinator #JimHarbaugh #LosAngelesChargers #NFL2026 #BoltUp #ChargersDefense #NFLHiring #Football #NFLNews #AFCWest #SportsAnalysis #ChargersNation #DefensiveScheme
The guys recap their pre-show trip to The Bolt in El Segundo to meet new Chargers OC Mike McDaniel. Flip Top Story of the Day. Secret Textoso Roundup. See omnystudio.com/listener for privacy information.
MDJ Script / Top Stories for January 28th. Publish Date: January 28th

Commercial: From the BG Ad Group Studio, welcome to the Marietta Daily Journal Podcast. Today is Wednesday, January 28th, and happy birthday to Jermaine Dye. I’m Keith Ippolito, and here are the stories Cobb is talking about, presented by Times Journal:
Local student Mathletes to compete in Cobb County Math Contest
Support Cobb law enforcement and get a state tax credit
Lawmakers push transparency in school board public comments
All of this and more is coming up on the Marietta Daily Journal Podcast, and if you are looking for community news, we encourage you to listen and subscribe!

BREAK: INGLES 9

STORY 1: Local student Mathletes to compete in Cobb County Math Contest
Cobb County’s middle school math whizzes are gearing up for the local MATHCOUNTS competition on Feb. 28 at Marietta High School. Organized by the Cobb County Chapter of the Georgia Society of Professional Engineers, the event will feature teams from Dickerson, Dodgen, and Hightower Trail middle schools. These students have been prepping since fall—hours of practice, problem-solving, and probably a few late-night algebra sessions. The competition includes both individual and team rounds, with topics like geometry, probability, and statistics. Oh, and there’s a fast-paced oral round too—no pressure, right? Winners will snag prizes and move on to the state finals on March 9 in Buford. MATHCOUNTS, a national program, aims to spark a love for math in middle schoolers—because let’s face it, this is the age where kids either embrace math or start running from it. With 50,000 students competing nationwide this year, it’s a big deal. For details, check out www.mathcounts.org.

STORY 2: Support Cobb law enforcement and get a state tax credit
Tax season is here, and if you live in Cobb County, there’s a way to support local law enforcement and get a state income tax credit.
Thanks to the 2022 LESS Crime Act (short for Law Enforcement Strategic Support Act), Georgia taxpayers can donate to approved public safety foundations and get a dollar-for-dollar credit on their state taxes. Here’s the deal: individuals can donate up to $5,000, couples filing jointly can give $10,000, and corporations can contribute up to 75% of their state tax liability. Statewide, there’s a $75 million cap, and each foundation can accept up to $5 million annually. The process? Register with the Georgia Tax Center, wait for approval, and send your donation within 60 days. Funds go toward training, equipment, officer wellness, and community programs. In Cobb, you can donate to:
Cobb Sheriff’s Foundation
Acworth Police Community Foundation
Cobb County Public Safety Foundation
Kennesaw Public Safety Foundation
Marietta Police Foundation
For links and details, visit their websites.

STORY 3: Lawmakers push transparency in school board public comments
Cobb County lawmakers are pushing for more transparency in school board meetings with House Bill 989, which would require public comments to be broadcast or recorded if the rest of the meeting is aired. Rep. David Wilkerson said it’s about consistency: “If you’re showing the meeting, show all of it. Don’t cut out the tough parts.” The bill comes after Cobb’s school board stopped broadcasting public comments last year, sparking backlash from parents and lawmakers. Critics called it censorship; the board cited liability concerns. Rep. Solomon Adesanya said public comments are crucial for oversight: “If you only hear one side, you control the narrative.” The bill has bipartisan support, with Rep. Jordan Ridley also signing on. “Transparency matters,” he said. “If you’re broadcasting, show the good, bad, and everything in between.” Meanwhile, Ridley floated the idea of an independent audit for Cobb schools, similar to one he championed in Cherokee County.
Cobb school board Chair Randy Scamihorn defended the district, saying claims of a lack of transparency are “absolutely false.” Still, he invited lawmakers to review their processes, adding, “No organization is perfect.” We have opportunities for sponsors to get great engagement on these shows. Call 770.799.6810 for more info. We’ll be right back.

Break: INGLES 9

STORY 4: Cobb opens $24M joint police, sheriff firing range
Cobb County just unveiled its shiny new $24 million firing range, and let’s just say—it’s a game-changer. Sheriff Craig Owens and Police Chief Dan Ferrell cut the ribbon Friday morning, joined by the Board of Commissioners, a crowd of officers, and deputies. The 65,000-square-foot facility, located next to the Public Safety Training Academy in Austell, replaces the old outdoor range that had been around for over 30 years. That one? It had a strict 8 p.m. curfew because of nearby neighborhoods. Now? Training can happen 24/7. The range features three separate areas, including a 100-yard precision range, and a high-tech 360-degree targeting system for realistic drills. Officers can train in low-light, no-light, and even less-lethal scenarios. Paid for with SPLOST funds, the range is a long-term investment in public safety—and a big win for Cobb County.

STORY 5: Northwest Georgia voters to head to polls March 10 for federal and, now, state election
Northwest Georgia voters are in for a political doubleheader on March 10. Not only will they pick a new state senator, but they’ll also decide if the former holder of that Senate seat, Colton Moore, should head to Congress. Here’s the backstory: Rep. Marjorie Taylor Greene resigned in January with a year left in her U.S. House term, triggering a special election for District 14. Moore, who represented Senate District 53 (Catoosa, Chattooga, Dade, Walker, and part of Floyd counties), stepped down mid-January to join the crowded race for Greene’s seat—22 candidates, to be exact.
Qualifying for Moore’s old Senate seat runs Jan. 29 to Feb. 2. Voter registration closes Feb. 9, with early voting starting Feb. 16. If no one wins outright, expect a runoff on April 7. Buckle up, northwest Georgia—it’s going to be a busy ballot.

Break:

STORY 6: Chris Carr talks public safety in Cobb
Georgia Attorney General Chris Carr didn’t hold back when he spoke to the Cobb County Republican Women’s Club on Friday. Public safety, he said, isn’t just about stopping crime—it’s about supporting law enforcement, tackling mental health, and improving education. And now, as a candidate for governor, he’s making his case. Carr highlighted his record: creating units to fight human trafficking, gangs, opioids, and organized retail crime. “Keeping people safe is the most basic job of government,” he said. “If families don’t feel safe, we’ve failed.” He shared staggering numbers—over 200 children rescued from trafficking, 115 gang members convicted—and warned about the fentanyl crisis, calling it a “war” fueled by Mexican cartels. His office recently seized 15 pounds of the drug, enough to kill millions. On education, he stressed the importance of literacy by third grade and slammed “woke progressivism” in schools. “Our kids aren’t social experiments,” he said. “Schools should teach reading, writing, and math—not radical ideology.” Mental health? Another priority. Carr called for more facilities statewide, saying jails shouldn’t double as treatment centers. He also floated limiting phones in high schools, blaming social media for worsening students’ mental health. When asked about gambling, Carr stood firm against casino betting, citing addiction concerns. On minors accessing pornography, he tied it to human trafficking and expressed fears about AI being used to exploit kids.
Former Cobb GOP Chair Rose Wing praised Carr’s tough stance on drug cartels and said she believes he’d make a “great governor.”

STORY 7: Woodstock native Bolt named assistant golf coach at KSU
Abigail Bolt, a former Woodstock High School star, is heading back to familiar turf—this time as the new assistant women’s golf coach at Kennesaw State. Owls head coach Ket Vanderpool, who worked with Bolt for three seasons at Georgia State, made the announcement Friday. Bolt, who played collegiate golf at Appalachian State from 2017-21, brings a mix of coaching chops and on-course expertise. At Georgia State, she helped lead the team to nine top-five finishes and four tournament wins. Before that? She honed her skills at Towne Lake Hills Golf Club, running junior clinics and managing tournaments. As a player, Bolt was a standout at Appalachian State, earning MVP honors her senior year and finishing with a 77.81 stroke average. Since graduating in 2021, she’s stayed active in the game, competing in amateur events and continuing to build her career in golf. We’ll have closing comments after this.

Break: INGLES 9

Signoff: Thanks again for hanging out with us on today’s Marietta Daily Journal Podcast. If you enjoy these shows, we encourage you to check out our other offerings, like the Cherokee Tribune Ledger Podcast, the Marietta Daily Journal, or the Community Podcast for Rockdale, Newton, and Morgan Counties. Read more about all our stories and get other great content at www.mdjonline.com. Did you know over 50% of Americans listen to podcasts weekly? Giving you important news about our community and telling great stories are what we do. Make sure you join us for our next episode and be sure to share this podcast on social media with your friends and family. Add us to your Alexa Flash Briefing or your Google Home Briefing and be sure to like, follow, and subscribe wherever you get your podcasts.
Produced by the BG Podcast Network
Show Sponsors: www.ingles-markets.com
See omnystudio.com/listener for privacy information.
It's been a huge few weeks for the electric vehicle industry — at least in North America.

After a major trade deal, Canada is set to import tens of thousands of new electric vehicles from China every year, and it could soon invite a Chinese automaker to build a domestic factory. General Motors has already killed the Chevrolet Bolt, one of the most anticipated EV releases of 2026.

How big a deal is the China-Canada EV trade deal, really? Will we see BYD and Xiaomi cars in Toronto and Vancouver (and Detroit and Seattle) any time soon, or is the trade deal better for Western brands like Volkswagen or Tesla, which have Chinese factories as well as a Canadian presence? On this week's Shift Key, Rob talks to Greig Mordue, a former Toyota executive who is now an engineering professor at McMaster University in Hamilton, Ontario, about how the deal could shake out. Then he chats with Heatmap contributor Andrew Moseman about why the Bolt died, and the most exciting EVs we could see in 2026 anyway.

Shift Key is hosted by Robinson Meyer, the founding executive editor of Heatmap, and Jesse Jenkins, a professor of energy systems engineering at Princeton University. Jesse is off this week.

Mentioned:
Canada's new "strategic partnership” with China
The Chevy Bolt Is Already Dead. Again.
The EVs Everyone Will Be Talking About in 2026

This episode of Shift Key is sponsored by Heatmap Pro, which brings all of our research, reporting, and insights down to the local level. The software platform tracks all local opposition to clean energy and data centers, forecasts community sentiment, and guides data-driven engagement campaigns. Book a demo today to see the premier intelligence platform for project permitting and community engagement.

Music for Shift Key is by Adam Kromelow. Hosted on Acast. See acast.com/privacy for more information.
This episode of Turn Down for What dives into the latest news from the world of electric vehicles!
"There is a slight echo in the audio, and we apologize for the inconvenience. A full transcript is available if you prefer to read instead of listen."What if the key to resolving your client's chronic inflammation, metabolic dysfunction, or hormone imbalance was a neural switch hidden in the body? In this episode of ReInvent Healthcare, Dr. Ritamarie sits down with “The Vagus Nerve Doc,” Dr. Navaz Habib, to expose how vagus nerve activation regulates the immune system, improves digestion, and acts as the true foundation for healing. If your clients aren't getting results, even with the perfect diet or plan, this could be why.What's Inside This Episode?The most overlooked switch that determines whether healing can even beginWhy the vagus nerve is more than just a “relaxation nerve” and what it controlsA deep dive into the neuroimmune connection (and how it explains autoimmune flares, chronic fatigue, and more)The role of HRV, CO₂ tolerance, and breathwork in rewiring the nervous systemWhat the 80/15/5 rule reveals about the direction of vagus nerve communicationHow to assess vagus tone with the BOLT score, no labs neededWhy even GLP-1 and satiety hormones rely on vagus nerve signalingA practical breathwork roadmap, plus the best timing for real resultsBonus: how Dr. Ritamarie's son hacked an early HeartMath device on a plane!Resources and Links:Download the Full Transcript HereDownload our FREE Guide to Adrenal Support Join the Next-Level Health Practitioner Facebook group Visit INEMethod.com for advanced practitioner tools and trainingCheck out other podcast episodes: ReInvent HealthcareGuest Resources and Links:Checkout Dr Habib's website DrNavazHabib.com and HealthUpgraded.com
Former prime minister Scott Morrison says Muslim imams should have to have a licence to preach. Plus, the NSW premier and health minister won't say why a Jewish woman at Liverpool hospital had to be disguised with an Anglo name.See omnystudio.com/listener for privacy information.
Ugly scenes during Invasion Day protests, questions raised as to whether a record-breaking heatwave is global warming, and a hospital changes the name of a Jewish Bondi victim. See omnystudio.com/listener for privacy information.
Jason William Johnson, PhD, Founder of SoundStrategist, is driven by two lifelong passions: creating and teaching. Through SoundStrategist, Jason designs AI-powered learning experiences and intelligent coaching systems that blend music, gamification, and experiential learning to drive real skill development and engagement for enterprises and entrepreneur support organizations. We explore Jason's journey as a musician, educator, and business coach, and how he fused those disciplines into an AI-first company. Jason shares his AI for Deep Experts Framework, showing how subject-matter experts can identify an industry pain point, envision a solution, brainstorm with AI, leverage AI tools to build it, and go after high-value impact—turning deep expertise into scalable products and platforms without needing to be technical. He also explains how AI accelerates research and product design, how “vibe coding” enables rapid MVP development, and why focusing on high-value B2B impact creates faster traction with less complexity. — Turn Your Expertise Into Software with Jason W. Johnson Good day, dear listeners. Steve Preda here, the Founder of the Summit OS Group, developing the Summit OS Business Operating System. And my guest today is Jason William Johnson, PhD, the Founder of SoundStrategist. His team designs AI-powered learning experiences and deploys intelligent coaching systems for enterprises and entrepreneur support organizations blending music, gamification, and experiential learning to drive real skill development and engagement. Jason, welcome to the show. Thanks for having me, Steve. I’m excited to have you and to learn about how you blend music and learning and all that together. But to start with, I’d like to ask you my favorite question. What is your personal ‘Why’ and how are you manifesting it in your business? I would say my personal ‘Why’ is creating and teaching. Those are my two passions. So when I was younger, I was always a creative. 
I did music, writing, and a variety of other things. So I had always been passionate about creating, but I've also been passionate about teaching. I've been informally a teacher for my entire adult life, coaching and training, and I've also been an actual professor. So through SoundStrategist, I'm combining those two passions: the passion for teaching and imparting wisdom, along with the passion for creating through music, AI-powered experiences, gamification, and all of those different things. So I'm really in my happy place. Yeah, sounds like it. It sounds like you're very excited talking about this. So this is quite an unusual type of business, and I wonder how you stumbled upon this combination, this portfolio of activities, and put them all into a business. How did that come about? So Liam Neeson says, "I have a unique combination of skills," like in Taken. I guess that's kind of how I came up with SoundStrategist. I've pretty much been in music forever. I've been a musician, songwriter, producer, and rapper since I was a child. My father was a musician, so it was kind of a genetic skill that I adopted and that was cultivated at an early age. So I was always passionate about music. Then I got older, grew up, got into business, and really became passionate about training and educating. So I pretty much started off running entrepreneurship centers. My whole career has been in small business and economic development. SoundStrategist was a happy marriage of the two when I realized, oh, I can actually use rap to teach entrepreneurship, to teach leadership skills, and now to teach AI and a variety of other things. So pretty much it was just that fusion of things. And then when we launched the company, it was around the time ChatGPT came out. So we really wanted to make sure we were building it to be AI-first.
At first, we were just using AI in our business operations, but then we started experimenting with it for client work, like integrating AI-powered coaches in some of the training programs we were running. And that really proved to be valuable, because one of the things I learned when I was running programs throughout my career was that you always wanted to have the learning side and the coaching side. The learning side generalizes the knowledge for everybody and level-sets everybody. But everybody's business, or everybody's situation, is extremely unique, so you need to have that personalized support and assistance. And when we were running programs in the entrepreneurship centers I was running, we would always have human coaches. AI enabled us to scale coaching for some of the programs we're building at SoundStrategist. So with me having been a business coach for over 15 years, I knew how to train the AI chatbots. It started off as simple chatbots, and now it's evolved into full agents that use voice and all those other capabilities. But it really started as: let's put some chatbots into some of our courses and programs to reinforce the learning and personalize it, and then it just developed from there. Okay, so there's a lot in there, and I'd like to unpack some of it. When you say use rap to teach, I'm thinking of rap as a kind of poetry. So how do you use poetry, or how do you use rap, to teach people? Is it more catchy if it is delivered in the form of a rap song? How does it work? So you kind of want to make it catchy. Our philosophy is this: when you listen to it, it should sound like a good song. Because there's a real risk of it sounding corny if it's done wrong, right? So we always focus on creating good music first and foremost when we're creating a music-based lesson. So it should be a good song.
It should be something you hear and think, oh, between the chorus and the music, this actually sounds good. But then, the value of music is that once you learn the song, you learn the concept, right? Because once you memorize the song, you memorize the lyrics, which means you memorize the concept. One of the things we also make sure to do is introduce concepts. The best way I can describe this, and this might be funny, is this: I grew up in the nineties, and a lot of rappers talked about selling dr*gs and things like that. I never sold dr*gs in my life. But just by listening to rap music and hearing them introduce those concepts, if I ever decided to go bad, I would have a working theory, right? So the same thing with entrepreneurship, and the same thing with business principles. You can create songs that introduce the concepts in a way where, if a person's never done it, they're introduced to the vocabulary. They're introduced to the lived experiences. They're introduced to the core principles. And then they can take that, go apply it, and have a working theory on how to execute in their business. So that's the philosophy that we took: let's make it memorable music, but also introduce key vocabulary, lived experiences, and key concepts, so that when people are done listening to the song, they memorize it, they embody it, and they connect with it. Now they have a working theory for whatever the song is about. And are you using AI to actually write the song? No, we're not. That's one of the things we haven't really integrated on the AI front, because the AI is not good enough to take what's exactly in my head and turn it into a song. It's good for somebody who doesn't have any songwriting or musical capability to create something that's cool.
But as a musician, as somebody who writes, you have a vision in your head of how something should sound sonically, and the AI is not good enough to take what's in my head and put it into a song. Now, what we are using are some of the AI tools like Suno for background music. We used that to create all the background music for our courses from scratch, so that we can have original material as opposed to having to use licensed music from places like Epidemic Sound. So we are using it for the background music. But for the actual music-based lessons, we're still doing those old school. Okay, that's pretty good. We are going to dive in a little deeper here, but before we go there, I'd like to talk about the framework that you're bringing to the show. I think we called it the AI for Deep Experts Framework. That's the working title right now; we're still finalizing it. Yeah. But the idea, at least the way I'm understanding it, is that if someone has deep domain expertise, AI can be a real accelerator and amplifier of that expertise. Yep. So for people who are listening to this, who have domain expertise and want to use AI to deliver it to more people, reach more people, and create more value, what is the framework? What is the five-step framework to get them there? Number one: provided that you have deep expertise, you should be able to identify a core pain point in your respective industry that needs solving. Maybe it's something that, throughout your career, you wanted to solve, but you weren't able to get the resources allocated to get it done in your job. Or maybe it required some technical talent and you weren't a developer, or whatever, right?
But you should be able to identify the pain point, a sticking pain point that needs to be solved, one that, if it's solved, could really create value for customers. That's just old-school opportunity recognition. Number two: the great thing about AI is that you can leverage it to do a lot of deep research on the problem. Obviously, you're still going to have conversations to better understand the pain point, and you're going to look at your own lived experiences. But now you can also leverage AI tools, using Perplexity or Claude, to do deep research on a market opportunity. So whether or not you have experience in market research, you can use an AI tool to help identify the total addressable market. You can brainstorm with it to uncover additional pain points, and it can help you flesh out your value proposition, your concept statement, and all of the things that are critical to communicating the offering. Because before we transact in money, we always transact in language, right? So AI can help you articulate the value proposition, understand the pain point, all of those different things. And if you have deep expertise and haven't really turned it into a framework, the AI can help you turn it into a framework and then develop a workflow to deliver value. So now you have the framework, you have the market understanding, and all of those different things. AI can even help you think through what the product would look like: the user experience, the workflow, things like that. Now you can use an AI-powered tool to help you build it. You can use something like Lovable, Bolt, or Cursor, all different AI-powered tools. For people who are newer to development and have never done development before, I would recommend something like Lovable or maybe Bolt.
But once you get more comfortable and want to make sure you're building production-ready software, then you move to something like Cursor. Cursor has a large enough context window (the context window is basically the memory of an AI tool) to deal with complex codebases. A lot of engineers are using it to build real, production-ready platforms. But for an MVP, Bolt and Lovable are more than good enough. So one of the things I recommend when building with one of these tools is to do what's called a PRD prompt. PRD stands for Product Requirements Document. For those who aren't familiar with software development: traditionally, though this happens less and less now, the product manager would create a Product Requirements Document. This basically outlines the goals of the platform, target audience, core features, database architecture, technology stack, all of the different things that engineers would need in order to build the platform. So you can go to something like Claude or ChatGPT and say: "Create a PRD prompt for this app idea," and then give as much detail as possible: the features, how it works, brand colors, all of those different things. Then the AI tool, whether you're using ChatGPT, Claude, or Gemini, will generate your PRD prompt. It's going to be a really, really long prompt, but it's going to have all of the things that the web-building or app-building tool needs to know in order to build the platform. It's going to have all the specifications. So you copy and paste it. Is this what people call vibe coding? Yeah, this is vibe coding. But the PRD prompt helps you become more effective at vibe coding because it gives the AI the specifications it needs, in the language it understands, to increase the likelihood that you build your platform correctly.
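As a rough illustration of the PRD-prompt step just described, here is a minimal Python sketch that assembles a "create a PRD prompt" request you could paste into ChatGPT, Claude, or Gemini. The section headings, the example app idea, and the detail fields are illustrative assumptions, not a prescribed template.

```python
# Sketch: assemble a meta-prompt asking a chat model to generate a PRD prompt.
# The section list and app details below are illustrative assumptions.

PRD_SECTIONS = [
    "Goals of the platform",
    "Target audience",
    "Core features",
    "Database architecture",
    "Technology stack",
]

def build_prd_request(app_idea: str, details: dict) -> str:
    """Build a meta-prompt asking an AI to generate a PRD prompt."""
    detail_lines = "\n".join(f"- {k}: {v}" for k, v in details.items())
    sections = "\n".join(f"- {s}" for s in PRD_SECTIONS)
    return (
        f"Create a PRD prompt for this app idea: {app_idea}\n\n"
        f"Details:\n{detail_lines}\n\n"
        f"Make sure the PRD covers:\n{sections}"
    )

request = build_prd_request(
    "AI-powered advisor hotline for small businesses",
    {"Brand colors": "navy and gold", "Key feature": "24/7 voice calls"},
)
print(request)
```

The idea is simply that the more structure and detail you pack into the request, the more complete the generated PRD prompt will be when you paste it into a builder like Bolt or Lovable.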
Because once you build the PRD prompt, the AI is going to know, okay, this is the database structure. It's going to know whether this is a React app versus a Next.js app. It's going to know, okay, we're building a frontend with Netlify. The stuff that you may not know, the AI will know, and it will build the platform from that. So then you take that prompt, paste it into Lovable or Cursor, and you can get into your vibe coding flow. Don't let the hype fool you, though, because a lot of people will say, "Oh, I built this app in 15 minutes using Lovable." No, it still requires time. But if you can build a full-stack application in two weeks when it typically takes several months, that's still super fast. So on average you can build something in a couple of weeks, especially once you get familiar with the process. But if this is your first time ever doing this, pay attention to things like when the app needs debugging and some of the other issues that come up. Start paying attention, because you're going to learn certain things by doing. As you go through the process, you'll begin to understand things like, okay, this is what an edge function is, this is what a backend is. You'll start learning these different things as you're going through the process, right? So you get the platform built. Now the next step is you want to distribute the platform. Obviously, if you've been in your industry for a while and you have some expertise, you should have some distribution. You should have some folks in your space who are your ICP that you can start having customer conversations with and start trying to sell the platform to. One of the things that I always recommend is going B2B and selling something for significant value as opposed to going B2C and selling a bunch of $19.99 subscriptions. And the reason for that is a couple of different things.
Number one, when you have to do a lot of volume, your business model becomes more complicated, and you have to introduce things to manage that volume. Whereas if you're selling a solution that's a five-figure to six-figure offering, with 10 or 15 clients you can get to serious revenue with less complexity in your business model. So I always say go B2B, at least a five-figure annual offering, because most of the offerings we sell are at least high five figures or low six figures: subscriptions, SaaS licensing, or whatever. That way it introduces less complexity to your business model, and it allows you to get as much revenue as possible. And then as you go to market, you're going to learn. You're going to learn that maybe customers want this or that feature, or that we thought people were going to use the platform this way, but they're actually using it that way. So you're always learning, always evolving, and adjusting the offering. Okay, so let's say I have deep expertise in some area, maybe investment banking or whatever. I want to use AI. I identify an industry pain point that I've addressed or maybe personally experienced. I visualize a solution, then I brainstorm with ChatGPT or Claude or whatever, figure out what to do, and then I leverage AI tools like Cursor, Lovable, or Bolt. I set the price point. I go B2B. Is this something that, as a subject-matter expert, is efficient for me to do myself because I have the expertise and the vision? Or is it better for me to hire someone to do this? It depends on what your bandwidth is. I'm of the firm belief that these are skills you probably want to unlock anyway. So it might be worth going through the process of learning the tools and leveraging them. And that's kind of how you future-proof yourself.
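To make the B2B-versus-B2C point above concrete, here is a quick back-of-the-envelope comparison. The specific dollar figures are assumptions chosen only to illustrate the scale difference, not numbers from the conversation.

```python
# Back-of-the-envelope: volume needed from B2C subscriptions vs. B2B deals.
# All figures are illustrative assumptions.

b2c_monthly_price = 19.99          # a typical consumer subscription
b2b_annual_deal = 50_000           # a five-figure annual B2B offering
target_annual_revenue = 500_000    # the revenue goal for the comparison

# How many customers does each model require to hit the same target?
b2b_clients_needed = target_annual_revenue / b2b_annual_deal
b2c_subscribers_needed = target_annual_revenue / (b2c_monthly_price * 12)

print(f"B2B clients needed:     {b2b_clients_needed:.0f}")
print(f"B2C subscribers needed: {b2c_subscribers_needed:.0f}")
```

Ten B2B clients versus roughly two thousand retained subscribers for the same revenue is exactly the complexity gap the framework's distribution step is pointing at.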
Now, obviously, if you have bandwidth limitations, there are firms and organizations that you can hire to do it for you, developers and things like that. But the funny thing about a lot of developers is that even though they're using AI, they're still charging the prices they charged before AI, right? They're just getting it done faster, and their margins are a lot higher. So you're still going to pay, in a lot of instances, developer pricing for a platform. Those are the things you have to consider for your own personal situation. But personally, I believe these are skills worth unlocking. Because one of the things is, if you get very senior in your career, let's say you've been there 15, 16, 20 years, we all know there's a point where you either move up to the C-suite or you get caught in upper-middle-management purgatory, where you're in that VP or senior director space and you just kind of hover there. At that point, your career moves tend to be lateral: going from one VP role to another VP role, one senior director role to another senior director role, right? At that point, your income potential starts to get limited. So unlocking one of these skills and becoming more entrepreneurial is something I genuinely believe is worth doing. And what would you say is the time requirement for someone to get competent in vibe coding? Three months minimum. You could be pretty solid in three months. But three months full-time or three months part-time? Three months part-time. So three months. That's about 143 working hours in a regular month, so roughly 420 to 430 hours over three months if you were full-time. If you spend weekends working on your project, learning how to build it, taking notes, and actually going through the process, you can get pretty decent in a couple of months.
Now, obviously, there are still levels as you continue to progress, but you can get pretty solid in a couple of months. Another thing you want to consider is who you're selling to. You obviously want to make sure that your platform security is really well done. So even if you build it yourself and then have an engineer do a code review, that's cheaper than having them build it. I think if you spend three months, you can get really good at building solutions for what you need to get done. And then from there, you just get better and better. How do I know that, let's say, I hire someone in Serbia to do a code review for me? Let's say I learn the vibe coding thing and create the prototype, then I have someone clean the code. How do I know whether they did a good job? You really don't. You really don't know until the platform's in the wild and it's like, okay, it's secure. So there are some things that you can do to check behind people. Let's say you don't have the money to do a full security audit or to hire a developer specifically for a security review. One of the things you can do is a multi-agent review. You take your codebase, have Claude review it, have OpenAI Codex review it, have a Cursor agent review it. You have multiple agents do a review. Then they check each other's work, if you will. They identify things that the others may not have identified, so you can get the collective wisdom of those three and be like, "Okay, I need to shore this up. I need to fix this. I need to address that." That gives you more confidence. It still doesn't replace a person with deep expertise in making sure code is secure, but it will catch common issues, like hard-coding API keys, which is a risk, right? It'll catch those types of things that typically happen.
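The hard-coded API key issue mentioned above has a straightforward fix that even a non-developer can apply: read secrets from the environment at runtime instead of embedding them in the source. A minimal sketch, where the variable name `MY_SERVICE_API_KEY` is a placeholder for whatever service you actually use:

```python
import os

# Bad: a hard-coded key ships with the codebase and leaks in shared repos.
# api_key = "sk-live-abc123..."

def load_api_key(name: str = "MY_SERVICE_API_KEY") -> str:
    """Read a secret from the environment instead of hard-coding it."""
    key = os.environ.get(name)
    if key is None:
        raise RuntimeError(f"{name} is not set; export it before running.")
    return key

# Simulate a configured environment for demonstration only.
os.environ["MY_SERVICE_API_KEY"] = "example-not-a-real-key"
print(load_api_key())
```

In a real deployment you would set the variable in your hosting platform's secrets manager rather than in code, and the failing-fast `RuntimeError` makes a missing configuration obvious instead of silently shipping a broken app.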
But let's say you do have a security code review done; you can take that same approach to check their work, because they shouldn't find any major vulnerabilities. The AI agents that come in after shouldn't really find any major vulnerabilities if it was done securely. Another thing to consider is that a lot of these tools use Supabase for the backend and database. Supabase also has a built-in security advisor, including an AI security advisor, that points out security issues, performance problems, and configuration errors. So you do have some AI-powered checks and balances to check behind people. Interesting. So basically, I can audit their applications, and the AI will check the code and tell me what needs to be improved? Yeah. And they can make the fixes for you. Yeah. Wow, that's amazing. It still sounds a little bit overwhelming. It's basically a language, a new language to learn, isn't it? It's not really; it's English. That's the amazing thing about it: it's English. You literally talk to AI in natural language, and it builds stuff for you, which is huge if somebody has been sitting on an idea for a while. Because, running entrepreneurship centers, I've known so many people who've had ideas that they were never able to launch or build, and then they see somebody else build it later. If you learn these skills, you get to the point where anything that's in your head, you can start bringing to life in reality. And even if you've got to bring somebody in to make sure it's secure and production-ready, it's way cheaper than having them build it from scratch. And then another thing you'll find is, let's say you want to turn it into a startup or something, right?
It's a lot easier to bring in a technical co-founder when they don't have to build the thing from scratch, and when they can see that you were able to build something and can see your product vision. It becomes a lot easier to recruit people who actually have that expertise into the company because you've already handled the hard part. You've got something and it works. And all they have to do is come in, make it safe, and make it work better. Yeah, that is very interesting. It feels analogous to writing a book yourself or having a ghostwriter. Because essentially, you are vibe coding with a ghostwriter, right? You tell the stories, and then the ghostwriter writes the book for you. Probably now you can use AI to do that. Yep. But that's a skill. Not everyone has the skill to write it themselves, so they go to a ghostwriter, but it's still their book, right? Yep. So it sounds a little bit similar. That's fascinating. So what's the path to launching an MVP? Let's say I'm a subject matter expert, and I want to launch an MVP within a few weeks. Is there a path for me to get there? Once you get good with the platform, once you get comfortable with the tools, yeah. So for example, we're launching an AI platform. It's an AI coaching platform, but it's also a data analytics platform. Basically, it's targeted at entrepreneur support organizations and municipalities supporting small businesses. On the front end, it's an AI-powered advisor, a hotline that people can call 24/7. On the back end, the municipalities and entrepreneur support organizations get access to analytics from each of those calls. We built this in two weeks. We're already talking to customers, already having conversations, and all of those things. We literally brought it to market in two weeks. So the thing is, once you get caught up with the tools, and I'm not a developer, I'm not a developer by trade at all.
I had a tech startup before, but I was a non-technical founder. I just know how to put together a product. But once you get good with the tools, that's very conceivable. And then you just go out there, into the market, and start having conversations with your ideal customer profile. As you're going through that process, you're learning: okay, maybe this isn't my ideal customer profile, this is their pain point. Or maybe instead of this being the feature they want, that is the feature they want. And the crazy thing is, in the past you had to get that ICP really tight and the feature set really tight, because it cost so much money to go back and make tweaks and changes, and to get it to market in the first place. Now, you can get a new feature added in an afternoon. It allows you to go to market a little faster. You don't have to have the ideal feature set. You don't have to have the ICP figured out. You get out there, you learn, and then you're able to iterate a lot faster, because the cost of development is super cheap now, and the speed at which new features can be added or deprecated is a lot faster. So it allows you to go to market a lot faster than in the past. Okay, I got it. You can do this, you can code. What do you recommend for someone who's starting out? You mentioned Lovable, Bolt, and then Cursor. Is Cursor an advanced product? Cursor's a little more advanced, but if you want to build production-ready software, it's something you're going to eventually have to use. But can you convert from Lovable to Cursor? Yes, you can. Yep. So what you typically do, and I still do this to this day, is every time I launch a product, I build it in Bolt first. You could use Bolt or Lovable; either one's fine. I use Bolt because Bolt came out first, and that's what I started using. Then Lovable came out a month later. But I use Bolt. I'll spin up the idea in Bolt.
And the reason I like doing it in Bolt or Lovable is that it's really good at two things: quickly launching your initial feature set, and spinning up your backend and database. It's really good at that. So I start off in Bolt, then I connect it to a repository. For those who aren't familiar with GitHub, there's a button in Bolt or Lovable where you can easily connect it to a GitHub repository. Once I get the app to a point where the basic skeleton is set, I go into Cursor, pull the repository in, and do the heavy work. The reason Cursor has a learning curve is that there are still some traditional developer things you need to know to spin up a project. It's a lot harder to spin up your initial database and backend in Cursor. It's also harder to identify your initial libraries and all of those things. If you're a developer, it's not difficult. But if you're new, it is. Bolt and Lovable abstract those things away for you. So you start off in Bolt or Lovable. Because they're limited in their context windows, when you're trying to build something complex, eventually they start making a whole bunch of errors. They basically start getting stup*d. That's when you know it's time to move to Cursor, because Cursor can handle the heavy lifting. So you build in Bolt or Lovable until it gets stup*d, then you move to Cursor for the heavy lifting. And then is there a point where Cursor gets stup*d as well? No. Cursor has a couple of different things that allow it to extend its context window, which is its memory. You can put documentation into Cursor. For example, whatever your PRD prompt was, you can save that as a document in Cursor. You can also set rules. One of my rules in Cursor is: I'm not technical, so explain everything in layman's terms. And then as you're starting to build code, you can save that code, or you can point it to that repository.
So there's more flexibility with Cursor as far as managing your context window. But with Bolt and Lovable, the context window is more limited right now. So I start off in those, and then once I get the skeleton up, I move to Cursor. At that point, a lot of the complicated things, like spinning up your dev environment, are already abstracted away, and you can jump in and use it the same way you use Bolt and Lovable. Fantastic. So, Jason, this is super helpful information for domain experts who want to build an application that will help them promote their product or manifest their ideas in product form. I think that's super powerful. So if someone would like to learn about SoundStrategist and what it can do for them in terms of learning and experiential products, incorporating music, or building curriculum, or they would just like to connect with you to learn more, where should they go? Jason William Johnson, PhD, on LinkedIn, or www.getsoundstrategies.com. Okay. Well, Jason William Johnson, you are really ahead of the curve, especially in connecting this whole idea of vibe coding to people who are subject matter experts and not technical. And you know it because you don't come from a technical background, yet you've mastered it. I'm living it. Everything I'm sharing, this is not a theoretical framework. I'm living all of this, everything I'm saying. Super authentic. And especially coming from you: you understand what it's like not to be a technical person learning this and applying this. So if you'd like to do this, learn more, or maybe have Jason guide you, reach out to him. You can find him on LinkedIn at Jason William Johnson, PhD, or visit www.getsoundstrategies.com. And if you enjoyed this episode, make sure you follow us and subscribe on YouTube, follow us on LinkedIn, and on Apple Podcasts.
Because every week I bring a super interesting entrepreneur, subject matter expert, or a combination of the two, like Jason, to the show, who will help you accelerate your journey with frameworks, AI tools, and ideas in that vein. So thank you for coming, Jason, and thank you for listening. Important Links: Jason's LinkedIn Jason's website
The haters against Australia Day are losing, the Liberal Party is in a mess as a leadership spill looms, and catastrophic snow storms in China and the U.S. go against dud climate predictions.
This week's show starts with Sami's review of the 2026 Toyota Sienna minivan, which is apparently inspired by Japan's iconic bullet trains. Although our hosts struggle to see the direct connection, their discussion of the minivan covers all kinds of topics, ranging from the importance of max cargo room in a van to whether shared media experiences are still valuable during a road trip. Then the guys talk about a few important news topics that came up, including the death of the Dodge Hornet, the arrival and cancellation of the new Chevy Bolt, and the arrival of new Chinese EVs on Canadian roads. Finally the show wraps up with an important reader question. We hope you enjoyed listening to this episode as much as we loved recording it!
This is episode 463 of the Mobile Tech Podcast with guest Emily Forlini of PCMag -- brought to you by Mint Mobile. In this week's show, we dive into what it means for Canada to be getting Chinese EVs and discuss related topics including the Volvo EX60 and AI in cars. We then cover phone news, leaks, and rumors from OnePlus, Honor, RedMagic, ASUS, and NexPhone... Good times!Episode Links- Support the podcast on Patreon: https://www.patreon.com/tnkgrl- Donate / buy me a coffee (PayPal): https://tnkgrl.com/tnkgrl/- Support the podcast with Mint Mobile: https://mintmobile.com/mobiletech- Emily Forlini: https://www.threads.com/@emily_forlini- Canada is getting Chinese EVs: https://insideevs.com/news/784657/china-ev-tariff-canada/- Chevy to end production of new Bolt after 18 months: https://insideevs.com/news/785214/2027-chevrolet-bolt-limited-run/- Tesla removes Autopilot from new vehicles: https://insideevs.com/news/785225/tesla-removes-autopilot-base-models/- Kia EV9 GT not coming to the US: https://insideevs.com/news/779278/kia-ev9-gt-postponed-indefinitely/- Volvo EX60: https://www.pcmag.com/news/volvo-ex60-gets-a-nacs-port-400-mile-range-google-gemini-ai- Apple picks Google's Gemini for AI: https://www.pcmag.com/news/this-week-in-ai-apple-may-have-dated-openai-but-its-marrying-google- What's the deal with Physical AI?: https://www.pcmag.com/news/week-in-ai-physical-ai-vaporware-chatgpt-health-grok-gets-inappropriate- OnePlus drama: https://www.gsmarena.com/oneplus_flatly_denies_rumors_about_its_shutdown-news-71196.php- Honor Magic V6 and Robot Phone coming at MWC: https://www.gsmarena.com/honor_sets_mwc_event_confirms_magic_v6_and_robot_phone_official_debut_-news-71215.php- Honor Magic8 Pro Air: https://www.gsmarena.com/honor_magic8_pro_air_arrives_with_63_amoled_triple_camera_setup_and_5500mah_battery_-news-71165.php- RedMagic 11 Air:
Use promo code BOLTBROS on Sleeper and get 100% match up to $100! https://Sleeper.com/promo/BOLTBROS. Terms and conditions apply. #SleeperThe Los Angeles Chargers are making HUGE moves this offseason, and we're breaking it all down!In this episode of the Bolt Bros Podcast, we dive into the biggest news shaking up Charger Nation this week:
Hello, and welcome to the Reloading Podcast here on the Firearms Radio Network. Tonight the gang is talking with Jeff Siewert from Bulletology LLC. Jeff's history in reloading and Ballistics What is Bulletology LLC? What resources do you offer for people to read/watch? Is there one thing you've learned on your journey that just blew your primers right out of their pockets? Where is a good starting point for someone looking to go from being just a reloader, to a “handloader”? Cartridge case Design of the case Quality of the case Case inspection Case preparation Sizing of the case Priming Case rim Thickness Priming systems Primers Primer seating depth Powder Selection Difference in powders Charge weights (accuracy) Why some powders are more accurate Bullet Different types What are more accurate What makes a bullet accurate Seating of the bullet Crimping or not Taper crimping Pressures How is a handloader to test for pressure Primer condition Case head expansion Bolt swipe/extractor swipe Effects on accuracy Cartridge corner: Suicide hotline 988 or 800-273-8255 https://walkthetalkamerica.org/ For Active Military or veterans, www.militaryonesource.com Reviews: Reloading Podcast Merch link Please remember to use the affiliate links for Amazon and Brownells from the Webpage it really does help the show and the network. Also visit https://huntshootoffroad.com/shop/ and use code RLP10 to save 10%on your Brass Goblin gear. Patreons New Patreons: Current Patreons: Aaron R, AJ, Alexander R, Anthony B, Mr. Anonymoose, bt213456, Bill N, Brian M, Carl K, Chris S, KC3FHH, Ryan J, D MAC, David S, Drew, Eric S, Fatelvis111 Gerrid M, Jack B, Jason R, Jim M, Joel L, John C, Kalroy, Jason R. Joseph B, Brewer Bill, Larry C, Lonnie K, Mark H, Mark K, Vic T., Matthew T, David D, michael sp, Mike St, Mitchell N, Nick M, Nick R, N7FFL, Paul N, Peter D, Richard C, Riley S, Robert F, Russ H, Socal Reloader RP, T-Rex, Tony S, Winfred C RLP pledge link Thank you for listening. 
How to get in contact with us: Google Voice # 608-467-0308 Reloading Podcast website. Reloading Podcast Facebook Reloading Podcast on Instagram Reloading Podcast on MeWe Reloading Podcast on Discord The Reloading Room Buckeye Targets
The Prime Minister denies placing blame on Scott Morrison for the current level of antisemitism, and the opposition leader struggles to keep the Coalition together after rebel senators vote against new hate speech laws. See omnystudio.com/listener for privacy information.
The Greens disgrace themselves in Parliament. A National Party senator talks about the minor changes to Labor's hate speech laws, and a retired American general delves into Trump's threats to take Greenland. See omnystudio.com/listener for privacy information.
Follow Tom: https://www.instagram.com/watchguru_/ See More of Tom's Crazy Watch Collection: https://www.watchguru.com/ Watch guru Tom Bolt joins Rob to talk all things luxury - from a MILLION-POUND watch to what he really thinks of the brands that score fortunes around the world. Tom also talks frankly to Rob about what's really happening when it comes to watch crime, and why hard work isn't always enough to achieve success in the world of today. Is it true that those most deserving of success don't always see the pay off? It's a sometimes fiery conversation between two maestros of success - who will come out on top...? BEST MOMENTS "There are so many people who deserve more than they get" "It's not watches causing crime - it's the world we live in" "Ultimately it's about how we feel as an individual. The only thing I want on my deathbed is to not feel 'I wish I'd done this and that'" Exclusive community & resources: For more EXCLUSIVE & unfiltered content to make, manage & multiply more money, join our private online education platform: Money.School → https://money.school And if you'd like to meet 7 & 8 figure entrepreneurs, & scale to 6, 7 or 8 figures in your business or personal income, join us at our in-person Money Maker Summit Event (including EXCLUSIVE millionaire guests/masterminds sessions) → https://robmoore.live/mms
Why you should listen: Scott Stafford shares his human-centric AI framework, explaining why major tech companies are now hiring salespeople again despite investing heavily in agentic solutions. Learn how to shift 20% of your workload to voice mode and natural language, with practical examples of completing real work while away from your desk. Discover where vibe coding tools like Bolt.new hit their limits and what foundational knowledge you still need to deliver AI-first solutions for clients. Wondering where to place your bets as AI reshapes the entire SaaS landscape? In this episode, I talk with Scott Stafford, an AI strategist working at the intersection of Salesforce, robotics, and human-centric technology. We dig into why companies that rushed to replace humans with AI are now reversing course, and what that shift means for consultants building practices today. Scott also shares his vision for how personal agents will become your new user interface, making traditional software interactions feel like relics. If you're trying to figure out where to invest your learning time and how to stay relevant as voice interfaces take over, this conversation maps out the territory ahead. About Scott Stafford: Scott Stafford is a Human-Centric Technologist, AI Strategy Lead, and technology evangelist with 20+ years of experience bridging business strategy and emerging technology. A 27-time Salesforce certified All-Star Ranger and community leader, Scott focuses on helping people and organizations adapt to the rapidly evolving landscape of AI and the Fourth Industrial Revolution.
He is co-founder of RiseWithVoice, an initiative that helps individuals strengthen both their literal and metaphorical voice to thrive in an AI-augmented future.
Resources and Links:
Scottstafford.ai
Scott's LinkedIn profile
625 - The Salesforce Partner's AI Dilemma with Sanjeet Mahajan
Bolt.new
Superwhisper
Previous episode: 659 - How This Salesforce Partner Grew to 25 People by Saying No with Dennis Knodt
Check out more episodes of The Paul Higgins Podcast
Join our newsletter
In this fresh DOU News digest we discuss the high-profile appointment of Mykhailo Fedorov as Minister of Defense and the new winter 2026 report on IT salaries in Ukraine. Also in this episode: why MacPaw is shutting down Setapp Mobile, how taxis will operate during curfew, and major layoffs at Playtika. Watch these and other news stories from Ukrainian IT and the global tech sector. Timecodes 00:00 Intro 00:24 The Verkhovna Rada appointed Mykhailo Fedorov as the new Minister of Defense 01:35 Salaries of Ukrainian developers — winter 2026 05:40 Salaries of Ukrainian QA engineers — winter 2026 07:03 MacPaw will close the Setapp Mobile app store in February 2026 09:01 Bolt, Uklon, and Uber are requesting permission to operate in Kyiv during curfew 10:03 Playtika is cutting about 15% of its staff in the first quarter 10:53 Apple Siri will run on Google Gemini 15:11 Iran has fully blocked Elon Musk's Starlink for the first time 17:09 Musk denies knowing what Grok AI generates 20:17 Claude released Cowork: a new AI tool for task automation 24:00 Anthropic halves its AI productivity forecasts 26:52 "It's over": Linus Torvalds tried vibe coding 29:09 ChatGPT users will soon see targeted ads 31:34 What Zhenia recommends this week: an article and the Time Travel Map
The shock polls today show One Nation soaring and the big two parties falling, and Russia is still taking massive losses in Ukraine after nearly four years of war. Plus, why have young women suddenly become so left wing? See omnystudio.com/listener for privacy information.
It's EV News Briefly for Monday 12 January 2026, everything you need to know in less than 5 minutes if you haven't got time for the full show. Patreon supporters fund this show, get the episodes ad free as soon as they're ready, and are part of the EV News Daily Community. You can be like them by clicking here: https://www.patreon.com/EVNewsDaily BMW READIES QUAD-MOTOR M3 EV FOR 2027 https://evne.ws/4qeQYPA BOLT RETURNS AS AMERICA'S CHEAPEST EV https://evne.ws/45UfHjY CALIFORNIA MOVES TO PLUG FEDERAL EV INCENTIVE GAP https://evne.ws/3NJTjDF DACIA BETS ON TWO MINICARS TO FIX CO2 GAP https://evne.ws/4qKHaga FAST CHARGING, NOT MILEAGE, HURTS EV BATTERIES MOST https://evne.ws/45UgvFw LEAPMOTOR USES BRUSSELS LAUNCH TO PRESS EUROPE BET https://evne.ws/459dnp3 MAZDA TAPS CHINESE TECH FOR NEW EUROPEAN EV https://evne.ws/3NkvEd1 MERCEDES OPENS HIGH-SPEED, MIXED-STANDARD CHARGERS IN B.C. https://evne.ws/49kYD97 VOLVO SHRINKS ELECTRIC SEMI FOR CITY STREETS https://evne.ws/3LKDufq LUCID FORCED INTO SOFTWARE CLIMBDOWN AFTER VIRAL REVIEW https://evne.ws/4jJ4iJF
Can you help me make more podcasts? Consider supporting me on Patreon as the service is 100% funded by you: https://EVne.ws/patreon You can read all the latest news on the blog here: https://EVne.ws/blog Subscribe for free and listen to the podcast on audio platforms:➤ Apple: https://EVne.ws/apple➤ YouTube Music: https://EVne.ws/youtubemusic➤ Spotify: https://EVne.ws/spotify➤ TuneIn: https://EVne.ws/tunein➤ iHeart: https://EVne.ws/iheart BMW READIES QUAD-MOTOR M3 EV FOR 2027 https://evne.ws/4qeQYPA BOLT RETURNS AS AMERICA'S CHEAPEST EV https://evne.ws/45UfHjY CALIFORNIA MOVES TO PLUG FEDERAL EV INCENTIVE GAP https://evne.ws/3NJTjDF DACIA BETS ON TWO MINICARS TO FIX CO2 GAP https://evne.ws/4qKHaga FAST CHARGING, NOT MILEAGE, HURTS EV BATTERIES MOST https://evne.ws/45UgvFw LEAPMOTOR USES BRUSSELS LAUNCH TO PRESS EUROPE BET https://evne.ws/459dnp3 MAZDA TAPS CHINESE TECH FOR NEW EUROPEAN EV https://evne.ws/3NkvEd1 MERCEDES OPENS HIGH-SPEED, MIXED-STANDARD CHARGERS IN B.C. https://evne.ws/49kYD97 VOLVO SHRINKS ELECTRIC SEMI FOR CITY STREETS https://evne.ws/3LKDufq LUCID FORCED INTO SOFTWARE CLIMBDOWN AFTER VIRAL REVIEW https://evne.ws/4jJ4iJF
Wartime bomber pilot, champion jockey, racing journalist, bestselling novelist, Dick Francis truly was a legend. The Slightly Foxed team join Dick's son Felix and renowned racing commentator Derek Thompson (‘Tommo' to his fans) to talk about the modest man who left school at 15 but went on to write thrillers set in the world of racing that have sold more than 60 million copies in 35 languages.Dick grew up with horses and riding was in his blood, though he didn't become a professional jockey until he was 26, an age when many jockeys are retiring. But he quickly became one of the most successful National Hunt jockeys (and Champion Jockey in 1953–4), riding winners for top owners including the Queen and Queen Elizabeth the Queen Mother. And it was the spectacular collapse of the Queen Mother's horse Devon Loch beneath him on the point of winning the Grand National in 1956 that finally persuaded Dick to retire from racing and begin a new career, first as a journalist and then as a writer of endlessly inventive crime fiction.So how did he do it? The novels, with their evocative titles – Dead Cert, Decider, Bolt, Hot Money – take you straight into the world of old-fashioned racing with its toffs and touts and inevitable shady characters. According to Felix, the writing of them was always a partnership, with Dick, a born storyteller, producing the plots and the atmosphere and his wife Mary as brilliant researcher and editor. Felix, too, helped with writing and research, and after Dick's death in 2010 he was persuaded by Dick's literary agent to keep the Francis ‘brand' alive. 
He is now the author of 19 bestselling ‘Dick Francis' novels, bringing the racing scene up to date with a female jockey as the heroine of his latest, Dark Horse.Along with Dick Francis's story of talent, courage and sheer determination – one he told himself in his autobiography The Sport of Queens – the team enjoyed added anecdotes and insights into the world of racing from ‘Tommo', and an ending that had us on the edge of our seats.
This week Joe Fier sits down with renowned coach, entrepreneur, and AI enthusiast Krista Mashore. The conversation dives into how Krista transitioned from a background in education and real estate to building a thriving coaching business, leveraging cutting-edge AI tools and innovative event models. Discover practical insights for business growth, creating interactive lead magnets, and redefining the sales process—all while keeping things fun and approachable. Whether you're an entrepreneur looking to scale, a coach eager to boost engagement, or simply curious about using AI to build your brand, this episode delivers actionable strategies and inspiring stories. Topics Discussed Krista's Journey: From teaching to top 1% real estate agent, then launching her own coaching and consulting business. Monthly Virtual Events: The proven process Krista uses for lead generation, qualifying buyers, and selling high-ticket offers through repeatable online events. AI in Business: How Krista integrates AI tools (like custom bots and mind clones) to create interactive experiences, streamline operations, and personalize content. Building Lead Magnets: Why interactive lead magnets (quizzes and custom apps) outperform static downloads in today's market. Optimizing Offers & Event Strategies: Krista's process for continuously refining event offers, bonuses, and pricing to maximize conversions. Tools for Non-Techies: How platforms like Abacus AI, Lovable, Bolt, and Wispr Flow enable anyone (even with zero coding experience!) to build apps and automation by just talking to the computer. Audience Evolution: Challenges and lessons learned as Krista expands her coaching programs beyond real estate into entrepreneurship and AI. The Human Side of AI: Why scaling with AI enhances, rather than replaces, real human connection and accountability in coaching programs. 
Removing Barriers to Success: Strategies for helping clients build belief in themselves, overcome objections, and achieve their goals with support and accountability. Resources Mentioned: Abacus AI: https://abacus.ai Lovable: https://lovable.dev Bolt: https://bolt.new Wispr Flow: https://wisprflow.ai Delphi Mind Clone: https://hustleandflowchart.com/delphi Connect with Krista (don't forget to DM her "BOT" for access to her constraint bot) Website: https://constraint.kristamashore.com YouTube: https://www.youtube.com/@KristaMashoreCoaching Instagram: https://www.instagram.com/kristamashore Facebook: https://www.facebook.com/kristamashore/...
Year of the Bolt… and it ends with a thud. The Chargers get run over by the Patriots and we break down what went wrong, who deserves blame, and where this team goes from here. Plus, Matthew Stafford does Matthew Stafford things — leading the Rams on a clutch game-winning drive. We react to the finish, the big throws, and what it means for the Rams moving forward. Support the show: http://kaplanandcrew.com/ See omnystudio.com/listener for privacy information.
Jason Lemkin is the founder of SaaStr, the world's largest community for software founders, and a veteran SaaS investor who has deployed over $200 million into B2B startups. After his last salesperson quit, Jason made a radical decision: replace his entire go-to-market team with AI agents. What started as an experiment has transformed into a new operating model, where 20 AI agents managed by just 1.2 humans now do the work previously handled by a team of 10 SDRs and AEs. In this conversation, Jason shares his hands-on experience implementing AI to run his sales org, including what works, what doesn't, and how the GTM landscape is quickly being transformed.We discuss:1. How AI is fundamentally changing the sales function2. Why most SDRs and BDRs will be “extinct” within a year3. What Jason is observing across his portfolio about AI adoption in GTM4. How to become “hyper-employable” in the age of AI5. The specific AI tools and tactics he's using that have been working best6. Practical frameworks for integrating AI into your sales motion without losing what works7. 
Jason's 2026 predictions on where SaaS and GTM are heading next—Brought to you by:DX—The developer intelligence platform designed by leading researchersVercel—Your collaborative AI assistant to design, iterate, and scale full-stack applications for the webDatadog—Now home to Eppo, the leading experimentation and feature flagging platform—Transcript: https://www.lennysnewsletter.com/p/we-replaced-our-sales-team-with-20-ai-agents—My biggest takeaways (for paid newsletter subscribers): https://www.lennysnewsletter.com/i/182902716/my-biggest-takeaways-from-this-conversation—Where to find Jason Lemkin:• X: https://x.com/jasonlk• LinkedIn: https://www.linkedin.com/in/jasonmlemkin• Website: https://www.saastr.com• Substack: https://substack.com/@cloud—Where to find Lenny:• Newsletter: https://www.lennysnewsletter.com• X: https://twitter.com/lennysan• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/—In this episode, we cover:(00:00) Introduction to Jason Lemkin(04:36) What SaaStr does(07:13) AI's impact on sales teams(10:11) How SaaStr's AI agents work and their performance(14:18) How go-to-market is changing in the AI era(19:19) The future of SDRs, BDRs, and AEs in sales(22:03) Why leadership roles are safe(23:43) How to be in the 20% who thrive in the AI sales future(28:40) Why you shouldn't build your own AI tools(30:10) Specific AI agents and their applications(36:40) Challenges and learnings in AI deployment(42:11) Making AI-generated emails good (not just acceptable)(47:31) When humans still beat AI in sales(52:39) An overview of SaaStr's org(53:50) The role of human oversight in AI operations(58:37) Advice for salespeople and founders in the AI era(01:05:40) Forward-deployed engineers(01:08:08) What's changing and what's staying the same in sales(01:16:21) Why AI is creating more work, not less(01:19:32) Why Jason says these are magical times(01:25:25) The "incognito mode test" for finding AI opportunities(01:27:19) The impact of AI on jobs(01:30:18) Lightning 
round and final thoughts—Referenced:• Building a world-class sales org | Jason Lemkin (SaaStr): https://www.lennysnewsletter.com/p/building-a-world-class-sales-org• SaaStr Annual: https://www.saastrannual.com• Delphi: https://www.delphi.ai/saastr/talk• Amelia Lerutte on LinkedIn: https://www.linkedin.com/in/amelialerutte/• Vercel: https://vercel.com• What world-class GTM looks like in 2026 | Jeanne DeWitt Grosser (Vercel, Stripe, Google): https://www.lennysnewsletter.com/p/what-the-best-gtm-teams-do-differently• Everyone's an engineer now: Inside v0's mission to create a hundred million builders | Guillermo Rauch (founder and CEO of Vercel, creators of v0 and Next.js): https://www.lennysnewsletter.com/p/everyones-an-engineer-now-guillermo-rauch• Replit: https://replit.com• Behind the product: Replit | Amjad Masad (co-founder and CEO): https://www.lennysnewsletter.com/p/behind-the-product-replit-amjad-masad• ElevenLabs: https://elevenlabs.io• The exact AI playbook (using MCPs, custom GPTs, Granola) that saved ElevenLabs $100k+ and helps them ship daily | Luke Harries (Head of Growth): https://www.lennysnewsletter.com/p/the-ai-marketing-stack• Bolt: https://bolt.new• Lovable: https://lovable.dev• Harvey: https://www.harvey.ai• Samsara: https://www.samsara.com/products/platform/ai-samsara-intelligence• UiPath: https://www.uipath.com• Denise Dresser on LinkedIn: https://www.linkedin.com/in/denisedresser• Agentforce: https://www.salesforce.com/form/agentforce• SaaStr's AI Agent Playbook: https://saastr.ai/agents• Brian Halligan on LinkedIn: https://www.linkedin.com/in/brianhalligan• Brian Halligan's AI: https://www.delphi.ai/minds/bhalligan• Sierra: https://sierra.ai• Fin: https://fin.ai• Deccan: https://www.deccan.ai• Artisan: https://www.artisan.co• Qualified: https://www.qualified.com• Claude: https://claude.ai• HubSpot: https://www.hubspot.com• Gamma: https://gamma.app• Sam Blond on LinkedIn: https://www.linkedin.com/in/sam-blond-791026b• Brex: https://www.brex.com• 
Outreach: https://www.outreach.io• Gong: https://www.gong.io• Salesloft: https://www.salesloft.com• Mixmax: https://www.mixmax.com• “Sell the alpha, not the feature”: The enterprise sales playbook for $1M to $10M ARR | Jen Abel: https://www.lennysnewsletter.com/p/the-enterprise-sales-playbook-1m-to-10m-arr• Clay: https://www.clay.com• Owner: https://www.owner.com• Momentum: https://www.momentum.io• Attention: https://www.attention.com• Granola: https://www.granola.ai• Behind the founder: Marc Benioff: https://www.lennysnewsletter.com/p/behind-the-founder-marc-benioff• Palantir: https://www.palantir.com• Databricks: https://www.databricks.com• Garry Tan on LinkedIn: https://www.linkedin.com/in/garrytan• Rippling: https://www.rippling.com• Cursor: https://cursor.com• The rise of Cursor: The $300M ARR AI tool that engineers can't stop using | Michael Truell (co-founder and CEO): https://www.lennysnewsletter.com/p/the-rise-of-cursor-michael-truell• The new AI growth playbook for 2026: How Lovable hit $200M ARR in one year | Elena Verna (Head of Growth): https://www.lennysnewsletter.com/p/the-new-ai-growth-playbook-for-2026-elena-verna• Pluribus on AppleTV+: https://tv.apple.com/us/show/pluribus/umc.cmc.37axgovs2yozlyh3c2cmwzlza• Sora: https://openai.com/sora• Reve: https://app.reve.com• Everything That Breaks on the Way to $1B ARR, with Mailchimp Co-Founder Ben Chestnut: https://www.saastr.com/everything-that-breaks-on-the-way-to-1b-arr-with-mailchimp-co-founder-ben-chestnut/• The Revenue Playbook: Rippling's Top 3 Growth Tactics at Scale, with Rippling CRO Matt Plank: https://www.youtube.com/watch?v=h3eYtzBpjRw• 10 contrarian leadership truths every leader needs to hear | Matt MacInnis (Rippling): https://www.lennysnewsletter.com/p/10-contrarian-leadership-truths—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.—Lenny may be an investor in the companies discussed. 
To hear more, visit www.lennysnewsletter.com