This podcast features Gabriele Corso and Jeremy Wohlwend, co-founders of Boltz and authors of the Boltz Manifesto, discussing the rapid evolution of structural biology models from AlphaFold to their own open-source suite, Boltz-1 and Boltz-2. The central thesis is that while single-chain protein structure prediction is largely “solved” through evolutionary hints, the next frontier lies in modeling complex interactions (protein-ligand, protein-protein) and generative protein design, which Boltz aims to democratize via open-source foundations and scalable infrastructure.

Full Video Pod

On YouTube!

Timestamps

* 00:00 Introduction to Benchmarking and the “Solved” Protein Problem
* 06:48 Evolutionary Hints and Co-evolution in Structure Prediction
* 10:00 The Importance of Protein Function and Disease States
* 15:31 Transitioning from AlphaFold 2 to AlphaFold 3 Capabilities
* 19:48 Generative Modeling vs. Regression in Structural Biology
* 25:00 The “Bitter Lesson” and Specialized AI Architectures
* 29:14 Development Anecdotes: Training Boltz-1 on a Budget
* 32:00 Validation Strategies and the Protein Data Bank (PDB)
* 37:26 The Mission of Boltz: Democratizing Access and Open Source
* 41:43 Building a Self-Sustaining Research Community
* 44:40 Boltz-2 Advancements: Affinity Prediction and Design
* 51:03 BoltzGen: Merging Structure and Sequence Prediction
* 55:18 Large-Scale Wet Lab Validation Results
* 01:02:44 Boltz Lab Product Launch: Agents and Infrastructure
* 01:13:06 Future Directions: Developability and the “Virtual Cell”
* 01:17:35 Interacting with Skeptical Medicinal Chemists

Key Summary

Evolution of Structure Prediction & Evolutionary Hints

* Co-evolutionary Landscapes: The speakers explain that breakthrough progress in single-chain protein prediction relied on decoding evolutionary correlations where mutations in one position necessitate mutations in another to conserve 3D structure.
* Structure vs. Folding: They differentiate between structure prediction (getting the final answer) and folding (the kinetic process of reaching that state), noting that the field is still quite poor at modeling the latter.
* Physics vs. Statistics: RJ posits that while models use evolutionary statistics to find the right “valley” in the energy landscape, they likely possess a “light understanding” of physics to refine the local minimum.

The Shift to Generative Architectures

* Generative Modeling: A key leap in AlphaFold 3 and Boltz-1 was moving from regression (predicting one static coordinate) to a generative diffusion approach that samples from a posterior distribution.
* Handling Uncertainty: This shift allows models to represent multiple conformational states and avoid the “averaging” effect seen in regression models when the ground truth is ambiguous.
* Specialized Architectures: Despite the “bitter lesson” of general-purpose transformers, the speakers argue that equivariant architectures remain vastly superior for biological data due to the inherent 3D geometric constraints of molecules.

Boltz-2 and Generative Protein Design

* Unified Encoding: Boltz-2 (and BoltzGen) treats structure and sequence prediction as a single task by encoding amino acid identities into the atomic composition of the predicted structure.
* Design Specifics: Instead of a sequence, users feed the model blank tokens and a high-level “spec” (e.g., an antibody framework), and the model decodes both the 3D structure and the corresponding amino acids.
* Affinity Prediction: While model confidence is a common metric, Boltz-2 focuses on affinity prediction—quantifying exactly how tightly a designed binder will stick to its target.

Real-World Validation and Productization

* Generalized Validation: To prove the model isn't just “regurgitating” known data, Boltz tested its designs on 9 targets with zero known interactions in the PDB, achieving nanomolar binders for two-thirds of them.
* Boltz Lab Infrastructure: The newly launched Boltz Lab platform provides “agents” for protein and small molecule design, optimized to run 10x faster than open-source versions through proprietary GPU kernels.
* Human-in-the-Loop: The platform is designed to convert skeptical medicinal chemists by allowing them to run parallel screens and use their intuition to filter model outputs.

Transcript

RJ [00:05:35]: But the goal remains to, like, you know, really challenge the models: how well do these models generalize? And we've seen in some of the latest CASP competitions that while we've become really, really good at proteins, especially monomeric proteins, other modalities still remain pretty difficult. So it's really essential in the field that there are these efforts to gather benchmarks that are challenging, so it keeps us honest about what the models can and can't do.

Gabriel [00:06:26]: Yeah, it's interesting you say that. In some sense, at CASP 14, a problem was solved, and pretty comprehensively, right? But at the same time, it was really only the beginning. So can you say what specific problem you would argue was solved? And then what remains, which is probably quite open?

RJ [00:06:48]: I think we'll steer away from the term solved, because we have many friends in the community who get pretty upset at that word, and I think fairly so. But the problem that a lot of progress was made on was the ability to predict the structure of single-chain proteins. Proteins can be composed of many chains, and single-chain proteins are just a single sequence of amino acids. And one of the reasons we've been able to make such progress is that we take a lot of hints from evolution. The way the models work is that they sort of decode a lot of hints that come from evolutionary landscapes.
So if you have, like, you know, some protein in an animal, and you go find the similar protein across different organisms, you might find different mutations in them. And as it turns out, if you take a lot of the sequences together and analyze them, you see that some positions in the sequence tend to evolve at the same time as other positions in the sequence, sort of this correlation between different positions. And it turns out that that is typically a hint that these two positions are close in three dimensions. So part of the breakthrough has been our ability to decode that very, very effectively. But what it also implies is that in the absence of that co-evolutionary landscape, the models don't quite perform as well. And so I think when that information is available, maybe one could say the problem is somewhat solved from the perspective of structure prediction. When it isn't, it's much more challenging. And I think it's also worth differentiating two things we sometimes confound a little bit: structure prediction and folding. Folding is the more complex process of actually understanding how the protein goes from this disordered state into a structured state. And that I don't think we've made that much progress on. But the idea of, like, yeah, going straight to the answer, we've become pretty good at.

Brandon [00:08:49]: So there's this protein that is, like, just a long chain and it folds up. Yeah. And so we're good at getting from that long chain, in whatever form it was originally, to the final thing. But we don't know how it necessarily gets to that state. And there might be intermediate states that it's in sometimes that we're not aware of.

RJ [00:09:10]: That's right. And that relates also to our general ability to model the different states, you know, proteins are not static. They move, they take different shapes based on their energy states.
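The co-evolution signal RJ describes, pairs of alignment columns that mutate together tend to be close in 3D, can be sketched with a toy mutual-information score. This is my illustration, not Boltz or AlphaFold code; real pipelines use far more careful statistics (e.g., corrections for phylogenetic bias) on much larger alignments.

```python
# Toy sketch of the co-evolution signal (illustrative only): columns of a
# multiple sequence alignment (MSA) that mutate together score high
# mutual information, a hint that the two positions touch in 3D.
from collections import Counter
import math

def mutual_information(msa, i, j):
    """Mutual information between MSA columns i and j (in nats)."""
    n = len(msa)
    ci = Counter(seq[i] for seq in msa)             # marginals, column i
    cj = Counter(seq[j] for seq in msa)             # marginals, column j
    cij = Counter((seq[i], seq[j]) for seq in msa)  # joint counts
    return sum(
        (nab / n) * math.log((nab / n) / ((ci[a] / n) * (cj[b] / n)))
        for (a, b), nab in cij.items()
    )

# Positions 0 and 1 always mutate together (A<->K vs D<->E);
# position 2 varies independently of them.
msa = ["AKL", "AKV", "DEL", "DEV"]
print(mutual_information(msa, 0, 1))  # high: ln 2, about 0.693
print(mutual_information(msa, 0, 2))  # 0.0: no correlation
```

Columns 0 and 1 covary perfectly, so their mutual information is maximal for a two-letter alphabet, while the independent column scores zero.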
And I think we are also not that good at understanding the different states that a protein can be in, and at what frequency, what probability. So I think the two problems are quite related in some ways. Still a lot to solve. But I think it was very surprising at the time that even with these evolutionary hints we were able to make such dramatic progress.

Brandon [00:09:45]: So I want to ask why the intermediate states matter. But first, I kind of want to understand, why do we care what proteins are shaped like?

Gabriel [00:09:54]: Yeah, I mean, proteins are kind of the machines of our body. The way that all the processes in our cells work is typically through proteins, sometimes other molecules, through intermediate interactions. And through those interactions, we have all sorts of cell functions. And so when we try to understand a lot of biology, how our body works, how diseases work, we often try to boil it down to: okay, what is going right in the case of our normal biological function, and what is going wrong in the case of the disease state. And we boil it down to proteins and other molecules and their interactions. And so when we try predicting the structure of proteins, it's critical to have an understanding of those interactions. It's a bit like the difference between having a list of parts that you would put in a car and seeing the car in its final form: seeing the car really helps you understand what it does. On the other hand, going to your question of why we care about how the protein folds, or how the car is made, to some extent it's because sometimes something goes wrong. There are, you know, cases of proteins misfolding.
In some diseases and so on, if we don't understand this folding process, we don't really know how to intervene.

RJ [00:11:30]: There's this nice line, I think it's in the AlphaFold 2 manuscript, where they discuss why we're even hopeful that we can target the problem in the first place. And there's this notion that, well, for proteins that fold, the folding process is almost instantaneous, which is a strong signal that we might be able to predict this very constrained thing that the protein does so quickly. And of course that's not the case for all proteins, and there are a lot of really interesting mechanisms in the cells, but yeah, I remember reading that and thinking, yeah, that's somewhat of an insightful point.

Gabriel [00:12:10]: I think one of the interesting things about the protein folding problem, and part of the reason why people thought it was impossible, is that it used to be studied as kind of a classical example of an NP problem. Like, there are so many different shapes that these amino acids could take, and this grows combinatorially with the size of the sequence. So there used to be a lot of more theoretical computer science work thinking about and studying protein folding as an NP problem. And so it was very surprising, also from that perspective, to see machine learning succeed. Clearly there is some signal in those sequences, through evolution, but also through other things that us as humans are probably not really able to understand, but that these models have learned.

Brandon [00:13:07]: And so Andrew White, we were talking to him a few weeks ago, and he said that he was following the development of this and that there were actually ASICs developed just to solve this problem. So, again, there were many, many millions of computational hours spent trying to solve this problem before AlphaFold. And just to be clear, one thing that you mentioned was that there's this kind of co-evolution of mutations, and that you see this again and again in different species. So explain: why does that give us a good hint that they're close to each other?

RJ [00:13:41]: Yeah. Um, think of it this way: if I have some amino acid that mutates, it's going to impact everything around it, right? In three dimensions. And so it's almost like the protein, through several probably random mutations and evolution, ends up sort of figuring out that this other amino acid needs to change as well for the structure to be conserved. The whole principle is that the structure is probably largely conserved because there's this function associated with it. And so it's really different positions compensating for each other.

Brandon [00:14:17]: I see. Those hints in aggregate give us a lot. So you can start to look at what is close to each other, and then you can start to look at what kinds of folds are possible given the structure, and then what is the end state. And therefore you can make a lot of inferences about what the actual total shape is.

RJ [00:14:30]: Yeah, that's right.
It's almost like you have this big three-dimensional valley where you're trying to find these low energy states, and there's so much to search through that it's almost overwhelming. But these hints sort of put you in an area of the space that's already kind of close to the solution, maybe not quite there yet. And there's always this question of how much physics these models are learning versus just pure statistics. And one of the things I believe, at least, is that once you're in that approximate area of the solution space, then the models have some understanding of how to get you to the lower energy state. So maybe they have some light understanding of physics, but maybe not quite enough to know how to navigate the whole space.

Brandon [00:15:25]: Right. Okay. So we need to give it these hints to kind of get into the right valley, and then it finds the minimum or something. Yeah.

Gabriel [00:15:31]: One interesting explanation of how AlphaFold works that I think is quite insightful, though of course it doesn't cover the entirety of what AlphaFold does, is one I'll borrow from Sergey Ovchinnikov at MIT. So the way he sees it, the interesting thing about AlphaFold is that it's got this very peculiar architecture that we hadn't seen used before, and this architecture operates on this pairwise context between amino acids. And so the idea is that the MSA probably gives you this first hint about which amino acids are potentially close to each other. (MSA is multiple sequence alignment.) Exactly, yeah, this evolutionary information. And from this evolutionary information about potential contacts, it's almost as if the model is running some kind of Dijkstra algorithm, where it's decoding: okay, these have to be close. Then if these are close and this is connected to this, then this has to be somewhat close. And so you decode this, and that becomes basically a pairwise distance matrix. And then from this rough pairwise distance matrix, you decode the actual potential structure.

Brandon [00:16:42]: Interesting. So there's kind of two different things going on, the coarse-grained and then the fine-grained optimizations. Interesting. Yeah. Very cool.

Gabriel [00:16:53]: Yeah. You mentioned AlphaFold 3, so maybe now is a good time to move on to that. AlphaFold 2 came out and it was, I think, fairly groundbreaking for this field. Everyone got very excited. A few years later, AlphaFold 3 came out. Maybe for some more history: what were the advancements in AlphaFold 3? And after that we'll talk a bit about how it connects to Boltz. But anyway. Yeah. So after AlphaFold 2 came out, Jeremy and I got into the field, and with many others, the clear problem that was obvious after that was: okay, now we can do individual chains. Can we do interactions? Interactions between different proteins, proteins with small molecules, proteins with other molecules. So why are interactions important? Interactions are important because, to some extent, that's the way these machines, these proteins, have a function: the function comes from the way they interact with other proteins and other molecules. Actually, in the first place, the individual machines are often, as Jeremy was mentioning, not made of a single chain but of multiple chains. And then these multiple chains interact with other molecules to give the function to those.
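The "Dijkstra-like" intuition above can be sketched as a shortest-path computation over a contact graph. This is my toy illustration, not the actual AlphaFold or Boltz mechanism: treat each predicted contact as an edge of an assumed ~8 Å contact length, and all-pairs shortest paths (Floyd–Warshall, equivalent here to running Dijkstra from every node) give crude upper bounds on every pairwise distance, a rough distance matrix one could then refine into coordinates.

```python
# Toy illustration of contact propagation (not AlphaFold/Boltz code).
# Predicted contacts become graph edges of ~8 angstroms; all-pairs
# shortest paths give rough upper bounds on all pairwise distances.
CONTACT_ANGSTROM = 8.0  # assumed typical contact distance

def coarse_distance_matrix(n_residues, contacts):
    """Upper-bound distance matrix from a list of (i, j) contact pairs."""
    inf = float("inf")
    d = [[0.0 if i == j else inf for j in range(n_residues)]
         for i in range(n_residues)]
    for i, j in contacts:
        d[i][j] = d[j][i] = CONTACT_ANGSTROM
    for k in range(n_residues):          # relax paths through residue k
        for i in range(n_residues):
            for j in range(n_residues):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[j][i] = d[i][k] + d[k][j]
    return d

# A 4-residue chain with contacts 0-1, 1-2, 2-3: residues 0 and 3
# end up bounded by three contact hops.
d = coarse_distance_matrix(4, [(0, 1), (1, 2), (2, 3)])
print(d[0][3])  # 24.0
```

The bounds are loose (real geometry folds back on itself), which is why this only puts you "in the right valley" and a learned model does the fine-grained refinement.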
And on the other hand, when we try to intervene on these interactions, think about a disease, think about a biosensor or many other cases, we are trying to design molecules or proteins that interact in a particular way with what we would call a target protein, or target. This problem, after AlphaFold 2, clearly became one of the biggest problems in the field to solve, and many groups, including ours and others, started making contributions to this problem of trying to model these interactions. And AlphaFold 3 was a significant advancement on the problem of modeling interactions. One of the interesting things they were able to do, while some of the rest of the field tried to model different interactions separately, how a protein interacts with small molecules, how a protein interacts with other proteins, how RNA or DNA take their structure, is that they put everything together and trained very large models, with a lot of advances including changing some of the key architectural choices, and managed to get a single model that set a new state-of-the-art performance across all of these different modalities: protein-small molecule, which is critical to developing new drugs, protein-protein, and interactions of proteins with RNA and DNA, and so on.

Brandon [00:19:39]: Just to satisfy the AI engineers in the audience, what were some of the key architectural and data changes that made that possible?

Gabriel [00:19:48]: Yeah, so one critical one, which was not necessarily unique to AlphaFold 3 since a few other teams in the field, including ours, proposed it, was moving from modeling structure prediction as a regression problem, where there is a single answer and you're trying to shoot for that answer, to a generative modeling problem, where you have a posterior distribution of possible structures and you're trying to sample from that distribution. And this achieves two things. One is it starts to allow us to model more dynamic systems. As we said, some of these proteins can actually take multiple structures, and you can now model that by modeling the entire distribution. But secondly, from a more core modeling perspective, when you move from a regression problem to a generative modeling problem, you tackle the way you think about uncertainty in the model in a different way. If the model is undecided between different answers, what's going to happen in a regression model is that it makes an average of those different answers it had in mind. With a generative model, what you do instead is sample all these different answers and then maybe use separate models to analyze those answers and pick out the best. So that was one of the critical improvements. The other improvement is that they significantly simplified, to some extent, the architecture, especially of the final model that takes those pairwise representations and turns them into an actual structure. That now looks a lot more like a traditional transformer than the very specialized equivariant architecture it was in AlphaFold 2.

Brandon [00:21:41]: So this is the bitter lesson, a little bit.

Gabriel [00:21:45]: There is some aspect of the bitter lesson, but the interesting thing is that it's very far from being a simple transformer. This field is one of the, I'd argue, very few fields in applied machine learning where we still have architectures that are very specialized.
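The regression-versus-generative point can be shown with a deliberately tiny toy (my illustration, not anything from the episode): when the ground truth is bimodal, the MSE-optimal regression answer is the mean, an invalid in-between state, while even a trivial generative model that samples the posterior only ever emits valid conformations.

```python
# Toy illustration of averaging vs sampling (not Boltz/AlphaFold code).
# A "structure" here is a single coordinate that is either -1.0 or +1.0,
# i.e. two equally likely conformations.
import random

conformations = [-1.0, 1.0]

# The MSE-optimal regression prediction collapses to the mean: 0.0,
# a state the protein is never actually in.
regression_prediction = sum(conformations) / len(conformations)

# A trivially simple "generative model" samples the posterior instead,
# so every draw is a valid conformation.
rng = random.Random(0)
samples = [rng.choice(conformations) for _ in range(1000)]

print(regression_prediction)                     # 0.0
print(all(s in conformations for s in samples))  # True
```

In the real systems the sampler is a diffusion model over 3D coordinates and a separate confidence model ranks the drawn samples, but the failure mode being avoided is exactly this averaging effect.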
And there are many people who have tried to replace these architectures with simple transformers, and there is a lot of debate in the field, but I think the consensus is mostly that the performance we get from the specialized architectures is vastly superior to what we get from a single transformer. Another interesting thing, staying on the modeling and machine learning side, which I think is somewhat counterintuitive coming from some of the other fields and applications, is that scaling hasn't really worked the same way in this field. Now, models like AlphaFold 2 and AlphaFold 3 are still very large models.

RJ [00:29:14]: ...in a place, I think, where we had some experience working with the data and working with this type of model. And I think that already put us in a good place to produce it quickly. And I would even say we could have done it quicker. The problem was that for a while we didn't really have the compute, so we couldn't really train the model. And actually, we only trained the big model once. That's how much compute we had. We could only train it once. So while the model was training, we were finding bugs left and right, a lot of them that I wrote. And I remember doing surgery in the middle: stopping the run, making the fix, relaunching. And yeah, we never actually went back to the start. We just kept training it with the bug fixes along the way, which would be impossible to reproduce now. That model has gone through such a curriculum that it learned some weird stuff.
But yeah, somehow by miracle, it worked out.

Gabriel [00:30:13]: The other funny thing is that we were training most of that model on a cluster from the Department of Energy. But that's a shared cluster that many groups use. So we were basically training the model for two days, and then it would go back to the queue and sit there for a week. Oh, yeah. And so it was pretty painful. Towards the end, with Evan, the CEO of Genesis, I was telling him a bit about the project and about this frustration with the compute, and luckily he offered to help. So we got help from Genesis to finish up the model. Otherwise, it probably would have taken a couple of extra weeks.

Brandon [00:30:57]: Yeah, yeah.

Brandon [00:31:02]: And then there's some progression from there.

Gabriel [00:31:06]: Yeah, so I would say that Boltz-1, but also these other sets of models that came out around the same time, were a big leap from the previous open-source models, really approaching the level of AlphaFold 3. But I would still say that, even to this day, there are some specific instances where AlphaFold 3 works better. I think one common example is antibody-antigen prediction, where AlphaFold 3 still seems to have an edge in many situations. Obviously, these are somewhat different models: you run them, you obtain different results. So it's not always the case that one model is better than the other, but in aggregate, especially at the time...

Brandon [00:32:00]: So AlphaFold 3 is, you know, still having a bit of an edge.
We should talk about this more when we talk about BoltzGen, but how do you know one model is better than the other? Like, I make a prediction, you make a prediction, how do you know?

Gabriel [00:32:11]: Yeah, so the great thing about structure prediction, and once we go into the design space of designing new small molecules and new proteins this becomes a lot more complex, but the great thing about structure prediction is that, a bit like CASP was doing, the way you can evaluate models is that you train a model on structures released across the field up until a certain time. And one of the things we didn't talk about that was really critical in all this development is the PDB, the Protein Data Bank. It's this common resource, basically a common database, where every biologist publishes their structures. And so we can train on all the structures that were put in the PDB until a certain date. And then we basically look for recent structures: okay, which structures look pretty different from anything that was published before? Because we really want to try to understand generalization.

Brandon [00:33:13]: And then on these new structures, you evaluate all these different models. And so you just know when AlphaFold 3 was trained, and you intentionally train to the same date or something like that. Exactly. Right. Yeah.

Gabriel [00:33:24]: And so this is the way you can somewhat easily compare these models. Obviously, that assumes that, you know, the training...

You've always been very passionate about validation. I remember DiffDock, and then there was DiffDock-L and DockGen. You've thought very carefully about this in the past.
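The time-split evaluation Gabriel describes, train on everything released before a cutoff date and test only on later structures that look unlike anything before it, can be sketched like this. A minimal sketch under stated assumptions: the function and field names are hypothetical, and `max_similarity_to_train` is a stand-in for something like sequence identity against the pre-cutoff set.

```python
# Minimal sketch of a temporal PDB-style benchmark split (illustrative;
# not Boltz's actual pipeline). Train = released before the cutoff;
# test = released after the cutoff AND sufficiently novel.
from datetime import date

def time_split(entries, cutoff, novelty_threshold=0.3):
    """entries: (pdb_id, release_date, max_similarity_to_train) tuples.

    max_similarity_to_train is a placeholder for a real novelty measure
    (e.g. sequence identity against everything released pre-cutoff).
    """
    train = [e for e in entries if e[1] < cutoff]
    test = [e for e in entries
            if e[1] >= cutoff and e[2] < novelty_threshold]
    return train, test

entries = [
    ("1ABC", date(2019, 5, 1), 1.0),   # old: goes to training
    ("7XYZ", date(2023, 2, 1), 0.9),   # new but similar: excluded
    ("8QRS", date(2023, 6, 1), 0.1),   # new and novel: test set
]
train, test = time_split(entries, cutoff=date(2021, 9, 30))
print([e[0] for e in train])  # ['1ABC']
print([e[0] for e in test])   # ['8QRS']
```

The novelty filter is the important part: without it, "new" structures that closely resemble training data would inflate the apparent generalization of every model being compared.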
Like, actually, I think DockGen is a really funny story. I don't know if you want to talk about that. It's an interesting one...

Yeah, I think one of the amazing things about putting things open source is that we get a ton of feedback from the field. Sometimes we get great feedback from people who really like it. But honestly, most of the time, and maybe this is the most useful feedback, it's people sharing where it doesn't work. And at the end of the day, that's critical. This is also something you see across other fields of machine learning: to make progress, it's always critical to set clear benchmarks. And as you start making progress on certain benchmarks, you need to improve the benchmarks and make them harder and harder. That's the progression of how the field operates. So the example of DockGen: we published this initial model called DiffDock in my first year of PhD, which was one of the early models to try to predict interactions between proteins and small molecules, and which we put out a year after AlphaFold 2 was published. Now, on the one hand, on the benchmarks we were using at the time, DiffDock was doing really well, outperforming some of the traditional physics-based methods. But on the other hand, when we started giving these tools to many biologists, one example being the group of Nick Polizzi at Harvard that we collaborated with, we started noticing this clear pattern where, for proteins that were very different from the ones the model was trained on, the model was struggling. And so it seemed clear that this is probably where we should put our focus.
And so we first developed, with Nick and his group, a new benchmark, and then asked: what can we change about the current architecture to improve this pattern of generalization? And that's the same thing we still do today: figure out where the model doesn't work, and once we have that benchmark, throw at it every idea we have about the problem.

RJ [00:36:15]: There's a lot of healthy skepticism in the field, which I think is great. It's very clear there are a ton of things the models don't work well on, but one thing that's probably undeniable is the pace of progress, how much better we're getting every year. If you assume any constant rate of progress moving forward, things are going to look pretty cool at some point in the future.

Gabriel [00:36:42]: ChatGPT was only three years ago.

RJ [00:36:45]: Yeah, it's wild, right? It's one of those things: even being in the field, you don't see it coming. Hopefully we'll continue to have as much progress as we've had over the past few years.

Brandon [00:36:55]: This is maybe an aside, but I'm really curious. You get this great feedback from the community by being open source. My question is partly: okay, if you open source, everyone can copy what you did. But it's also about balancing priorities, right? The community is saying, I want this, there are all these problems with the model, but my customers don't care, right? So how do you think about that?
Yeah.

Gabriel [00:37:26]: So I would say a couple of things. Part of our goal with Boltz, and this is established as the mission of the public benefit company that we started, is to democratize access to these tools. But one of the reasons we realized Boltz needed to be a company, not just an academic project, is that putting a model on GitHub is definitely not enough to get chemists and biologists across academia, biotech, and pharma to use your model in their therapeutic programs. So a lot of what we think about at Boltz, beyond the models themselves, is all the layers that come on top of them, everything it takes to go from those models to something that can really enable scientists in the industry. That means building the right workflows, ones that take in the data and directly answer the questions chemists and biologists are asking, and it means building the infrastructure. All this to say that even with the models fully open, we see a ton of potential for products in this space. The critical part about a product is that even with an open source model, running the model is not free. These are pretty expensive models, and, maybe we'll get into this, these days we're seeing pretty dramatic inference-time scaling, where the more you run them, the better the results. At that point, compute and compute cost become a critical factor.
So putting a lot of work into building the right infrastructure and the right optimizations really lets us provide a much better service than just the open source models. That said, even though we can provide a much better service with a product, I do still think, and we will continue, to put a lot of our models out open source, because the critical role of open source models is helping the community make progress on the research, from which we all benefit. So on the one hand, we'll keep open sourcing some of our base models so the field can build on top of them, and as we discussed earlier, we learn a ton from the way the field uses and builds on our models. On the other hand, we'll build a product that gives scientists the best experience possible, so that a chemist or a biologist doesn't need to spin up a GPU and set up our open source model in a particular way. Even though I am a machine learning scientist, I don't necessarily take an open source LLM and spin it up myself; I just open the ChatGPT app or Claude Code and use it as an amazing product. We want to give the same experience.

Brandon [00:40:40]: I heard a good analogy yesterday: a surgeon doesn't want the hospital to design a scalpel, right?

Brandon [00:40:48]: Just buy the scalpel.

RJ [00:40:50]: You wouldn't believe the number of people, even in my short time between AlphaFold3 coming out and the end of my PhD, who would reach out just for us to run AlphaFold3 for them, or things like that.
Just because, in our case with Boltz, it's just not that easy to do if you're not a computational person. And part of the goal here is that we obviously continue to build the interface for computational folks, but also that the models are accessible to a larger, broader audience. That comes from good interfaces and things like that.

Gabriel [00:41:27]: I think one really interesting thing about Boltz is that with the release you didn't just put out a model, you created a community, and it grew very quickly. Did that surprise you? What has the evolution of that community been, and how has it fed back into Boltz?

RJ [00:41:43]: If you look at its growth, every time we release a new model there's a big jump. But yeah, it's been great. We have a Slack community with thousands of people in it, and it's actually self-sustaining now, which is the really nice part, because answering everyone's questions and helping is almost overwhelming for the few people we were. It ended up that people would answer each other's questions and help one another. So the Slack has been kind of self-sustaining, and that's been really cool to see.

RJ [00:42:21]: That's the Slack part, but we've also had a nice community on GitHub. We aspire to be even more active on it than we've been in the past six months, which has been a bit challenging for us.
Yeah, the community has been really great, and there are a lot of papers that have come out with new evolutions built on top of Boltz. It surprised us to some degree, because there are a lot of models out there, and people converging on ours was really cool. I think it also speaks to the importance, when you put code out, of putting a lot of emphasis on making it as easy to use as possible, something we thought a lot about when we released the code base. It's far from perfect, but still.

Brandon [00:43:07]: Do you think that was one of the factors that made your community grow, the focus on ease of use and accessibility?

RJ [00:43:14]: I think so, yeah. We've heard it from a few people over the years now. Some people still think it should be a lot nicer, and they're right. But I think at the time it was maybe a little bit easier to use than other things.

Gabriel [00:43:29]: The other part that I think led to the community, and to some extent to the community's trust in what we put out, is that it hasn't really been just one model. Maybe we'll talk about it: after Boltz-1 there were another couple of models released or open sourced soon after, and we continued that open source journey with Boltz-2, where we were not only improving structure prediction but also starting to do affinity prediction, understanding the strength of the interaction between these different molecules, which is a critical property that you often want to optimize in discovery programs.
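As background on what an affinity prediction actually quantifies: binding strength is conventionally reported as a dissociation constant Kd in molar units, often on a log scale as pKd. A minimal illustration of the bookkeeping, not anything specific to Boltz-2's internals:

```python
import math

# Binding affinity convention: smaller Kd means tighter binding.
# pKd = -log10(Kd) puts it on a convenient log scale, so a
# "nanomolar binder" (Kd = 1e-9 M) has pKd = 9.
def pkd(kd_molar):
    return -math.log10(kd_molar)
```

So when a discovery program "optimizes affinity," it is, in these units, pushing pKd up (Kd down).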
And then, more recently, also a protein design model. So we've been building this suite of models that come together and interact with one another, where there is almost an expectation, and we take it very much to heart, that across the entire suite of tasks we have the best, or roughly the best, model out there, so that our open source tools can be the go-to models for everybody in the industry.

I really want to talk about Boltz-2, but before that, one last question in this direction: was there anything about the community that surprised you? Someone doing something where you thought, why would you do that, that's crazy? Or, that's actually genius, I never would have thought of that?

RJ [00:45:01]: We've had many contributions. One of the interesting ones: one individual wrote a complex GPU kernel for part of the architecture. The funny thing is that piece of the architecture had been there since AlphaFold2, and I don't know why it took Boltz for this person to decide to do it, but that was a really great contribution. We've had a bunch of others, people figuring out ways to hack the model to do something, like cyclic peptides. I don't know if any other interesting ones come to mind.

Gabriel [00:45:41]: One cool one, and this was initially proposed as a message in the Slack channel by Tim O'Donnell: there are some cases, for example the antibody-antigen interactions we discussed, where the models don't necessarily get the right answer.
What he noticed is that the model was somewhat stuck in its prediction of the antibody. Now, in this model you can condition the prediction, you can give hints. So he ran an experiment where he gave the model a series of hints: you should bind to the first residue, or you should bind to the 11th residue, or the 21st residue, every 10 residues, scanning the entire antigen.

Brandon [00:46:33]: Residues are the...

Gabriel [00:46:34]: The amino acids, yeah. So the first amino acid, the 11th amino acid, and so on. It's like doing a scan, conditioning the model on each hypothesis, then looking at the model's confidence in each case and taking the top one. It's a very crude way of doing inference-time search, but surprisingly, for antibody-antigen prediction, it actually helped quite a bit. There are some interesting ideas where, as the person developing the model, you say, wow, why would the model be so dumb? But it's very interesting, and it leads you to start thinking: can I do this not with brute force, but in a smarter way? We've since done a lot of work in that direction.

RJ [00:47:22]: And that speaks to the power of scoring. We're seeing that a lot, and I'm sure we'll talk about it more when we get to BoltzGen. Our ability to take a structure and determine that the structure is good, somewhat accurate, whether that's a single chain or an interaction, is a really powerful way of improving the models.
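The residue-scan trick described above, condition the model on a candidate binding residue every 10 positions and keep the most confident prediction, reduces to a small loop. Here `predict` is a hypothetical stand-in for a structure predictor that accepts a binding-residue hint and returns a structure plus a confidence score; its signature is an assumption for this sketch, not Boltz's actual API:

```python
def epitope_scan(predict, antibody, antigen_len, stride=10):
    """Crude inference-time search: try conditioning the predictor on
    every `stride`-th antigen residue as the hinted binding site, and
    keep the prediction the model is most confident about."""
    best = None  # (structure, hinted_residue, confidence)
    for residue in range(0, antigen_len, stride):
        structure, confidence = predict(antibody, hint_residue=residue)
        if best is None or confidence > best[2]:
            best = (structure, residue, confidence)
    return best
```

Ranking conditioned predictions by the model's own confidence is exactly the "scoring" RJ credits; smarter versions replace the uniform scan with a guided search.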
If you can sample a ton, and you assume that if you sample enough you're likely to have the good structure in there, then it really just becomes a ranking problem. Part of the inference-time scaling Gabi was talking about is very much that: the more we sample, the more the ranking model ends up finding something it really likes. So our ability to get better at ranking is, I think, also what's going to enable the next big breakthroughs.

Brandon [00:48:17]: Interesting. My understanding is there's a diffusion model, you generate some stuff, and then, as you just said, you rank it using a score, and then you finally... Can you talk about those different parts?

Gabriel [00:48:34]: So, first of all, one of the critical beliefs we had when we started working on Boltz-1 was that structure prediction models are somewhat our field's version of foundation models: they learn how proteins and other molecules interact, and we can leverage that learning to do all sorts of other things. With Boltz-2 we leveraged it for affinity prediction: if I give you this protein and this molecule, how tight is the interaction? For BoltzGen, what we did was take that foundation model and fine-tune it to predict entire new proteins. The way that works is that for the protein you're designing, instead of feeding in an actual sequence, you feed in a set of blank tokens, and you train the model to predict the structure of that protein.
The structure, and also what the different amino acids of that protein are. So the way BoltzGen operates is that you feed in a target you may want to bind, a protein, or DNA, or RNA, and then you feed in a high-level design specification of what you want your new protein to be. For example, it could be an antibody with a particular framework, it could be a peptide, it could be many other things.

And is that with natural language, or?

That's basically prompting. We have a spec that you write, and you feed that spec to the model. The model translates it into a set of conditioning tokens and a set of blank tokens, and then, as part of the diffusion model, it decodes a new structure and a new sequence for your protein. Then we take that and, as Jeremy was saying, we try to score it: how good a binder is it to the original target?

Brandon [00:50:51]: So you're basically using Boltz to predict the folding and the affinity to that molecule, and that gives you a score?

Gabriel [00:51:03]: Exactly. You use this model to predict the folding, and then you do two things. One is that you re-predict the structure with something like Boltz-2 and compare it with what the design model produced. In the field this is called consistency: you want to make sure that the structure you predicted is actually what you were trying to design, which gives you much better confidence that it's a good design. So that's the first filter.
And the second filter in the pipeline we released is that we look at the confidence the model has in the structure. Now, unfortunately, going to your question about affinity: confidence is not a very good predictor of affinity. One of the things we've made a ton of progress on since we released Boltz-2, and we have some new results that we're going to announce soon, is the ability to get much better hit rates when, instead of relying on the model's confidence, we directly predict the affinity of the interaction.

Brandon [00:52:03]: Okay, just backing up a minute. So your diffusion model actually predicts not only the protein sequence but also its folding?

Gabriel [00:52:32]: Exactly. One of the big things we did differently from other models in the space, and there were some papers that had done this before, but we really scaled it up, was merging structure prediction and sequence prediction into almost the same task. The way BoltzGen works is that the only thing you're doing is predicting the structure, so the only supervision we give is supervision on the structure. But because the structure is atomic, and the different amino acids have different atomic compositions, from the way the model places the atoms we recover not only the structure but also the identity of the amino acid the model believed was there. So instead of having two supervision signals, one discrete and one continuous, which somewhat don't interact well together...
We built an encoding of sequences in structures that lets us use exactly the same supervision signal we were using for Boltz-2, largely similar to what AlphaFold3 proposed, which is very scalable. And we can use that to design new proteins.

RJ [00:53:58]: Maybe a quick shout-out to Hannes Stark on our team, who did all this work.

Gabriel [00:54:04]: Yeah, that was a really cool idea. Looking at the paper, there's this encoding where you add a bunch of atoms, which can be anything, and they get rearranged and basically plopped on top of each other, and that encodes what the amino acid is. There's a unique way of doing it. It was such a cool, fun idea.

RJ [00:54:29]: I think that idea had existed before.

Gabriel [00:54:33]: Yeah, there were a couple of papers that had proposed it, and Hannes really took it to large scale.

Brandon [00:54:39]: A lot of the BoltzGen paper is dedicated to the validation of the model. In my opinion, everyone we talk to feels that real-world validation, in the wet lab or whatever is appropriate, is the whole problem, or not the whole problem, but a big, giant part of it. Can you talk about the highlights? Because to me the results are impressive, both from the perspective of the model and from the sheer effort that went into validation by a large team.

Gabriel [00:55:18]: The first thing I should say is that both when we were at MIT, in Tommi Jaakkola's and Regina Barzilai's labs, and now at Boltz, we are not a bio lab and we are not a therapeutics company.
So to some extent we were forced from the start to look outside our group and our team for experimental validation. One of the things Hannes really pioneered on the team was the idea: can we test this model not just with one specific group on one specific system, where you maybe overfit a bit to that system, but across a very wide variety of settings? Protein design is such a wide task, with all sorts of applications from therapeutics to biosensors and many others, so can we get validation that spans many different tasks? He put together something like 25 different academic and industry labs that committed to testing some of the designs from the model, some of this testing is still ongoing, and to giving results back to us, in exchange for hopefully getting some great new sequences for their task. He was able to coordinate this very wide set of scientists, and already in the paper I think we shared results from eight to ten different labs: designing peptides targeting ordered proteins, peptides targeting disordered proteins, proteins that bind to small molecules, and nanobodies, across a wide variety of targets. That gave the paper a lot of validation of the model, and validation that was broad.

Brandon [00:57:39]: And would those be therapeutics for those animals, or are they relevant to humans as well?
They're relevant to humans as well.

Gabriel [00:57:45]: Obviously, you need to do some work to, quote unquote, humanize them, making sure they have the right characteristics so they're not toxic to humans and so on.

RJ [00:57:57]: There are some approved medicines on the market that are nanobodies. There's a general pattern of trying to design things that are smaller: they're easier to manufacture, but that comes with other potential challenges, maybe a little less selectivity than something that has more hands. But yeah, there's a big desire to design mini proteins, nanobodies, small peptides, modalities that just make great drugs.

Brandon [00:58:27]: Okay, I think we left off talking about validation in the lab, and I was very excited to see all the diverse validations you've done. Can you go into more detail about some specific ones?

RJ [00:58:43]: The nanobody one, I think we did, what was it, 15 targets? 14. 14 targets. The way this typically works is that we make a lot of designs, on the order of tens of thousands. Then we rank them and pick the top, in this case 15 per target, and we measure the success rate: both how many targets we were able to get a binder for, and, more generally, out of all the binders we designed, how many actually proved to be good binders. Some of the other ones: we had a cool one where there was a small molecule and we designed a protein that binds to it. That has a lot of interesting applications, for example, as Gabri mentioned, biosensing, which is pretty cool.
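The design-rank-test bookkeeping RJ describes, tens of thousands of designs, keep the top 15 per target by model score, then count experimental binders, reduces to something like the following. The data layout is an assumption for illustration:

```python
def hit_rates(results, k=15):
    """results: dict mapping target -> list of (score, bound) pairs,
    where `score` is the model's ranking score and `bound` is the
    experimental outcome (True if the design actually bound).
    Returns (number of targets with at least one binder,
             overall fraction of tested designs that bound)."""
    per_target_hits = {}
    total_hits = total_tested = 0
    for target, designs in results.items():
        # keep only the top-k designs by model score, as in the campaign
        top = sorted(designs, key=lambda d: d[0], reverse=True)[:k]
        hits = sum(1 for _, bound in top if bound)
        per_target_hits[target] = hits
        total_hits += hits
        total_tested += len(top)
    targets_with_binder = sum(1 for h in per_target_hits.values() if h > 0)
    return targets_with_binder, total_hits / total_tested
```

These are the two numbers RJ distinguishes: per-target success (did we get any binder for this target?) and the overall hit rate across all tested designs.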
We had a disordered protein too, I think you mentioned. Yeah, I think those were some of the highlights.

Gabriel [00:59:44]: The way we structured those validations was, on the one end, validations across a whole set of problems that the biologists we were working with came to us with. For example, in some of the experiments we designed peptides targeting RACC, a target involved in metabolism, and we had a number of other applications designing peptides or other modalities against other therapeutically relevant targets, and we designed some proteins to bind small molecules. The other testing we did was to get a broader sense of how the model does when tested on generalization. One thing we found across the field was that a lot of the validation, outside of validation on specific problems, was done on targets that have many known interactions in the training data. So it's always a bit hard to understand how much these models are really just regurgitating, imitating what they've seen in the training data, versus really being able to design new proteins. So one of the experiments we did was to take nine targets from the PDB, filtered so that there is no known interaction in the PDB: the model has never seen this particular protein, or a similar protein, bound to another protein. There is no way the model can just take something from its training set, tweak it, and imitate a particular interaction. So we took those nine proteins.
We worked with Adaptyv, a CRO, and tested 15 mini proteins and 15 nanobodies against each one of them. The very cool thing we saw was that on two thirds of those targets, from those 15 designs, we got nanomolar binders. Nanomolar is, roughly speaking, a measure of how strong the interaction is, and a nanomolar binder is approximately the binding strength you need for a therapeutic.

Yeah. So maybe switching directions a bit. Boltz Lab was just announced this week, or was it last week? This is, I guess, your first product, if you want to call it that. Can you talk about what Boltz Lab is and what you hope people take away from it?

RJ [01:02:44]: As we mentioned at the very beginning, the goal with the product has been to address what the models don't do on their own, and there are largely two categories there. Actually, I'll split it into three. The first: it's one thing to predict a single interaction, a single structure, for example; it's another to very effectively search a design space to produce something of value. What we found building this product is that there are a lot of steps involved, and a real need to accompany the user through them. One of those steps, for example, is the creation of the target itself: how do we make sure the model has a good enough understanding of the target, so we can design something against it? There are all sorts of tricks you can use to improve a particular structure prediction. So that's the first stage. And then there's the stage of designing and searching the space efficiently.
For something like BoltzGen, you design many things and then you rank them. For small molecules the process is a little more complicated: we also need to make sure the molecules are synthesizable. The way we do that is with a generative model that learns to use appropriate building blocks, so that it designs within a space we know is synthesizable. So there's a whole pipeline of different models involved in being able to design a molecule. That's the first thing. We call them agents: we have a protein design agent and a small molecule design agent, and that's really at the core of what powers the Boltz Lab platform.

Brandon [01:04:22]: These agents, are they a language model wrapper, or are they just your models and you're calling them agents?

RJ [01:04:33]: They sort of perform a function on your behalf, but they're more of a recipe, if you wish. We use that term because of the complex pipelining and automation that goes into all this plumbing. So that's the first part of the product. The second part is the infrastructure. We need to be able to do this at very large scale for any one group that's running a design campaign. Say you're designing a hundred thousand possible candidates to find the good one. That is a very large amount of compute: for small molecules it's on the order of a few seconds per design, and for proteins it can be a bit longer. Ideally you want to do that in parallel, otherwise it's going to take you weeks.
So we've put a lot of effort into our ability to run a GPU fleet that lets any one user do this kind of large parallel search.

Brandon [01:05:23]: So you're amortizing the cost over your users.

RJ [01:05:27]: Exactly. And to some degree, whether you use 10,000 GPUs for a minute or one GPU for God knows how long, it's the same cost, so you might as well parallelize if you can. A lot of work has gone into that, and into making it very robust, so that we can have a lot of people on the platform doing this at the same time. The third part is the interface, and the interface comes in two shapes. One is an API, which is really suited for companies that want to integrate these pipelines, these agents.

RJ [01:06:01]: We're already partnering with a few distributors that are going to integrate our API. The second shape is the user interface, and we've put a lot of thought into that as well. This is what I meant earlier about broadening the audience. We've built a lot of interesting features into it, for example for collaboration. When you have multiple medicinal chemists going through the results and trying to pick out which molecules to go test in the lab, it's powerful for each of them to provide their own ranking and then do consensus building. So there are a lot of features around launching these large jobs, but also around collaborating on analyzing the results, and that's what we try to solve with that part of the platform.
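The consensus-building step described above, each chemist submits their own ranking and the platform aggregates them, could be implemented with any rank-aggregation scheme. A Borda count is one simple possibility; this is an illustrative scheme, not necessarily what Boltz Lab actually uses:

```python
def borda_consensus(rankings):
    """rankings: list of lists, each an ordering of the same molecule
    ids, best first (one list per chemist). Each molecule earns points
    by rank position (best gets n points, worst gets 1), and the
    consensus is the ordering by total score."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for pos, mol in enumerate(ranking):
            scores[mol] = scores.get(mol, 0) + (n - pos)
    return sorted(scores, key=scores.get, reverse=True)
```

A molecule ranked consistently high by several chemists beats one ranked first by a single chemist but low by the others, which is the point of consensus building.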
So Boltz Lab is a combination of these three pieces into one cohesive platform. Who is this accessible to? Everyone. You do need to request access today; we're still ramping up usage, but anyone can request access. If you're an academic in particular, we provide a fair amount of free credit so you can play with the platform. If you're a startup or biotech, you can also reach out, and we'll typically hop on a call just to understand what you're trying to do, and also provide a lot of free credit to get started. And with larger companies, we can deploy the platform in a more secure environment; those are more customized deals that we make with partners. That's the ethos of Boltz: this idea of serving everyone, not just going after the really large enterprises. That starts with the open source, but it's also a key design principle of the product itself.

Gabriel [01:07:48]: One thing I was thinking about with regard to infrastructure: in the LLM space, the cost of a token has gone down by a factor of about a thousand over the last three years, right? Is it possible that you can exploit economies of scale in infrastructure, so that it's cheaper to run these things on your platform than for any one person to roll their own system?

RJ [01:08:08]: A hundred percent. I mean, we're already there. Running Boltz on our platform, especially for a large screen, is considerably cheaper than it would take anyone to stand up the open-source model and run it themselves. And on top of the infrastructure, one of the things we've been working on is accelerating the models.
Our small molecule screening pipeline is 10x faster on Boltz Lab than it is in the open source, and that's also part of building a product that scales really well. We wanted to get to a point where we could keep prices low enough that using Boltz through our platform is a no-brainer.

Gabriel [01:08:52]: How do you think about validation of your agentic systems? As you were saying earlier, AlphaFold-style models are really good at, say, monomeric proteins where you have co-evolution data. But the whole point of design is to produce something that doesn't have co-evolution data, something that's really novel. So you're leaving the domain that you know you're good at. How do you validate that?

RJ [01:09:22]: There's obviously a ton of computational metrics that we rely on, but those only take you so far. You really have to go to the lab and test: with method A versus method B, how much better is my hit rate? How much stronger are my binders? It's not just about hit rate; it's also about how good the binders are. There's really no way around that. We've ramped up the amount of experimental validation we do so that we can track progress in as scientifically sound a way as possible.

Gabriel [01:10:00]: One thing that is unique about us, and maybe companies like us, is that we're not working on just a couple of therapeutic pipelines where our validation would be focused on those.
When we do an experimental validation, we try to test across tens of targets, so that on the one hand we get a much more statistically significant result, and it really allows us to make progress on the methodological side without being steered by overfitting on any one particular system. And of course we choose, you know, w
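Comparing two methods by hit rate across tens of targets, as described above, might be tallied like this. The numbers are hypothetical, and a real analysis would add a significance test (e.g., a paired test across targets):

```python
def hit_rate(hits: int, tested: int) -> float:
    # Fraction of tested designs that were experimental hits.
    return hits / tested if tested else 0.0

# Hypothetical per-target results: (hits_method_a, hits_method_b, n_tested).
targets = {
    "target_01": (3, 5, 96),
    "target_02": (1, 4, 96),
    "target_03": (2, 2, 96),
}

def wins(results: dict[str, tuple[int, int, int]]) -> int:
    # Count targets where method B beats method A on hit rate.
    return sum(
        hit_rate(b, n) > hit_rate(a, n) for a, b, n in results.values()
    )

better_on = wins(targets)  # method B ahead on 2 of these 3 toy targets
```

Aggregating over many targets is what gives the comparison statistical weight; a single target can flip either way by chance.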
Contact me! Send me a text message here!
This week on the show we talk about the Save Act, Bondi's horrible showing in Congress, the Canadian trans shooter, and the importance of a shared language!
Support the show
If you love the show, share it with friends and family!
Roughly 65% of U.S. farmers now have access to fewer dealerships than they did five years ago. For major brands like John Deere, over 80% of authorized locations are now part of large chains (owning 7+ stores). That doesn't mean they compromise on service or commitment to their communities. Johnson Tractor announced this week its acquisition of Farmers’ Implement, effective June 1. This will expand Johnson Tractor’s presence and support for agricultural customers across the region. Through this transition, Farmers’ Implement customers will gain access to Johnson Tractor’s broader parts network, service resources, and technical expertise, while continuing to work with the local teams they know and trust. Pam Jahnke talked through the plan ahead with Eric Reuterskiold, CEO of Johnson Tractor. He's been part of the company for more than three decades and has worked his way up the ranks. See omnystudio.com/listener for privacy information.
Bishop Brian Mascord says we're challenged this Lent to move beyond our comfort zones and truly embody compassion: getting close enough to the broken and ignored to experience shared pain, shared tables, and the grace where healing begins.
Latest News/Headlines | Traffic | Weather | Sports
Topic I: The Second Condition of Allegiance (Bai'at): A Framework for Moral Living
Topic II: Fasting in Different Religions: A Shared Spiritual Discipline
Presenter(s): Imam Jalees Khan, Bilal Mahmood
Guest(s):
* Imam Bilal Ahmed Qamar
* Professor Michael Reiss
* Dr Leila Dehghan Zaklaki
* Imam Sabahat Karim
* Imam Mansoor Ahmed Mubashir
* Dr Mona Khurrana
Producer(s): Areeba Noor; Assistant Trainee Producer: Rubab Zafar; Lead Producer: Tayyaba Tahir
Researcher(s): Maimuna Hydara, Tooba Daud, Fateha Iqbal, Attiya tul Subuh, Basma Qamar & Hibba Tul Baseer
Today we talk about the book After Virtue by Alasdair MacIntyre. We talk about his genealogy of moral discourse. The teleologies of Aristotle. The failure of the Enlightenment moral project. Our modern culture of Emotivism and the sorts of characters that thrive in it. Shared practices and community as a way to revitalize moral conversation. Hope you love it! :) Sponsors: Nord VPN: https://nordvpn.com/philothis Thank you so much for listening! Could never do this without your help. Website: https://www.philosophizethis.org/ Patreon: https://www.patreon.com/philosophizethis Social: Instagram: https://www.instagram.com/philosophizethispodcast X: https://twitter.com/iamstephenwest Facebook: https://www.facebook.com/philosophizethisshow Learn more about your ad choices. Visit podcastchoices.com/adchoices
When Listeners Say, “Me Too”: Finding Familiarity in Shared Stories – A Listener Voicemail Episode
Description: In this special listener voicemail episode, Jen and Amy turn the mic outward—listening closely to the voices, stories, and wisdom of the community that makes this show what it is. From reflections sparked by our Wake Up Call season to deeply personal responses to Jen's book Awake, these messages trace a powerful throughline: what happens when we begin to tell the truth about our lives—and make space for who we're becoming. Listeners share how conversations with Lee C. Camp, John Fugelsang, Melani Sanders, and Chrissy King stirred something awake in them, naming long-held questions around faith, body, identity, and courage. Others call in to reflect on the uncanny resonance of Awake, beginning again and again with the same line: “Jen, our stories are very similar.” This episode is tender, funny, and honest—a reminder that none of us are doing this work alone. It's about waking up, letting go, finding language for the ache, and choosing what comes next—together. If you've ever wondered whether your voice matters here, this episode is your answer.
Thought-provoking Quotes: “Our stories are very similar—and hearing that out loud made me realize I'm not behind.
I'm just in it.” – FTL Listener “I didn't know how much I needed someone to say, ‘You're allowed to change your mind,' until this season.” – FTL Listener “That episode felt like someone finally put words to the questions I've been carrying quietly.” – FTL Listener “Something in me relaxed when I heard someone else say it first.” – FTL Listener Resources Mentioned in This Episode: Beyond Words: Listening to a Hidden Community — Ky Dickens and The Telepathy Tapes - https://jenhatmaker.com/podcasts/series-64/unlocking-the-secrets-of-consciousness-and-telepathy-ky-dickens-and-the-telepathy-tapes/ The Telepathy Tapes podcast - https://thetelepathytapes.com/ Rick Rubin - https://x.com/RickRubin Elizabeth Gilbert - https://www.elizabethgilbert.com/ Human Flourishing in a Distracted World: Theologian Lee C. Camp Offers a Wake Up Call To Living Well - https://jenhatmaker.com/podcasts/series-64/human-flourishing-in-a-distracted-world-theologian-lee-c-camp-offers-a-wake-up-call-to-living-well/ Love Over Dominance: John Fugelsang on the Future of Christianity - https://jenhatmaker.com/podcasts/series-64/love-over-dominance-john-fugelsang-on-the-future-of-christianity/ Social Media Sensation Melani Sanders Reminds Us That We Are Enough and We Do Not Care - https://jenhatmaker.com/podcasts/series-64/social-media-sensation-melani-sanders-reminds-us-that-we-are-enough-and-we-do-not-care/ Wake Up Call: Your Body Was Never the Problem with Body Liberation Advocate, Chrissy King - https://jenhatmaker.com/podcasts/series-64/wake-up-call-your-body-was-never-the-problem-with-body-liberation-advocate-chrissy-king/ Dr. 
Mary Claire Haver - https://www.instagram.com/drmaryclaire
Jen Hatmaker Book Club - https://shop.jenhatmaker.com/collections/book-club
Connect with Jen!
Jen's Website - https://jenhatmaker.com/
Jen's Instagram - https://instagram.com/jenhatmaker
Jen's Twitter - https://twitter.com/jenHatmaker/
Jen's Facebook - https://facebook.com/jenhatmaker
Jen's YouTube - https://www.youtube.com/user/JenHatmaker
The For the Love Podcast is presented by Audacy. To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy
Learn more about your ad choices. Visit https://podcastchoices.com/adchoices
In the spirit of Valentine's Day, we're talking about how to build a stronger financial partnership - from financial date nights and shared accounts to the subtle social norms that still influence modern couples. Today's guest is Dr. Emily Garbinsky, professor at Cornell's Johnson School of Business, whose work explores how couples make financial decisions, how pooling money affects relationship satisfaction, and what really happens when one partner earns more than the other. Learn more about Dr. Garbinsky's research here. Hosted on Acast. See acast.com/privacy for more information.
Clean, tidy, and organized didn't come easy to Dana K. White. So she decided to go public with it! Dana joins AllMomDoes host Julie Lyles Carr for an insightful conversation about decluttering and organizing for the messy among us, with great tips for easily getting your kids in on the decluttering game!
Show Notes: https://bit.ly/46zOZNV
Takeaways:
Dana K. White is a decluttering expert and author.
She emphasizes the importance of understanding one's clutter threshold.
The container concept is about setting limits on what can be kept.
Teaching kids to declutter can be fun and engaging.
One-in-one-out rule helps maintain organization.
Shared spaces should reflect the lowest clutter threshold.
Gifts should be viewed as love, not just items.
Winnie's Pile of Pillows introduces decluttering to children.
Decluttering is a process that requires patience and practice.
It's important to communicate the value of space to children.
Sound Bites:
"I teach decluttering for a living"
"Shared spaces need to be decluttered down"
"One in one out is how you maintain progress"
Chapters:
00:00 - Introduction and Technical Challenges
01:13 - Meet Dana K. White: The Decluttering Expert
02:29 - The Journey from Chaos to Clarity
05:17 - Understanding the Container Concept
12:31 - Teaching Kids to Declutter
18:04 - Managing Clutter Thresholds in Shared Spaces
24:00 - Navigating Gifts and Sentimentality
25:06 - Introducing 'Winnie's Pile of Pillows'
Keywords: decluttering, organization, parenting, home management, container concept, kids, gifts, clutter threshold, home, podcast
MIT and Stanford professor Alex "Sandy" Pentland, one of the most cited researchers in the world with over 165,000 citations, explains why the real AI advantage isn't smarter models but collective intelligence. It's smarter humans working together with AI as the connective tissue. Drawing from his latest book Shared Wisdom, Pentland reveals the frameworks behind community intelligence and why data ownership, not frontier AI, will determine who wins the next decade.You'll discover:✅ Why "people plus AI" consistently beats AI alone, and the hedge fund evidence that proves it✅ How "AI buddies" are replacing corporate manuals, newsletters, and hallway conversations to keep distributed teams aligned✅ The Deliberation.io tool that makes meetings more than twice as effective by neutralizing power dynamics and keeping groups focused✅ Why a 350,000-person multinational is cutting in-house staff to 150,000 while hiring 100,000 more project-based workers, and how AI enables that shift✅ How a doctor with zero technical background built a hospital operating system in 6 weeks using AI tools✅ The staggering stat: AI costs are dropping by 50% every 3.5 months, a factor of 1,000 over three years, and what that means for personal, on-device AI✅ Why China's Belt and Road and India's Citizen Stack (1.4 billion customers signed up) are quietly winning the global data game while Silicon Valley focuses on frontier models✅ Sandy's provocative proposal: a 10% equity contribution to sovereign wealth funds at company formation, which would have created a $10 trillion US fund if started in 1990⏱️ TIMESTAMPS0:00 Why AI alone loses money: the hedge fund reality check2:07 Shared wisdom, community intelligence, and organizational culture4:25 AI buddies: the brilliant librarian inside your company5:44 Deliberation.io: making meetings 2x more effective7:01 Using AI for exploration and long-range strategic thinking11:29 Who's to blame when AI fails: executives or the machine?14:28 Why AI can't do 
causality and what that means for leaders18:14 AI's killer app for remote work and distributed organizations21:09 A doctor built a hospital OS in 6 weeks: small teams, massive impact24:09 Job displacement, social safety nets, and the sovereign wealth fund idea27:01 Reinventing education: Costa Rica's bet and the MIT Media Lab model32:16 LLMs vs. older AI: why you need both (and the loyalagents.org initiative)37:13 Practical starting points for redesigning work with AI40:16 Misinformation, data provenance, and the billion-dollar North Korea problem48:50 The global data race: China, India, UAE, and why frontier models aren't the game54:00 Cybersecurity warning: agentic AI creates massive new attack surfaces
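The cost-decline figures in the notes above are internally consistent: halving every 3.5 months compounds to roughly a thousandfold over three years, since 36 / 3.5 ≈ 10.3 halvings and 2^10.3 ≈ 1250. A quick check of the compounding:

```python
# Halving every 3.5 months for 36 months: how much cheaper does it get?
months = 36
halving_period = 3.5
factor = 2 ** (months / halving_period)  # on the order of 1,000x
```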
⏳ Why ‘We Already Shared Everything' Still Delays Divorce | Los Angeles Divorce Many couples believe that sharing bank statements and numbers between themselves is enough—but in California divorce, that's not the same as filing proper financial disclosures. In this video, we explain why financial disclosures are mandatory, how informal sharing doesn't satisfy court requirements, and why missing or incomplete disclosures can quietly block final approval. Divorce661 ensures financial disclosures are completed, exchanged, and filed correctly, so agreement doesn't turn into unexpected delays.
Series: The Spirit's Mission Through You - Acts 1-7Sermon: Paradise Lost, Restored, SharedScripture: Acts 5Pastor: David Giese
In this episode, the hosts engage with their community, discussing the upcoming 200th episode meetup and the vibrant Discord community. They share personal experiences from their delivery work, including challenges faced during inclement weather and the importance of safety. The conversation shifts to industry changes, including layoffs and the dynamics of union representation, reflecting on the impact of these changes on workers. The hosts emphasize the significance of community support and shared experiences in navigating the challenges of their profession. In this episode, the hosts discuss the launch of their new merch store, the benefits of their Patreon, and the realities of trucking life, including the often misleading nature of dispatch communications. They also share common grievances from PCMs and engage listeners with humorous 'would you rather' questions in their Doorstep Dilemmas segment. The episode wraps up with a call to action for community engagement and support. Support the Show Join the Discord Channel Takeaways Community engagement is vital for building connections. Personal experiences in delivery can be both humorous and challenging. Safety should always be a priority in delivery work. Industry changes can lead to uncertainty for workers. Union representation is crucial for worker rights. Weather conditions significantly impact delivery operations. Shared experiences among drivers foster a sense of camaraderie. Community support can make challenging days more manageable. Communication with fellow drivers can provide reassurance during tough times. The importance of adapting to changes in the industry. The hosts are launching a merch store with various products. Patreon subscribers receive exclusive content and voting power. Dispatch communications can often mislead drivers about quick stops. Common grievances at PCMs include unnecessary group discussions. Listeners enjoy engaging with 'would you rather' questions. 
Female drivers face unique challenges in a male-dominated industry. Community support is vital for the podcast's continuation. The hosts appreciate their Patreon subscribers and listeners. The importance of safety and proper footwear is emphasized. Listeners are encouraged to participate in Discord discussions. Chapters 00:00 Introduction and Community Engagement 03:13 Personal Experiences and Challenges in Delivery 05:58 Reflections on Safety and Decision Making 08:58 Industry Changes and Layoffs 11:59 Union Dynamics and Worker Rights 15:09 Impact of Weather on Delivery 17:57 Community Support and Shared Experiences 29:57 Merch Store Launch and Community Engagement 33:09 Patreon Benefits and Listener Interaction 34:05 Dispatch vs. Reality: The Truth Behind Quick Stops 35:58 Woes at PCMs: Common Grievances 39:08 Doorstep Dilemmas: Would You Rather Questions 51:58 Wrap-Up and Community Call to Action TOP RATE LEGENDS! Tony & Starla, thank you for the support! THE OPINIONS EXPRESSED OR VIEWS EXPRESSED ON THIS PODCAST ARE THOSE OF THE HOSTS AND GUESTS AND DO NOT NECESSARILY REFLECT ANY DELIVERY COMPANY
Gabby Osio Vanden and Jack Weisman join the show after winning Sundance's Grand Jury Prize to unpack the ten-year road behind Nuisance Bear, a polar bear's journey through two connected worlds: tourist-heavy Churchill, Manitoba, and the Inuit community of Arviat, where the stakes are far more complex and far less welcoming. The film becomes a meditation on coexistence, control, and who gets labeled a “nuisance” in a shared landscape.
We dig into craft and access: finding the right position for the camera so the story can reveal itself, structuring the feature in two halves, and how a dialogue-free short film born partly out of COVID constraints became the proof of concept that unlocked TIFF, The New Yorker, and eventually A24. They also talk candidly about what the audience never sees: rough living conditions, long hours waiting, the specific agony of “the best thing happened, and we missed it,” and the slow but important work of earning trust, where listening comes before filming.
They share influences that shaped them, including Miyazaki's sense of nature and modernity, Gus Van Sant's bravery with form, and John Cassavetes' belief in the energy of a set. The conversation closes on what it meant to experience Sundance as both a career peak and a personal milestone, getting engaged and then married during the festival. Advice to filmmakers: be tenacious when you know you need to tell a story, protect trust like it is part of the craft, and do not turn on each other when the pressure spikes.
What Movies Are You Watching?
This episode is brought to you by BeastGrip. When you're filming on your phone and need something solid, modular, and built for real productions - including 28 Years Later and Left Handed Girl - BeastGrip's rigs, lenses, and accessories are designed to hold up without slowing you down. If you're ready to level up your mobile workflow, visit BeastGrip.com and use coupon code PASTPRESENTFEATURE for 10% off.
Revival Hub is your guide to specialty screenings in Los Angeles - classics on 35mm, director Q&As, rare restorations, and indie gems you won't find on streaming. We connect moviegoers with over 200 venues across LA, from the major revival houses to the 20-seat microcinemas and more.Visit revivalhub.com to see what's playing this week. Acclaimed documentary ROADS OF FIRE is now available on Amazon, iTunes, and Fandango at home. Directed by Nathaniel Lezra, the film won best documentary at the 2025 Santa Barbara International Film Festival. The film examines the migrant crisis here in the States all the way down to Venezuela, and Academy Award nominee Diane Lane calls it "a must-see journey of human dignity." Roads of Fire - now on Amazon, iTunes, Fandango. Introducing the Past Present Feature Film Festival, a new showcase celebrating cinematic storytelling across time. From bold proof of concept shorts to stand out new films lighting up the circuit, to overlooked features that deserve another look. Sponsored by the Past Present Feature podcast and Leica Camera. Submit now at filmfreeway.com/PastPresentFeatureSupport the show Listen to all episodes on Spotify, Apple Podcasts, and more, as well as at www.pastpresentfeature.com. Like, subscribe, and follow us on our socials @pastpresentfeature The Past Present Feature Film Festival - Nov. 20-22, 2026 in Hollywood, CA - Submit at filmfreeway.com/PastPresentFeature
In this lively episode, Kris Yeo and Kyle Hennessy dive into a variety of topics, from the bizarre world of Joe Exotic's prison weddings to their own friendship dynamics and communication quirks. They explore humorous hypotheticals, nostalgic memories and their mutual love for Scary Movie, and even a strange phenomenon involving toenail shedding. The conversation is filled with laughter, personal anecdotes, and a deep understanding of each other's quirks, making for an entertaining and relatable episode.
Takeaways:
Joe Exotic's prison wedding highlights the absurdity of fame.
Friendship dynamics can lead to humorous misunderstandings.
Communication styles can create friction in relationships.
Nostalgia plays a significant role in shaping our humor.
Dark humor can be a bonding experience among friends.
Personal quirks can lead to funny anecdotes.
Shared memories strengthen friendships over time.
Hypotheticals can reveal deeper insights into relationships.
The importance of understanding each other's triggers.
Weird bodily phenomena can spark interesting conversations.
So, let's dive right into the nitty-gritty of servant leadership, shall we? You know, it's all fun and games until you realize that a lack of accountability can turn those so-called “servant leaders” into untouchable demigods. We're not here for a morality contest, folks; we're all human, and that's the point. Today, we're breaking down the BE-COME framework—because, let's face it, who doesn't love a good acronym? It's all about starting fresh, connecting with our people, and keeping each other in check, all wrapped up in love. Because remember, the Church doesn't need flawless leaders; it needs ones who can own their mess-ups and show up for one another. So, stick around, and let's unpack how we can actually make accountability feel like a warm hug instead of a judgmental fist!
Servant leadership is one of the most quoted leadership models in the Church. But if servant leadership is so central to our theology, why do we keep watching leaders fall? In this episode, we examine the dark side of servant leadership—not to tear down leaders, but to tell the truth so the Church can grow healthier. Drawing from a recent discipleship gathering called People of Grace, insights from John Wesley's class meetings, and the BE-COME discipleship framework taught by Sam Barber, this conversation explores why leadership without shared accountability eventually fails. We look at patterns behind recent ministry collapses, the role of isolation in leadership failure, and how churches can recover healthier structures rooted in grace, community, and accountability. Servant leadership works, but only when it is accountable.
KEY THEMES
• The difference between servant language and servant structure
• Why isolation is the most common soil for leadership failure
• John Wesley's model of mutual accountability
• The BE-COME framework for discipleship
• How the early church practiced shared leadership
• Practical steps toward accountable leadership today
SCRIPTURE REFERENCES
Mark 10:42–45 — Whoever wants to be great must be servant
John 13:1–17 — Jesus washes the disciples' feet
Matthew 28:18–20 — The Great Commission
Luke 22:24–27 — Leadership as service
Acts 2:42–47 — Shared life in the early church
Galatians 6:1–2 — Bear one another's burdens
James 5:16 — Confess your sins to one another
Takeaways:
Wesley's concept of accountability in leadership isn't about control, it's about protection and growth.
The BE-COME framework emphasizes the importance of community and personal accountability in servant leadership.
Servant leadership without accountability can lead to disastrous outcomes, as seen in many high-profile ministry collapses.
We can't ignore the reality that isolation distorts leadership and makes it easier for blind spots to grow.
True accountability involves asking hard questions and having people who can challenge us without repercussions.
The church needs leaders who are known and accountable, not just those who appear humble on the surface.
Companies mentioned in this episode:
Dynamic Church Planting International
Gateway Church
IHOP Kansas City
Join us in this episode as Garrett Lovejoy, SVP at Fortune Brands and General Manager of Connected Security, shares insights into the evolving landscape of smart locks, industry standards like Matter, Thread, and UWB, and the future of intelligent home automation. Discover what's next for connected security, design trends, and how AI is democratizing automation creation.
In this episode:
The impact of CES 2026 on smart home industry innovations
How Matter, Thread, and UWB are transforming device connectivity
The role of AI in simplifying automation setup and personalization
Differences and future plans for Yale, August, and other brands in the market
Advances in retrofit solutions for smart blinds, shades, and other accessories
Challenges with multi-home device management in apartments and condos
Emerging smart lock technologies including UWB and Aliro
The user experience evolution with automatic unlocking and tap-based access
The significance of design innovation and brand differentiation
Industry resilience and firmware update strategies for complex home systems
Send us your HomeKit questions and recommendations with the hashtag homekitinsider. Tweet and follow your host at:
@andrew_osu on Twitter
@andrewohara941 on Threads
Email me here
Sponsored by:
Gusto: Try Gusto today at https://gusto.com/homekit, and get three months free when you run your first payroll.
Shopify: Sign up for a one-dollar-per-month trial period at: shopify.com/homekit
HomeKit Insider YouTube Channel
Subscribe to the HomeKit Insider YouTube Channel and watch our episodes every week! Click here to subscribe.
Links from the show:
Multiple Apple Home garage doors in CarPlay
Great projector automation from Aqara forum
SwitchBot Wooden Air Purifier
Yale Assure 2 Lock with Matter
Yale Home Website
Those interested in sponsoring the show can reach out to us at: andrew@appleinsider.com
* Trigger warning - Pregnancy after loss and motherhood discussed during this episode *
In this episode of The Worst Girl Gang Ever, Bex, Laura, Anastasia Shubareva-Epshtein, and Anna Whitehouse (aka Mother Pukka) sit down for an honest, no-filter conversation about pregnancy, motherhood, and the parts we're so often expected to carry quietly - miscarriage, grief, and life after loss.
Anastasia shares how her own journey through IVF and loss led to the creation of Carea, an app designed to support women through pregnancy in a way that reflects real life, not just milestones and happy endings. She talks about how many pregnancy apps fail women the moment things don't go to plan, leaving them feeling unseen and alone at a time when support matters most.
Together, they explore the pressures placed on mothers to “bounce back”, the way postpartum struggles are minimised, and why silence around miscarriage causes so much harm. The conversation centres on the power of community - of being believed, understood, and supported without having to explain yourself.
This episode is a reminder that motherhood isn't one-size-fits-all, grief doesn't follow a timeline, and healing starts when we're allowed to tell the truth.
Foster care doesn't just enter a home; it enters a marriage. With Valentine's Day approaching, this episode of Restoried opens an honest conversation about how foster care changes marriage when court dates, therapy, and survival replace date nights and quiet conversations. Lisa is joined by her husband JJ as they reflect on eight years as licensed foster parents and over a decade supporting children and families in foster care. They share how foster care changed their marriage, from communication and emotional connection to dividing responsibilities and making decisions together. They discuss real challenges, including exhaustion, grief, spiritual warfare, and times of being out of sync. They explore how spouses carry weight differently and how tension can arise if not addressed. The conversation also covers practical realities, including managing responsibilities, navigating caseworker roles, and checking in regularly to avoid resentment. At the heart of the episode is hope. Lisa and JJ share how God used the hardest seasons to strengthen their marriage. Through prayer, shared heartbreak, and choosing each other daily, their marriage grew deeper and anchored in Christ.
Episode Highlights:
Foster care impacts marriage
Balancing grief and logistics
Communication is key
Shared hardships strengthen love
Relying on God
Intentional connection matters
Practical tips for parents
Find More on Hope Bridge:
Visit Our Website
Follow us on Instagram
Follow us on Facebook
Register for Mobilize Ohio 2026
In this episode, the conversation explores what women commonly experience after 40—hormonal shifts, identity changes, and new health concerns—and how these changes impact relationships. Rather than blaming anyone, the focus is on insight, understanding, and practical ways couples can support each other, regulate their nervous systems together, and build deeper, safer connection in this season of life.
Key Points:
Women 40+ face hormone changes and lower libido
Nervous system becomes more sensitive and reactive
Identity shifts as kids need mom less
Need more rest, quiet, and alone time
This is recalibration, not rejection of partners
Women need safety, support, and listening, not fixing
Women are allowed to change and ask for what they need
Shared nervous system regulation (walks, breathwork, rituals)
Prioritize sleep and a calm, tech-free bedroom
Being rested reduces conflict and reactivity
Take ownership of health: food, movement, sleep
Nurture nourishing relationships; drop draining ones
Reduce phone use and comparison to others
Google searches show concern about perimenopause and hormones
Women worry about heart, bone, sleep, and chronic health
Relationships in your 40s will (and should) look different
With communication, this season can deepen connection
Quality of relationships shapes quality of life
Connect with Anna:
Email: annamarie@happywholeyou.com / info@HappyWholeYou.com
Website: www.happywholeyou.com / https://linktr.ee/happywholeyou
Personal Website: www.DrAnnaMarie.com
Instagram: @happywholeyou
Personal Instagram: @Dr.Anna.Marie
Facebook: Happy Whole You
LinkedIn: Anna Marie Frank
Venmo: @happywholeyou
Summary
In this episode, the guys welcome back Greg Rudiger (Resilient Another Day / The Radcast) for a real, needed conversation about mental health in the fire service—and why we must start talking to recruits and new members early, not just after 15–20 years of calls, stress, and life piling up. The crew covers how peer support, resiliency tools, spirituality, and the “power of the pause” can help firefighters stay in the fight—at work, at home, and into retirement.
Greg returns (first appeared on Episode 75) for Episode 145. Matt sets the tone: firefighters are killing themselves—and we need to stop acting like it's not real. The crew agrees this topic should be in fire academies nationwide, not treated as an afterthought.
Patreon update: Patreon is growing fast: 12 members (with a recent surge of new sign-ups). The team discusses possible perks like watching the podcast live during recording via Riverside.
New subscribers shout-outs: Tyler Carson (free member), Tyler Adams, Jesus & Sophia, Social FD (paid supporter).
Merch: New flagship merch drop: shirts/hoodies/hats/long sleeves + specialty designs. Discount code mentioned: “Episode144” for 15% off (limited time).
Sponsors: Unkie's Seasonings, Burn Box / FD Collectors Club, plus love for Blue Collar Firemen.
Greg breaks down the shift toward a proactive approach: if firefighter survival training is 75% prevention, why isn't mental health training the same? They're teaching recruits common language + tools before they ever hit the street: the stress continuum (blue/green/yellow/orange/red), breath work / mindfulness, work-life balance, sleep, nutrition, exercise, and peer support resources and apps. The goal: normalize “Cap, I'm not okay” and make it safe to say.
Brian brings up the tension: some firefighters reject the mental health conversation as “victim mindset.” Greg responds: it's not about weakness—it's about leading with love, listening, and meeting the human. Also discussed: the “weaponization” concern—people claiming mental health issues to avoid accountability—without dismissing anyone who truly needs help.
Greg's point hits hard: you don't carry one tool on the rig—you carry a toolbox. Same for wellness: breath work might work today, but tomorrow it might be running, faith, calling a buddy, stretching, ocean time, etc. The theme: sometimes you have to sit in the uncomfortable long enough to move through it. The crew emphasizes not being afraid of silence—on the mic and in real life. The pause helps you respond instead of react and recognize what you're feeling.
Brian shares scripture and the Footprints poem to underline the spiritual dimension: faith isn't “religion as performance”—it's spiritual grounding and support. Greg ties it into wellness: the spiritual pillar is often the missing piece. Shared theme: we're not meant to carry it alone.
Freddy raises a huge point: retirement can be dangerous for mental health—loss of structure, identity shift, isolation. Greg explains what they're doing: retiree peer support groups, intentional check-ins (personal phone/email, not “department HR”), spouse inclusion, and monthly breakfasts and continued connection.
Matt addresses a correction from a listener regarding the PAH exposure discussion from Episode 144: he clarifies the study measured urinary metabolites, not dermal skin measurements, and reinforces the key takeaway: SCBA use (even for engineers in the hot/warm zone) reduces exposure.
Comment: How are you “par checking” your people? What tools are you using—peer support, faith, exercise, breath work, counseling, retirement groups? If you're struggling: reach out to someone (your crew, peer support, or the podcast team).
Merch: https://the-cool-fireman.myshopify.com/collections
Greg / RAD: resilientanotherday.com
Social: @stay.rad10 (IG/TikTok)
Greg Rudiger, Founder: Resilient Another Day (RAD); Co-host: The Radcast. Offers: resiliency training + help building peer support frameworks (“can't be a prophet in your own town” support)
Support the show
Session: 5 Date of First Use: February 22, 2025 Title: Jesus Shared The Point: Our daily connections with people can be opportunities to share Christ. The Bible Meets Life: America is a diverse country of cultures, ethnicities, worldviews, and preferences. It is our human nature to gravitate toward people who are most like us, but the beauty of the gospel is that it is not only for people “just like us.” The gospel speaks to any culture, any time, and any place. In Acts 17, Paul gave us an example of how to communicate the gospel in any situation. Session Passage: Acts 17:16-18,22-23,30-31
Recorded February 5, 2025. In this conversation, Prof Patrick Geoghegan (Director of the Trinity Long Room Hub) and Christian Du Pont (Burns Librarian, Boston College) explore the long-standing and evolving collaborations between Trinity College Dublin and Boston College, with a particular focus on the legacy of Cuala Press and the work of the Yeats sisters. They discuss how shared collections, archival partnerships, and transatlantic relationships help preserve and reinterpret Ireland's literary and artistic heritage, shedding light on the cultural significance of Cuala Press publications, book design, craftsmanship, and the broader networks that continue to shape Irish studies today. A thoughtful exploration of how libraries, archives, and institutions collaborate across borders to keep literary history alive and accessible. Learn more at www.tcd.ie/trinitylongroomhub
The Murder of Police, Our Careers in Baltimore, Maryland. Special Episode. Being a cop in Baltimore, Maryland has never been just a job. For generations of officers, it has been a test of resolve carried out in one of America's most violent cities, where the murder of police officers was not an abstract fear, but a lived reality. The streets remembered everything, even when time moved on. Find the Law Enforcement Talk Radio Show and Podcast on social media like Facebook, Instagram, LinkedIn, Medium and other platforms. For John Jay Wiley, the host of the Law Enforcement Talk Radio Show and Podcast, also a retired Baltimore police officer, that reality resurfaced decades later through a candid conversation with retired Baltimore Police Detective Gary McLhinney. Shared across Facebook, Instagram, YouTube, Spotify, Apple, and other social media and media platforms as part of a podcast, the discussion centered on a crime that forever shaped their careers: the murder of Baltimore Police Officer Vincent J. Adolfo. This Special Episode of the Podcast is available and shared for free on the Law Enforcement Talk Radio Show and Podcast website, also on Apple Podcasts, Spotify, YouTube and most major podcast platforms. “This was something I carried with me from 1985,” John Jay Wiley, the retired Baltimore Police Sergeant said. “It stayed buried, but it was never gone.” Supporting articles about this and much more from Law Enforcement Talk Radio Show and Podcast are on platforms like Medium, Blogspot and LinkedIn. The Murder of Police Officer Vincent J. Adolfo On November 18, 1985, Officer Vincent J. Adolfo of the Baltimore Police Department was performing routine police work in a city already known for violence. That night, officers attempted to stop a stolen vehicle. The suspect vehicle rammed another patrol car, and all occupants fled on foot.
Officer Adolfo pursued one suspect into Iron Alley. “He thought the suspect was surrendering,” the retired officer explained. “That's what makes this so hard to accept.” As Officer Adolfo approached, the suspect suddenly produced a .357 caliber handgun and opened fire. Officer Adolfo was struck in both the chest and the back. At the time, his department-issued ballistic vest contained only a front panel, capable of stopping rounds up to .38 caliber. Available for free on the Law Enforcement Talk Radio Show and Podcast website, also on Apple Podcasts, Spotify, YouTube and most major podcast networks. “The equipment wasn't what it is today,” Gary McLhinney said. “He never had a chance.” Officer Adolfo died from his wounds, becoming another name etched into Baltimore's long and painful history of officers killed in the line of duty. The suspect fled the state and was later apprehended in Oklahoma. He was extradited back to Maryland, convicted, and ultimately executed in 1997 for the murder. A Crime That Followed Careers for Decades The murder of Officer Adolfo connected two men who would later reflect on their careers from retirement, men who had never worked together, yet shared the same burden. Retired Baltimore Police Detective Gary McLhinney played a critical role in helping his former colleague, the radio and podcast host, confront unresolved guilt and regret. Look for The Law Enforcement Talk Radio Show and Podcast on social media like Facebook, Instagram, LinkedIn, Medium and other platforms. “Gary helped me finally put things to rest,” John Jay Wiley said. “He understood because he lived it too.” Both men served during an era when killing police officers in Baltimore was not rare. It was a time when violent crime surged, fueled first by heroin in the 1970s and later by crack cocaine in the 1980s and early 1990s. “You didn't count years by calendars,” Gary McLhinney said. 
“You counted them by funerals.” Policing One of America's Most Violent Cities Baltimore City, an independent city under the Maryland Constitution since 1851, has long struggled with crime rates well above the national average. With a population of more than 585,000 at the 2020 census and part of a metropolitan area exceeding 2.8 million residents, Baltimore's challenges have been both urban and systemic. Available for free on their website and streaming on Apple Podcasts, Spotify, YouTube and other podcast platforms. In 1993, the city recorded a peak of 353 homicides, during a period when the population was nearly 130,000 higher than it is today. In 2019, Baltimore recorded 348 killings, nearly matching that grim record. Though the city saw a sharp decline to 201 homicides in 2024, the scars of decades of violence remain. “These numbers don't tell the whole story,” Gary McLhinney said. “They don't show the officers who went home different, or didn't go home at all.” The decline in homicide rates in 2011, when killings dipped below 200 for the first time since 1978, was credited to focused enforcement on repeat violent offenders and increased community engagement. But the gains proved fragile. Homicides climbed again in 2012 and 2013, defying national trends and reinforcing the unpredictable nature of violent crime in Baltimore. Gary McLhinney's Career and Leadership Gary McLhinney came from a family of firefighters but chose a different calling. “He wanted to be a Baltimore City police officer,” his colleague said. “That's where his heart was.” McLhinney loved the job and the people he served alongside. After retiring from the Baltimore Police Department, he was appointed Chief of the Maryland Transportation Authority Police. 
In that role, he oversaw security for the Port of Baltimore, BWI Marshall Airport, and the state's bridges, tunnels, and toll roads, particularly during the tense years following the September 11 terrorist attacks. It is discussed across news platforms and shared on Facebook, Instagram, LinkedIn, Apple, and Spotify, where true crime audiences continue to get their content. “Those were years where the weight of responsibility never let up,” McLhinney said. “But Baltimore prepared us for that.” Preserving the Stories in a Book McLhinney later turned his attention to preserving the stories of officers lost in the line of duty. Along with renowned journalist and author Kevin Cowherd, he co-wrote Bleeding Blue: Four Decades Policing the Violent City of Baltimore. “The book isn't about glory,” McLhinney said. “It's about remembering the men and women who paid the ultimate price.” The book documents decades of violence, sacrifice, and resilience within the Baltimore Police Department. Portions of the proceeds benefit the Signal 13 Foundation, a nonprofit established in 1983 to support Baltimore police officers and their families through financial hardship grants and scholarships. You can find the show on Facebook, Instagram, Pinterest, X (formerly Twitter), and LinkedIn, as well as read companion articles and updates on Medium, Blogspot, YouTube, and even IMDB. Additional proceeds support Concerns of Police Survivors (C.O.P.S.), a national 501(c)(3) organization founded in 1984 that now serves more than 87,000 survivors nationwide. Supporting Survivors After the Headlines Fade C.O.P.S. provides peer support, counseling, scholarships, survivor weekends, youth camps, trial and parole support, and training for law enforcement agencies on how to respond after the loss of an officer. 
“The agency response matters,” the retired officer said. “It shapes how families survive the aftermath.” C.O.P.S. chapters operate in all 50 states, with national survivor programs administered from Camdenton, Missouri. Funding comes from donations, grants, and continued public awareness—often driven by news, podcast, and social media exposure. Available for free on the Law Enforcement Talk Radio Show and Podcast website, also on Apple Podcasts, Spotify, YouTube and most major podcast networks. Why These Stories Still Matter Today, these conversations live on across Facebook, Instagram, LinkedIn, YouTube, Spotify, Apple, and other media platforms, not as nostalgia, but as testimony. “The murder of police officers doesn't end with the trial,” the retired officer said. “It follows careers, families, and cities for generations.” By revisiting the murder of Officer Vincent J. Adolfo, the realities of policing Baltimore, and the bonds formed through shared trauma, this story serves as both remembrance and warning. It honors the fallen, supports the living, and reminds the public that behind every statistic is a name, a badge, and a life that mattered. Find a wide variety of great podcasts online at The Podcast Zone Facebook Page, look for the one with the bright green logo. Be sure to check out our website. Be sure to follow us on X, Instagram, Facebook, Pinterest, LinkedIn and other social media platforms for the latest episodes and news. Background song Hurricane is used with permission from the band Dark Horse Flyer. You can contact John J. “Jay” Wiley by email at Jay@letradio.com, or learn more about him on their website. Attributions: Amazon, Signal 13 Foundation, Concerns of Police Survivors C.O.P.S., Officer Down Memorial Page. Hosted by Simplecast, an AdsWizz company. 
See pcm.adswizz.com for information about our collection and use of personal data for advertising.
This week - Kevin and Sam recap the Nintendo Direct Partner Showcase, go over PlayStation Q3 Finances, share our thoughts about the new Horizon game and chat about Valve delaying the Steam Machine.
Time Stamps:
0:00 Intro & Whatcha Playing
27:00 Nintendo Direct Recap
52:00 PlayStation Financials
1:02:00 Horizon Hunters Gathering
1:19:00 Steam Machine Delayed
Support Us on Patreon: https://www.patreon.com/SaveTheGameMedia
Follow Us:
STGM: https://bsky.app/profile/savethegamemedia.bsky.social
Kevin: https://bsky.app/profile/themuff1nmon.bsky.social
Sam: https://bsky.app/profile/samheaney.bsky.social
Join our Discord: https://discord.gg/89rMmfzmqw
Support our Extra Life: https://www.extra-life.org/participant/SaveTheGameMedia
All music created by the amazing Purple Monkey: https://linktr.ee/pme.jib
#Nintendo #NintendoDirect #Orbitals #PS5 #PlayStation #Steam #Valve #SteamMachine #Gaming #Podcast
We hope this message in our series "Exodus - Encountering God" is impactful and uplifting in your walk with Jesus! If you would like to dive deeper, check out the link below. Stay in touch with us on Instagram | Facebook | Spotify - True Hope Church. Check out our Website: https://www.truehopechurch.org
"Shared experience begins with shared humanity." Hebrews 2:16-18
The survival and restoration of the nation of Israel is one of the most outstanding and thought-provoking confirmations of Bible prophecy. In this Watchman Report, we explore the inspiring and exceptional history of God's chosen people. From their ancient promises to Abraham, through centuries of dispersion and persecution, to their miraculous regathering in 1948, the story of Israel stands as a powerful witness to the reality of God and the truth of His Word. This presentation offers an insightful, Scripture-based exposition of why Israel exists today against all odds and what this means for our understanding of biblical prophecy.
**Chapters:**
00:00 - Introduction
00:28 - The Modern Phenomenon of Israel
01:08 - The Ancient Promise to Abraham
02:20 - The Warning of Scattering
03:32 - Disobedience and Exile
04:09 - Persecution and Survival Through History
05:25 - The Holocaust and National Rebirth
06:30 - The War for Independence
07:18 - A Miraculous Victory
08:05 - Modern Conflicts and Survival
08:27 - Bible Prophecy Fulfilled
09:51 - Israel as God's Witness
10:35 - The Promise of Future Fulfillment
11:11 - Conclusion
**Bible Verses Featured:**
What many are calling a new low for Trump, and those are the Republicans, after a video post that included racist imagery of the Obamas as apes. Plus, the FBI surges teams to Arizona in the search for Nancy Guthrie. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Morning news headlines, feds and Nike, will equal opportunity really mean that? Shared my experience with the EEOC 35 years ago. State Sen. Noah Robinson updates the legislative session, what about the tax referendum, is there fight in the GOP?
What a whopper I have for you today! This one is especially poignant if you have Saturn in Aries because your (OUR) return is coming babe. February 13th, Saturn officially re-enters Aries after a 29-year absence. In today's episode we explore the Kronos wound, adultism (the resentment adults feel towards the young), and generally how to avoid becoming a crotchety old person and instead someone at peace with time! Shared @ end: did you take my dare? Share in the comments how it went! Or as always feel free to email me at JENAOCEAN@GMAIL.com. I'm off Instagram right now to re-establish my relationship with the app through some time apart. In the MEANTIME: - you're invited to my show, When the Rain Stops Falling (tickets linked there!) - my episode of Naked and Afraid comes out March 1st! Preview is linked here. Like this? Make sure you're following the show! Next week I have a special guest on for V Day! You won't want to miss. Clue: Word Witch
White House removes a video shared by President Donald Trump that included images showing former President Barack Obama and former first lady Michelle Obama as apes, after Republican and Democratic lawmakers denounced the posting as "racist" and "offensive"; Attorney General Pam Bondi announces the arrest of a suspect in the 2012 terrorist attack on the U.S. Embassy in Benghazi, Libya that killed four Americans; U.S. holds indirect talks with Iran in Oman, but no major breakthroughs announced; Federal Reserve Vice Chair Philip Jefferson says he is "cautiously optimistic" about the 2026 economic outlook, expects growth to stay slightly above recent trends, the labor market to stabilize and inflation heading back down to the Fed's 2% target; On Wall Street, Dow jumps 1,000 points to close above 50,000 for the first time; Democrats in New Jersey call out President Trump for holding up billions of dollars for the Gateway Tunnel transit project, reportedly because Democrats did not agree to name Penn Station in NYC and Dulles Airport in Virginia after him; former House Speaker Kevin McCarthy (R-CA) speaks at a Ronald Reagan 115th birthday celebration in California about Reaganomics; Vice President JD Vance meets with the Italian Prime Minister Giorgia Meloni ahead of the Winter Olympics opening ceremony; Maryland's Senate takes an official vote on who will win Sunday's Super Bowl. Learn more about your ad choices. Visit megaphone.fm/adchoices
New @greenpillnet pod out today!
Ryan Milligan is the VP of Sales at QuotaPath, where he leads sales within a unified go-to-market organization alongside marketing and revenue operations. His career spans Wayfair, data science, performance marketing, and RevOps, giving him a systems-driven approach to sales leadership. Ryan focuses on using clear incentives, aligned teams, and data-backed planning to drive revenue retention and sustainable growth. Ryan's catalyst has been learning how to connect systems, people, and incentives to create better outcomes for both reps and the business. Three Key Quotes from Ryan Milligan On clarity and results "What follows that clarity is a level of focus, and that focused attention actually leads to results." On compensation as a tool "You can actually use the comp plan to change how your team's performing and operating daily." On career growth "A lot of the biggest amount of growth I've had in my career has been in identifying those projects that don't live in the bullet point list of things that they hired me to do." Ryan Milligan, VP of Sales at QuotaPath, explains how RevOps thinking can be a catalyst for better sales leadership, smarter quota setting, and stronger revenue retention. This episode covers comp plan design, cross-functional alignment, rep confidence, and how data-driven sales teams win in today's market. VP of Sales, quota setting, RevOps, revenue retention, sales leadership, compensation plans Five Key Takeaways: Find Your Catalyst 1. Compensation Can Be a Catalyst for Sales Behavior Comp plans should guide daily actions Simple plans drive clearer decisions Bigger incentives change real behavior 2. Better Customers Create Better Retention Sales shapes retention before the deal closes Quality pipeline matters more than volume Incentives should reward long-term fit 3. Career Growth Is Not a Straight Line Raise your hand for cross-team projects Small initiatives build big skills Curiosity creates new opportunities 4. 
RevOps Thinking Makes Better Sales Leaders Look at the whole system, not just deals Data helps remove emotion from decisions Neutral thinking builds trust across teams 5. Alignment Is the Catalyst for Scale Assume positive intent across teams Talk directly to solve problems faster Shared goals reduce silos
See omnystudio.com/listener for privacy information.
What do people who recover from cancer, autoimmune disease, and neurological illness all have in common at the table? In this episode of Renegade Remission, we explore the universal food themes that show up again and again in real remission and long-term healing stories, regardless of diagnosis. From spontaneous remission literature to functional medicine case reports to survivor testimonies, the details may differ, but the nutritional patterns are strikingly consistent.
You will hear a medically documented case where diet played a central role in reversing severe autoimmune disease, and you will learn why these same food principles support healing across cancer, MS, lupus, heart disease, and other chronic or terminal conditions. This is not about a fad diet or rigid rules. It is about how the body responds when inflammation lowers, metabolism stabilizes, and the immune system finally has the resources it needs to do its job.
In this episode, you will discover:
The six nutrition patterns shared across real remission and unexpected recovery stories
Why ultra-processed foods consistently show up as a barrier to healing
How nutrient density supports immune accuracy, mitochondrial function, and cellular repair
Why gut health and microbiome diversity matter across every major disease category
How stabilizing blood sugar reduces inflammation and stress hormones
Why gentle fasting and digestive rest appear so often in healing journeys
Most importantly, you will learn why healing diets are not about perfection or restriction, but about reducing burden and increasing nourishment in ways your body can actually sustain.
Listen now to understand what real remission stories have in common when it comes to food, and how you can begin supporting your own healing biology one simple, compassionate step at a time. 
This episode will help you move away from confusion and toward clarity, without overwhelm or pressure.
Disclaimer: This podcast is for educational purposes only and does not offer medical advice. Consult your licensed healthcare provider before making any changes to your treatment or health regimen. Reliance on any information provided is solely at your own risk.
This podcast explores stories and science around ALS, dementia, MS, cancer, mind body recovery, healing, functional medicine, heart disease, regression, remission, integrative medicine, autoimmune conditions, chronic illness, terminal disease, terminal illness, holistic health, quality of life, alternative medicine, natural healing, lifestyle medicine, and remission from cancer, offering hope and insights for those seeking resilience and renewal.
Welcome to the RPGBOT.Podcast, where today's character creation lesson begins with basic geometry, escalates into psychic powers, and somehow ends with a pacifist circus bear being seriously considered as a build option. In this episode, we take the gloves off and actually make characters for Pulp Cthulhu—choosing archetypes, rolling stats, hoarding skill points like goblins, and discovering that if you roll too well, you might accidentally invent the world's first telepathic himbo artist. If you've ever wondered how Call of Cthulhu character creation becomes fast, fun, and dangerously powerful, this is where the pulp really starts to flow. The D8 goes in the D8 hole. Show Notes This episode walks step-by-step through Pulp Cthulhu character creation, showing how investigators are built to be tougher, broader, and far more cinematic than their classic Call of Cthulhu counterparts. Ash guides Tyler and Randall through the full process—then breaks it down into a Quick & Dirty method that can get players to the table in minutes. Step 1: Choose an Archetype Archetypes replace traditional "classes" and are rooted in classic pulp fiction roles: Mystic (psychic powers, occult insight, vibes) Egghead (engineers, scientists, gadgeteers) Two-Fisted, Swashbuckler, Femme Fatale, Bon Vivant, and more Each archetype: Defines a core characteristic Grants bonus archetype skills Suggests traits, occupations, and story hooks This approach encourages concept-first design, letting the character idea drive the mechanics instead of the other way around. Step 2: Generate Characteristics Attributes are rolled using the familiar D100 roll-under system, but with a key twist: Core characteristic = (1d6 + 13) × 5 (expect very high numbers) Other stats use 3d6 × 5 or (2d6 + 6) × 5 High pulp means exceptional competence The result? 
Characters who feel powerful immediately—sometimes too powerful, leading to delightful accidents like rolling: Incredible Power Solid looks Questionable intelligence (Yes, the "himbo build" is real.) Step 3: Talents (High Pulp Edition) Because this game is running High Pulp, characters receive four talents instead of two. Talents are drawn from four categories: Physical Mental Combat Miscellaneous Highlights from the episode include: Psychic Powers Arcane Insight Weird Science Animal Companion (responsibly downgraded from "bear" to "bear-adjacent dog") Talents dramatically define how characters play and reinforce pulp action over fragile realism. Step 4: Occupation & Skill Points Occupations grant massive skill point pools, often hundreds of points: Skills start with base percentages Occupational skills come first Archetype skills add another 100 points Personal interest skills add even more The result is wide, competent characters instead of hyper-specialized glass cannons. The episode includes practical advice: Avoid pushing every skill to 95 Aim for flexibility, not just peak numbers Remember Credit Rating is mandatory and matters in play Step 5: Backstory (Fast but Meaningful) Instead of long essays, Pulp Cthulhu uses structured prompts: Personal description (biased, first-person) Ideology and beliefs Significant people Treasured possessions Traits Random tables spark instant character hooks, like: Idolizing Nikola Tesla Carrying calipers as a grounding object Shared trauma bonds Risk-taking or unreliable personalities One key backstory element becomes your Sanity anchor, helping characters recover from mental trauma. Quick & Dirty Character Creation Ash closes the episode with a streamlined alternative: Assign preset stat values Pick talents Select skills from fixed arrays Roll backstory details Start playing immediately Perfect for one-shots, convention play, or groups eager to punch cultists now, not in two hours. 
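The dice math in Step 2 can be sketched in a few lines of code. This is a minimal illustration of the rolls described above — core characteristic at (1d6 + 13) × 5, others at 3d6 × 5 or (2d6 + 6) × 5; which named characteristic gets which dice pool here is an assumption for the example, not the book's exact assignment:

```python
import random

def roll(n_dice, sides, add=0, mult=5):
    """Roll n_dice dice, add a flat bonus, then multiply for a percentile stat."""
    return (sum(random.randint(1, sides) for _ in range(n_dice)) + add) * mult

def pulp_investigator_stats(core_name="POW"):
    """Sketch of the stat rolls described above. The mapping of stat names to
    dice pools is illustrative only (an assumption, not the official table)."""
    stats = {core_name: roll(1, 6, add=13)}   # core: (1d6 + 13) x 5, range 70-95
    for name in ("STR", "CON", "DEX", "APP"):
        stats[name] = roll(3, 6)              # 3d6 x 5, range 15-90
    for name in ("SIZ", "INT", "EDU"):
        stats[name] = roll(2, 6, add=6)       # (2d6 + 6) x 5, range 40-90
    return stats
```

Running it a few times makes the "expect very high numbers" point concrete: the core characteristic can never land below 70, which is why pulp investigators feel powerful from the first session.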
Key Takeaways Pulp Cthulhu character creation is fast, flexible, and cinematic Archetypes replace classes with strong narrative identity High Pulp characters start powerful and stay relevant Talents are the heart of customization Skill points are plentiful—breadth is rewarded Structured backstory tools create instant roleplay hooks The Quick & Dirty method gets you playing in minutes Yes, you can accidentally build a psychic himbo—and that's a feature Welcome to the RPGBOT Podcast. If you love Dungeons & Dragons, Pathfinder, and tabletop RPGs, this is the podcast for you. Support the show for free: Rate and review us on Apple Podcasts, Spotify, or any podcast app. It helps new listeners find the best RPG podcast for D&D and Pathfinder players. Level up your experience: Join us on Patreon to unlock ad-free access to RPGBOT.net and the RPGBOT Podcast, chat with us and the community on the RPGBOT Discord, and jump into live-streamed RPG podcast recordings. Support while you shop: Use our Amazon affiliate link at https://amzn.to/3NwElxQ and help us keep building tools and guides for the RPG community. Meet the Hosts Tyler Kamstra – Master of mechanics, seeing the Pathfinder action economy like Neo in the Matrix. Randall James – Lore buff and technologist, always ready to debate which Lord of the Rings edition reigns supreme. Ash Ely – Resident cynic, chaos agent, and AI's worst nightmare, bringing pure table-flipping RPG podcast energy. Join the RPGBOT team where fantasy roleplaying meets real strategy, sarcasm, and community chaos. How to Find Us: In-depth articles, guides, handbooks, reviews, news on Tabletop Role Playing at RPGBOT.net Tyler Kamstra BlueSky: @rpgbot.net TikTok: @RPGBOTDOTNET Ash Ely Professional Game Master on StartPlaying.Games BlueSky: @GravenAshes YouTube: @ashravenmedia Randall James BlueSky: @GrimoireRPG Amateurjack.com Read Melancon: A Grimoire Tale (affiliate link) Producer Dan @Lzr_illuminati
Send us a text
We explain how pelvic ultrasound, saline infusion sonography and ORADS scoring turn confusing reports into clear next steps for cysts, bleeding, and polyps. We share when to watch, when to act and why expert interpretation reduces anxiety and unnecessary tests.
• Types of pelvic ultrasound and when each is used
• How saline infusion sonography reveals cavity lesions
• Benefits of gynecologic imagers vs general radiology
• Why image quality and timing affect accuracy
• Preparing for scans, full bladder and cycle days
• Ovarian cyst basics and common myths
• ORADS scoring and what each level implies
• Postmenopausal bleeding thresholds and actions
• When hysteroscopy is the gold standard
• Managing cervical stenosis and procedural comfort
• New tech: HyCoSy for tubal patency
• Shared decision-making and practical follow-up
Support the show
We open this evening's proceedings with ‘The Grand Mausoleum', an original story by KeyDeeDee, kindly shared directly with me for the express purpose of having me exclusively narrate it here for you all. https://www.reddit.com/user/KeyDeeDee/ Our second scary story is ‘There's Something Between the Gears', an original work by Whitix; a story shared with me via the Creepypasta Wiki and read here under the conditions of the CC-BY-SA license: https://creepypasta.fandom.com/wiki/User:Whitix We continue with ‘The Treatment of Aaron Nelms', an original work by Carlos Pandiella; shared directly with me for the express purpose of having me read it here for you all: https://www.reddit.com/user/Panda_Tech_Support/ Today's next offering is ‘Why Vera Doesn't Jog at Night Anymore', an original story by Scribaphobia, kindly shared directly with me for the express purpose of having me exclusively narrate it here for you all. https://www.reddit.com/user/scribaphobia/ Tonight's fifth story is ‘To the Future Buyer of This House, You Need to Know Why The Closet Door is Boarded Shut.', an original story by J.P. Marley, again kindly shared with me for the express purpose of having me exclusively narrate it here for you all. https://www.reddit.com/user/jpmarley/ Today's fantastic penultimate offering is ‘Killing My Childhood Monster Was Easier Than I Thought', an original work by nerdxcorexneal, once more kindly shared directly with me for the express purpose of having me exclusively narrate it here for you all. https://www.reddit.com/user/nerdxcorexneal/ Today's final phenomenal story is ‘I Painted Something That Shouldn't Exist', an original work by Amelie C. Langlois, again kindly shared directly with me for the express purpose of having me exclusively narrate it here for you all. https://www.reddit.com/user/AmelieCLanglois/
If you'd like to feel some hope and know that you are among people who, like ourselves, are searching for solutions that support everyone, this is your conversation. Ashland, Massachusetts Chief of Police Cara Rossi spent a wonderful hour talking with me today, and I'm so excited to bring you our conversation. Shared with lots of gratitude to Cara – and to you for listening and for sharing your questions. (We had lots of great questions! Thank you so much.) Nuts & Bolts Questions and Thoughts: If you've got further questions or thoughts for Chief Rossi, please direct them to me at my contact page and I will pass them along: https://kaylockkolp.com/contact Rebels for Peace: Cara and I briefly talk about the wonderful youth-led nonprofit supporting South Side Chicago, Rebels for Peace. See more about them and what they do at their lovely website, which I honestly sometimes visit just to feel their amazing vibe: https://www.rebelsforpeace.org/ My Brain & Me Workshop: I am co-facilitating a workshop along with wonderful coach and very dear colleague Sarah-Jayne Juniper, coming up at the end of February. It's about making peace with our internal operating system. This feels important now, more than ever. Click here to read more about it. ACWB on YouTube: View the Art Creativity & Wellbeing show on YouTube here. Thank you for being here. Sending you much love – Kay Art Creativity & Wellbeing is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit kaylockkolp.substack.com/subscribe
We're always looking for ways to save you money, and with Valentine's Day right around the corner, it's easy to go over budget, that is, if you set one at all. For those trying to dial back spending this year, we're sharing a few ways to celebrate Valentine's Day without having to take out that credit card. Links: Check out TCU University for financial education tips and resources! Follow us on Facebook, Instagram and Twitter! Learn more about Triangle Credit Union Transcript: Welcome to Money Tip Tuesday from the Making Money Personal podcast. Valentine's Day often comes with a lot of pressure: lavish dinners, expensive gifts, and grand gestures that can leave your wallet feeling less than romantic. But here's the truth: love doesn't have to come with a price tag. In fact, some of the most meaningful ways to show you care cost absolutely nothing. If you're looking to celebrate without swiping your credit card, or at least minimizing its use, here are five heartfelt ideas that prove thoughtfulness beats extravagance every time. The Power of Personal Touch Nothing says “I love you” quite like words straight from the heart. Instead of buying a pricey card, write a heartfelt letter or even a poem. Share your favorite memories, what you appreciate most about your partner, and your hopes for the future. It's easy to splurge on candies or flowers, but if you're cutting back on buying things this year, making something yourself can be just as nice. A handwritten note feels personal and timeless, something they can treasure for years. Beyond a handwritten letter or note, crafting something unique from scratch can be just as meaningful. For the creative folks, browse ideas online on a site like Pinterest, where you'll find plenty of thoughtful homemade Valentine's crafts and gifts for your special someone. Quality Time Over Price Tags Sometimes, the best gift is your undivided attention.
Plan a tech-free evening where you both disconnect from screens and focus on each other. Screens often distract us from the present moment, robbing us of the true value of being together. Make the holiday more special by offering your undivided attention and simply spending time together. If you're unsure what to do, try cooking a meal together using what you already have in the pantry. You could also take a walk under the stars and talk about your dreams, or dust off a board game or card deck for some friendly competition. Quality time strengthens emotional bonds and creates lasting memories, no receipt required. Acts of Service Speak Volumes Love isn't just about words; it's about actions. Doing something that makes your partner's life easier can be incredibly romantic. For busy partners who don't have time to tackle everyday tasks, those chores can pile up and become a real source of stress. This Valentine's Day, something as simple as an act of service might be exactly the gift they've been hoping for. One idea is to make your significant other breakfast in bed. You could tackle a chore they've been dreading, or organize a space they use often, like their desk or closet. To raise the stakes, pick one task the night before and surprise them by morning. Acts of service are beautiful gifts because they show thoughtfulness and effort, which often mean more than any store-bought gift. Create Something Together Shared creativity can be a powerful bonding experience. Instead of buying something, make something together. If your significant other is crafty and values building or creating, this may be the perfect Valentine's Day gift. You could curate a playlist of songs that remind you of each other.
You might enjoy baking cookies or trying a new recipe together or you could even start a photo album or scrapbook of your favorite moments. Collaborative projects create fun, laughter, and a sense of accomplishment—plus, you'll have a keepsake to look back on. Memory-Making Experiences Experiences often outshine material gifts. Plan a free adventure that gets you out of your routine. Depending on where you live, this option might be teeming with possibilities. If the weather is good, go for a nature walk or hike in a local park. Check out free museum days or community events going on in the area. If you'd prefer not to leave the house, have a picnic at home or in your backyard with homemade snacks. Shared experiences deepen your connection, give you stories to tell for years to come, and will be worth their weight in memories. Love isn't measured in dollars—it's measured in effort, thoughtfulness, and time. This Valentine's Day, skip the stress of overspending and focus on what really matters: making your partner feel valued and appreciated. Try one (or all) of these ideas and see how meaningful a no-cost celebration can be. Have a budget-friendly Valentine's idea not mentioned in this tip? Go ahead and share it with your family, friends, or on social media—you might inspire someone else to celebrate love without breaking the bank. If there are any other tips or topics you'd like us to cover, let us know at tcupodcast@trianglecu.org. Also, remember to like and follow our Making Money Personal Facebook and Instagram to share your thoughts. Finally, remember to look for our sponsor, Triangle Credit Union, on Facebook and LinkedIn. Thanks for listening to today's Money Tip Tuesday. Check out our other tips and episodes on the Making Money Personal podcast.
Sharing joy isn't just a phrase. It's not a slogan. It's a lived experience. It's what happens when someone who once received help becomes someone who helps. It's what happens when hope doesn't stop at relief—but continues through healing, takes root, and then moves outward. Help. Healing. Hope. Helping. That full circle—that's what this season is about. Over the coming weeks, you'll hear stories from people who know that circle intimately. People who came to The Salvation Army at moments when life felt overwhelming, uncertain or broken. People who were met with care, dignity and compassion—and who are now giving back in meaningful, often very ordinary ways. These aren't glossy stories. They're honest ones. Stories of second chances that took time. Stories of dignity restored slowly. Stories where joy didn't erase the hard parts—but grew alongside them. And that matters, because joy isn't about pretending life is easy. Joy is about meaning. It's about discovering that even in the middle of difficulty, something good can take root. EPISODE SHOWNOTES: Read more. FIND YOUR STORY. Get the email course. BE AFFIRMED. Get the Good Words email series. WHAT'S YOUR CAUSE? Take our quiz. BE INSPIRED. Follow us on Instagram. DO GOOD. Give to The Salvation Army.
Subscribe to Greg Fitzsimmons: https://bit.ly/subGregFitz Josh is the new Karen, a man gets intimate with his vacuum cleaner and Bill Belichick gets snubbed. From brutally honest stories about bombing on stage and the mental toll of stand-up comedy, to wild confessions about bad parenting, cheating exes, petty revenge, and Florida-level insanity, this episode spirals fast and never lets up. The guys rip through everything: pepper-sprayed protesters, Golden Globe pay-to-play politics, Jackie Robinson and sports history, the worst waiter in America, toxic tipping culture, relationship revenge psychology, and why ignoring your ex might be the most devastating move of all. Fitz drops legendary roast stories, unhinged analogies, and brutally funny takes on fame, failure, and modern culture. Dark, hilarious, uncomfortable, smart, and wildly unpredictable. If you love raw comedy podcasts where comedians say the quiet part out loud, this is one of the most chaotic Sunday Papers episodes yet. SPONSORS & SUPPORT THE SHOW Kalshi – The Largest Prediction Market in the U.S. Trade real-world events like elections, sports, weather, and pop culture outcomes. Get $10 to start trading when you sign up using the link below. https://kalshi.com/papers Fabric by Gerber Life – Term Life Insurance Made Simple. Protect your family with affordable term life insurance in minutes. No health exam required. https://www.meetfabric.com/papers Quo – The Smarter Business Phone System. Never miss a call, text, or customer again. Shared numbers, AI summaries, and team messaging in one app.
Get 20% off your first six months. https://quo.com/papers Hosted by Greg Fitzsimmons & Mike Gibbons. Topics include: stand-up bombing, cancel culture, parenting disasters, cheating revenge, Florida chaos, sports history, and absurd news breakdowns. Follow Greg Fitzsimmons: Facebook: https://facebook.com/FitzdogRadio Instagram: https://instagram.com/gregfitzsimmons Twitter: https://twitter.com/gregfitzshow Official Website: http://gregfitzsimmons.com Tour Dates: https://bit.ly/GregFitzTour Merch: https://bit.ly/GregFitzMerch “Dear Mrs. Fitzsimmons” Book: https://amzn.to/2Z2bB82 “Life on Stage” Comedy Special: https://bit.ly/GregFitzSpecial Listen to Greg Fitzsimmons: Fitzdog Radio: https://bit.ly/FitzdogRadio Sunday Papers: http://bit.ly/SundayPapersPod Childish: http://childishpod.com Watch more Greg Fitzsimmons: Latest Uploads: https://bit.ly/latestGregFitz Fitzdog Radio: https://bit.ly/radioGregFitz Sunday Papers: https://bit.ly/sundayGregFitz Stand Up Comedy: https://bit.ly/comedyGregFitz Popular Videos: https://bit.ly/popGregFitz Learn more about your ad choices. Visit megaphone.fm/adchoices